Leaner, Greener and Faster PyTorch Inference with Quantization
Description:
Discover the power of quantization in PyTorch for optimizing neural networks in this comprehensive conference talk. Learn how to transform FP32 parameters into integers without sacrificing accuracy, resulting in leaner, greener, and faster models. Explore the fundamentals of quantization, its implementation in PyTorch, and various approaches available. Gain insights into the benefits and potential pitfalls of each method, enabling informed decision-making for specific use cases. Follow along as the speaker demonstrates the application of quantization techniques on a large non-academic model, showcasing real-world effectiveness. Presented by Suraj Subramanian, a developer advocate and ML engineer at Meta AI, this talk offers valuable knowledge for enhancing PyTorch inference performance.
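The core idea the talk builds on, mapping FP32 values to integers via a scale and zero-point, can be sketched in plain Python. This is a minimal illustration of affine (asymmetric) 8-bit quantization, not the talk's actual code; the function names are hypothetical.

```python
# Illustrative sketch of affine uint8 quantization: float values are
# mapped to integers through a scale and a zero-point, then recovered
# approximately by the inverse mapping. Function names are hypothetical.

def quantize_params(values, num_bits=8):
    """Map float values to integers in [0, 2**num_bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must cover 0.0 exactly
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.5, -0.3, 0.0, 0.8, 2.1]
q, scale, zp = quantize_params(weights)
approx = dequantize(q, scale, zp)
# Per-element round-trip error is bounded by scale / 2
```

Each weight now occupies one byte instead of four, which is where the memory and bandwidth savings come from; PyTorch's quantization APIs apply the same principle with fused, hardware-optimized integer kernels.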