Enable Generative AI Everywhere with Ubiquitous Hardware and Open Software - Guobing Chen, Intel
Description:
Explore optimization techniques for Generative AI and Large Language Models (LLMs) in this informative conference talk. Learn about strategies to reduce inference latency and improve performance, including low precision inference, Flash Attention, Efficient Attention in scaled dot product attention (SDPA), optimized KV cache access, and Kernel Fusion. Discover how these optimizations, implemented within PyTorch and Intel Extension for PyTorch, can significantly enhance model efficiency on CPU servers with 4th generation Intel Xeon Scalable Processors. Gain insights into scaling up and out model inference on multiple devices using Tensor Parallel techniques, enabling the deployment of generative AI across various hardware configurations.
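As a minimal illustration of one of the techniques mentioned above, the sketch below calls PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention`, which dispatches to a fused kernel (Flash Attention or memory-efficient attention) when the backend supports it rather than materializing the full attention matrix. The tensor shapes are arbitrary toy values chosen for this example; the talk's actual benchmarks and Intel Extension for PyTorch integration are not reproduced here.

```python
import torch
import torch.nn.functional as F

# Toy tensors in the (batch, heads, seq_len, head_dim) layout SDPA expects.
# Shapes here are illustrative, not from the talk.
q = torch.randn(1, 4, 16, 32)
k = torch.randn(1, 4, 16, 32)
v = torch.randn(1, 4, 16, 32)

# A single call replaces the manual softmax(QK^T / sqrt(d)) V pipeline;
# PyTorch picks a fused backend (e.g. Flash Attention) when available.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 4, 16, 32])
```

On CPUs, pairing this with `ipex.optimize` from Intel Extension for PyTorch is one way to pick up the kernel-level optimizations the talk describes, though the exact configuration depends on the hardware and model.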

Linux Foundation