Optimizing Load Balancing and Autoscaling for Large Language Model (LLM) Inference on Kubernetes
David Gray
CNCF [Cloud Native Computing Foundation]

Description:
Learn essential techniques for optimizing Large Language Model (LLM) inference deployments in a conference talk that explores load balancing and autoscaling strategies on Kubernetes. Discover how to integrate the KServe platform for LLM deployment while maximizing GPU hardware efficiency in both public and private cloud environments. Explore critical performance concepts, including latency per token and tokens per second, and gain practical insight into leveraging KServe, Knative, and GPU operator features. Learn cost-effective resource management strategies through detailed test results and analysis, enabling improved resource utilization for business-critical applications built on generative AI language models. Gain practical knowledge of managing compute-intensive workloads and optimizing power usage in Kubernetes environments.
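The two metrics the talk names, and the Knative autoscaling hook they can feed, are easy to state concretely. Below is a minimal Python sketch, not taken from the talk: `decode_metrics` times a hypothetical `generate_fn` client call and derives tokens per second and latency per token, and `concurrency_target` shows one illustrative way such measurements could be turned into a per-replica concurrency target of the kind Knative's concurrency-based autoscaler accepts (via its `autoscaling.knative.dev/target` annotation). The function names and numbers are assumptions for illustration only.

```python
import time


def decode_metrics(generate_fn, prompt):
    """Time one generation call and derive the two metrics the talk names:
    tokens per second and latency per token.

    `generate_fn` is a hypothetical stand-in for whatever client call your
    serving stack exposes (e.g. a request to a KServe InferenceService
    endpoint); here it is assumed to return the list of generated tokens.
    """
    start = time.perf_counter()
    tokens = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    tokens_per_second = len(tokens) / elapsed
    latency_per_token = elapsed / len(tokens)  # seconds per generated token
    return tokens_per_second, latency_per_token


def concurrency_target(replica_tokens_per_second, per_stream_tokens_per_second):
    """Illustrative heuristic, not the talk's method: if one replica sustains
    a total decode throughput of `replica_tokens_per_second`, and each request
    must keep streaming at `per_stream_tokens_per_second` to meet its
    latency-per-token target, then roughly this many concurrent requests fit
    on one replica. A value like this could be set as Knative's
    `autoscaling.knative.dev/target` annotation on the service.
    """
    return max(1, int(replica_tokens_per_second / per_stream_tokens_per_second))


if __name__ == "__main__":
    # Fake generator standing in for a real LLM endpoint.
    fake_generate = lambda prompt: ["tok"] * 128
    tps, lpt = decode_metrics(fake_generate, "Hello")
    print(f"tokens/s: {tps:.1f}, latency/token: {lpt * 1000:.3f} ms")
    # Hypothetical numbers: a replica sustaining 1000 tok/s with a 50 tok/s
    # per-stream target fits roughly 20 concurrent streams.
    print("concurrency target:", concurrency_target(1000.0, 50.0))
```

With those example numbers, the heuristic yields a concurrency target of 20 per replica; the talk itself presents measured test results for choosing such settings.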
