Brag Your RAG with the MLOps Swag - Madhav Sathe, Google & Jitender Kumar, Publicis Sapient
Description:
Explore a conference talk demonstrating how organizations can effectively implement Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) in business processes through a practical showcase of MLOps strategies. Learn through a live demonstration of building a RAG application stack using LangChain, Canopy, and a PostgreSQL vector database deployed on Kubernetes. Discover methods for optimizing computational performance with GPU and TPU accelerators while gaining insights into essential MLOps components, including data splitting, embeddings, retrieval, and prompt engineering. Master techniques for addressing common enterprise challenges in GenAI implementation, from governance and continuous evaluation to scaling and cost management, all while maintaining efficient time-to-market delivery. Understand how to combine MLOps with Kubernetes to create scalable, business-critical GenAI solutions that deliver measurable value.
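To make the pipeline components concrete, here is a minimal RAG indexing-and-retrieval sketch in the spirit of the stack named in the talk (LangChain with a PostgreSQL/pgvector store). It is illustrative only: the exact LangChain package and class names vary by release, and the file name, connection string, collection name, and query are hypothetical placeholders, not details from the talk.

```python
# Minimal RAG sketch, assuming LangChain's community pgvector integration
# and an OpenAI embedding model; APIs differ across LangChain versions.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores.pgvector import PGVector

# 1. Data splitting: chunk raw documents before embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.create_documents([open("handbook.txt").read()])  # hypothetical source file

# 2. Embeddings + vector store: index the chunks in PostgreSQL (pgvector),
#    e.g. a Postgres service running inside the same Kubernetes cluster.
store = PGVector.from_documents(
    documents=chunks,
    embedding=OpenAIEmbeddings(),
    collection_name="rag_demo",
    connection_string="postgresql+psycopg2://user:pass@postgres:5432/ragdb",  # hypothetical DSN
)

# 3. Retrieval: fetch the top-k chunks most similar to the user query.
retriever = store.as_retriever(search_kwargs={"k": 4})
question = "How do we handle customer refunds?"
docs = retriever.invoke(question)

# 4. Prompt engineering: ground the LLM's answer in the retrieved context.
context = "\n\n".join(d.page_content for d in docs)
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
```

Each numbered step maps to one of the MLOps components called out in the description (splitting, embeddings, retrieval, prompt engineering); in a Kubernetes deployment these would typically run as separate indexing and serving workloads against the shared Postgres service.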