Chapters:
1. Harcharan's preferred coffee
2. Takeaways
3. Against local LLMs
4. Creating bad habits
5. Operationalizing RAG from a CI/CD perspective
6. Kubernetes vs LLM deployment
7. Tool preferences in ML
8. DevOps perspective of deployment
9. Terraform licensing controversy
10. PR review template guidance
11. People, processes, tech — in that order
12. Register for the Data Engineering for AI/ML Conference now!
13. ML monitoring strategies explained
14. Serverless vs overprovisioning
15. Model SLAs and monitoring
16. LLM-to-app transition
17. Ensuring robust architecture
18. Chaos engineering in ML
19. Wrap up
Description:
Dive into a comprehensive podcast episode exploring MLOps for Generative AI applications with Harcharan Kabbay, Lead Machine Learning Engineer at World Wide Technology. Gain insights into the Retrieval-Augmented Generation (RAG) framework and its integration with MLOps best practices. Learn about automating platform provisioning, application design principles, and the use of Kubernetes in AI systems. Discover strategies for reducing development time, enhancing security, and implementing effective monitoring in AI applications. Explore topics such as CI/CD pipelines, version control, and automated deployment processes for maintaining agility and efficiency in AI projects. Benefit from Kabbay's expertise in MLOps, DevOps, and automation as he shares valuable insights on building scalable, automated AI systems and integrating RAG-based applications into production environments.
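For readers unfamiliar with the RAG framework discussed in the episode, the core loop is: embed a query, retrieve the most relevant documents, and assemble them into a grounded prompt for an LLM. A minimal, self-contained sketch follows; it uses toy bag-of-words embeddings and cosine similarity purely for illustration (real systems use learned embeddings and a vector database), and all function names here are hypothetical, not from the podcast.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: bag-of-words term frequencies (illustration only;
    # production RAG systems use learned dense embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank corpus documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, context_docs):
    # Assemble retrieved context into a grounded prompt for the LLM.
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Kubernetes schedules containers across a cluster of nodes.",
    "Terraform manages infrastructure as declarative code.",
    "RAG grounds LLM answers in retrieved documents.",
]

query = "How does RAG ground LLM answers?"
prompt = build_prompt(query, retrieve(query, corpus, k=1))
print(prompt)
```

The prompt produced here would be sent to a generation model; the MLOps concerns the episode covers (CI/CD, monitoring, Kubernetes deployment) wrap around exactly this retrieve-then-generate loop.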

MLOps for GenAI Applications - MLOps Podcast #256

MLOps.community