Explore operational excellence for Large Language Models (LLMs) using Amazon Bedrock in this conference talk by Suraj Muraleedharan at Conf42 LLMs 2024. Delve into the design principles of operational excellence, compare DevOps, MLOps, and LLMOps, and understand the model lifecycle. Learn about LLM selection criteria, customization options, and architectural patterns for Amazon Bedrock. Discover how to implement knowledge bases, privately customize models, and integrate the Amazon Bedrock API with various AWS services. Gain insights into invocation logging, metrics, model evaluation, and implementing guardrails for generative AI applications. Examine real-world examples and best practices for achieving operational excellence with Amazon Bedrock in LLM deployments.
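To make the API integration topic above concrete, here is a minimal sketch of calling a model through Amazon Bedrock's InvokeModel API with boto3. The model ID shown is an example of an Anthropic Claude model on Bedrock; the request-body schema follows the Anthropic Messages format that Bedrock expects for those models, and the `invoke()` helper assumes AWS credentials and Bedrock model access are already configured:

```python
# Sketch: invoking an LLM on Amazon Bedrock via the boto3
# "bedrock-runtime" client. The model ID and version string are
# illustrative; substitute whichever model your account has access to.
import json


def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body for an Anthropic Claude model on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke(prompt: str,
           model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> dict:
    """Send the request to Bedrock; requires AWS credentials and model access."""
    import boto3  # imported lazily so build_claude_request stays dependency-free
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=build_claude_request(prompt),
    )
    # The response body is a streaming object; read and decode the JSON.
    return json.loads(response["body"].read())
```

Because Bedrock is a fully managed API, the same `invoke_model` call pattern composes with the AWS services the talk covers: CloudWatch captures invocation metrics, and model invocation logging can record the request and response bodies for auditing.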
Operational Excellence for LLMs Using Amazon Bedrock - Conf42 LLMs 2024