1. intro
2. preamble
3. magic of llms
4. - like apis we know and love
5. - even more unpredictability
6. - how do we define "correct"?
7. about
8. - what's in the box
9. - endless feedback loops
10. why believe me?
11. - timeline
12. - goals
13. laws of building on llms
14. how do we go forward? instrumentation
15. instrumentation for llms
16. emerging behaviors
17. a truth for llms
18. service level objectives
19. slos: a quick definition
20. slos for developing with llms
21. from others in the wild
22. duolingo
23. intercom
24. so in the end:
25. thanks!
Description:
Explore the intersection of Site Reliability Engineering (SRE) and observability in the context of Large Language Models (LLMs) in this conference talk from Conf42 Incident Management 2023. Delve into the unique challenges and opportunities presented by LLMs, comparing them to traditional APIs while highlighting their increased unpredictability. Examine the concept of observability and its application to LLM-based systems, including instrumentation techniques and emerging behaviors. Learn about implementing Service Level Objectives (SLOs) for LLM development and gain insights from real-world examples such as Duolingo and Intercom. Discover practical strategies for leveraging SRE principles to build more reliable and observable LLM-powered applications.

Leveraging SRE and Observability for Building on LLMs

Conf42