1. Introduction and overview
2. Importance of evaluation in LLM applications
3. Frameworks: LlamaIndex vs LangChain
4. Weights & Biases in LLM Ops
5. What is RAG?
6. Components of a RAG pipeline
7. Demo time
8. Building the retriever and query engine
9. Integrating Weave and viewing traces
10. Customized evaluation and comparison
11. Final thoughts and summary
Description:
Explore the intricacies of aligning LLM judges for improved evaluations in this comprehensive webinar. Delve into various evaluation strategies, focusing on LLM judge alignment using a RAG pipeline as a case study. Learn to construct an effective evaluation system with LlamaIndex and harness W&B Weave for systematic assessment and annotation. Discover the importance of evaluation in LLM applications, compare frameworks like LlamaIndex and LangChain, and understand the role of Weights & Biases in LLM Ops. Gain insights into RAG technology and its pipeline components, and watch a live demonstration of building a retriever and query engine. Explore the integration of Weave, trace viewing, and customized evaluation techniques. Uncover best practices across the entire evaluation lifecycle in this hour-long session, which concludes with final thoughts and a summary of key takeaways.
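
The sketch below is not the webinar's exact code; it is a minimal illustration of the pieces the description mentions: a LlamaIndex retriever/query engine over local documents, Weave tracing of each call, and a small Weave evaluation whose simple string-match scorer stands in for the aligned LLM judge discussed in the session. The project name, data folder, example question, and scorer are assumptions for illustration only.

```python
import asyncio

import weave
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

weave.init("rag-judge-alignment-demo")  # hypothetical Weave project name

# Retriever and query engine: index local documents, then answer questions.
documents = SimpleDirectoryReader("data").load_data()  # assumed docs folder
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)


@weave.op()  # each call is traced and viewable in the Weave UI
def answer(question: str) -> str:
    return str(query_engine.query(question))


# Tiny illustrative dataset; a real run would use held-out questions.
examples = [
    {"question": "What topics does the report cover?", "expected": "evaluation"},
]


@weave.op()
def contains_expected(expected: str, output: str) -> dict:
    # Placeholder scorer; the webinar's approach replaces this kind of
    # string check with an LLM judge aligned to human annotations.
    return {"contains_expected": expected.lower() in output.lower()}


evaluation = weave.Evaluation(dataset=examples, scorers=[contains_expected])
asyncio.run(evaluation.evaluate(answer))
```

Running this logs the query engine calls, the per-example scores, and the evaluation summary to the Weave project, which is where the trace viewing and comparison shown in the webinar take place.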

Aligning LLM Judges for Better Evaluations in RAG Pipelines

Weights & Biases