1. RAG Evaluation
2. Overview of LangChain RAG Agent
3. RAGAS Code Prerequisites
4. Agent Output for RAGAS
5. RAGAS Evaluation Format
6. RAGAS Metrics
7. Understanding RAGAS Metrics
8. Retrieval Metrics
9. RAGAS Context Recall
10. RAGAS Context Precision
11. Generation Metrics
12. RAGAS Faithfulness
13. RAGAS Answer Relevancy
14. Metrics Driven Development
Description:
Explore the RAGAS (RAG ASsessment) evaluation framework for RAG pipelines in this 20-minute video tutorial. Learn how to assess an AI agent built with LangChain, utilizing Anthropic's Claude 3, Cohere's embedding models, and the Pinecone vector database. Dive into the process of evaluating RAG systems, understanding RAGAS metrics, and implementing metrics-driven development. Gain insights into retrieval metrics like context recall and precision, as well as generation metrics such as faithfulness and answer relevancy. Access the accompanying code, article, and additional resources to enhance your understanding of RAG evaluation techniques.
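As a taste of what the video covers, the shape of an evaluation record can be sketched as follows. This is a minimal illustration, assuming the classic RAGAS schema of question / answer / contexts / ground_truth fields; the example question and answers are invented for illustration, and the actual metric computation (which calls an LLM) is not shown.

```python
# One evaluation sample in the classic RAGAS record format (sketch).
# Field contents here are hypothetical examples, not output from the video's agent.
sample = {
    "question": "Which vector database does the agent use?",
    "answer": "The agent stores and retrieves embeddings with Pinecone.",
    "contexts": [  # chunks returned by the retriever
        "The RAG agent is built with LangChain and Claude 3.",
        "Documents are embedded with Cohere and indexed in Pinecone.",
    ],
    "ground_truth": "It uses the Pinecone vector database.",
}

# Retrieval metrics (context recall/precision) judge `contexts` against
# `ground_truth`; generation metrics (faithfulness, answer relevancy)
# judge `answer` against `contexts` and `question`.
required_fields = {"question", "answer", "contexts", "ground_truth"}
assert required_fields <= sample.keys()
```

Collecting a list of such samples is the prerequisite step the video walks through before computing any of the four metrics.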

AI Agent Evaluation with RAGAS Using LangChain, Claude 3, and Pinecone

James Briggs