Measuring Hallucinations in RAG - Retrieval Augmented Generation

Explore a conference talk on measuring and mitigating hallucinations in Retrieval-Augmented Generation (RAG) systems. The talk surveys the LLM revolution and its applications before addressing the critical issue of hallucinations. Learn about RAG as a mitigation strategy, comparing DIY approaches with RAG-as-a-service options such as Vectara. Discover the Hughes Hallucination Evaluation Model (HHEM) and see practical applications through sample projects like AskNews and Tax Chat. Gain insights into building more reliable AI applications that harness the power of LLMs while minimizing false information.