- What is RagBase?
- Text tutorial on MLExpert.io
- How RagBase works
- Project Structure
- UI with Streamlit
- Config
- File Upload
- Document Processing (Ingestion)
- Retrieval (Reranker & LLMChainFilter)
- QA Chain
- Chat Memory/History
- Create Models
- Start RagBase Locally
- Deploy to Streamlit Cloud
- Conclusion
Description:
Discover how to build a local Retrieval-Augmented Generation (RAG) system for efficient document processing using Large Language Models (LLMs) in this comprehensive tutorial video. Learn to extract high-quality text from PDFs, split and format documents for optimal LLM performance, create vector stores with Qdrant, implement advanced retrieval techniques, and integrate local and remote LLMs. Follow along to develop a private chat application for your documents using LangChain and Streamlit, covering everything from project structure and UI design to document ingestion, retrieval methods, and deployment on Streamlit Cloud.
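The pipeline described above (split documents into chunks, embed them, retrieve the chunks most similar to the query) can be sketched conceptually. The tutorial itself uses LangChain with Qdrant and real embedding models; this dependency-free sketch substitutes a toy bag-of-words "embedding" purely to illustrate the retrieval step, so all names here are illustrative assumptions, not the video's code.

```python
# Conceptual sketch of the RAG retrieval step, with no dependencies.
# The real project uses Qdrant + LangChain embeddings; here a word-count
# vector stands in for a learned embedding.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts (stand-in for a real model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the query and keep the top-k;
    # a reranker or LLMChainFilter would further prune this list.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Qdrant stores dense vectors for similarity search",
    "Streamlit builds the chat user interface",
    "PDF text is split into overlapping chunks before embedding",
]
print(retrieve("how are vectors stored for search", chunks, k=1))
```

In the actual application the retrieved chunks are then passed, together with the chat history, to the LLM as context for answering the question.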

Local RAG with Llama 3.1 for PDFs - Private Chat with Documents using LangChain and Streamlit

Venelin Valkov