Explore the intersection of vector databases and large language models in this 13-minute conference talk from the LLMs in Production Conference. Discover how vector embeddings and databases can provide context for generative models, enabling real-time updates to an external knowledge base. Learn about the advantages of this approach for handling constantly changing data and reducing model hallucinations. Gain insights into practical applications, such as creating an onboarding tool for company documentation. Follow along with examples, instructions, and open-source code to implement these techniques in your own projects. Understand the role of Redis as a vector database and online feature store in AI applications.
Vector Databases and Large Language Models for Context Retrieval - LLMs in Production
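The retrieval flow the talk describes can be sketched in a few lines. This is a minimal, self-contained illustration, not the talk's actual code: it uses a toy bag-of-words "embedding" over a made-up vocabulary and an in-memory NumPy array in place of a real embedding model and a vector database such as Redis. All names (`VOCAB`, `embed`, `retrieve`, the sample documents) are hypothetical.

```python
import numpy as np

# Toy "embedding": a bag-of-words count vector over a fixed vocabulary.
# A production system would use a learned embedding model and a vector
# database (e.g. Redis); this sketch only illustrates the retrieval step.
VOCAB = ["onboarding", "vacation", "policy", "laptop", "setup", "expense"]

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

# "Knowledge base": documents indexed by their embeddings. Updating it
# is just appending a row -- no model retraining is needed, which is why
# this approach suits constantly changing data.
docs = [
    "Laptop setup: install the VPN client before your first day.",
    "Vacation policy: submit requests two weeks in advance.",
]
index = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query and every stored document.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

# The retrieved text would be prepended to the LLM prompt as grounding
# context, rather than relying on the model's parametric memory.
print(retrieve("what is the vacation policy")[0])
```

In a real deployment, the lookup would hit a Redis vector index so that new or edited documents become retrievable immediately, which is the mechanism the talk credits with keeping answers current.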