Chapters:
1. Continuing discussion around the recursive crawler
2. GitHub Copilot, and the tasks it excels at
3. What do we do with the HTML we extract? How the seeder works
4. The different types of document splitters you can use
5. embedDocument and how it works
6. Why do we split documents when working with a vector database?
7. Problems that occur if you don't split documents
8. Proper chunking improves relevance
9. You still need to tweak and experiment with your chunk parameters
10. Chunked upserts
11. Chat endpoint: how we use the context at runtime
12. Injecting context into LLM prompts
13. Is there a measurable difference in where you put the context in the prompt?
14. Reviewing the end-to-end RAG workflow
15. LLMs have conditioned us to be okay with responses being pretty slow!
16. Cool UX anecdote about what humans consider too long to wait
17. You have an opportunity to associate chunks with metadata
18. UI cards: selecting one shows it was used as context in the response
19. How we make it visually clear which chunks and context were used by the LLM
20. Auditability and why it matters
21. Testing the live app
22. Outro chat: Thursday AI sessions on Twitter Spaces
23. Review of the GitHub project: this is all open source!
24. Inaugural stream conclusion
25. Vim / VS Code / Cursor AI IDE discussion
26. Setting up devtools on macOS
27. Upcoming stream ideas: image search / Pokémon search
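Several of the chapters above (document splitters, why we split, chunk parameters) revolve around breaking documents into fixed-size, overlapping pieces before embedding them. As a rough illustration only — this is a hypothetical sketch, not the template's actual splitter, and `chunkSize`/`chunkOverlap` are illustrative defaults — a minimal character-based splitter might look like:

```typescript
// Hypothetical fixed-size, overlapping character splitter.
// chunkSize and chunkOverlap are the "chunk parameters" you would
// tweak and experiment with; these defaults are illustrative only.
function splitDocument(
  text: string,
  chunkSize = 200,
  chunkOverlap = 50
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    // Take a window of chunkSize characters starting at `start`.
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // Advance by chunkSize minus the overlap so adjacent chunks
    // share context across the boundary.
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}
```

The overlap is what keeps a sentence that straddles a boundary retrievable from at least one chunk; production splitters (e.g., recursive splitters that prefer paragraph and sentence boundaries) refine this idea rather than replace it.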
Description:
Dive into the second part of a live code review exploring the Pinecone Vercel starter template and Retrieval Augmented Generation (RAG). Explore topics such as recursive crawlers, document splitting techniques, embedding processes, and the importance of proper chunking in vector databases. Learn about injecting context into LLM prompts, the end-to-end RAG workflow, and how to associate metadata with chunks. Discover UI design considerations for displaying context usage, the significance of auditability, and test a live application. Gain insights on GitHub Copilot, IDE preferences, and setting up developer tools on macOS. The video concludes with discussions on future stream ideas and AI sessions on Twitter Spaces.
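"Injecting context into LLM prompts," as covered in the stream, generally means concatenating the retrieved chunks into the prompt ahead of the user's question. A minimal sketch, with hypothetical names (`Chunk`, `buildPrompt`) that are not from the starter template:

```typescript
// Hypothetical shape for a retrieved chunk; carrying the source URL
// as metadata is what enables the auditability discussed in the stream.
interface Chunk {
  text: string;
  url: string;
}

// Hypothetical helper: join retrieved chunks into a context block
// placed before the user's question.
function buildPrompt(question: string, chunks: Chunk[]): string {
  const context = chunks
    .map((c) => `Source: ${c.url}\n${c.text}`)
    .join("\n---\n");
  return (
    `Answer using only the context below.\n\n` +
    `Context:\n${context}\n\n` +
    `Question: ${question}`
  );
}
```

Keeping the source URL alongside each chunk is also what lets the UI highlight which chunks were actually used in a given response.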

Pinecone Vercel Starter Template and RAG - Live Code Review Part 2

Pinecone