1. intro
2. preamble
3. agenda
4. why foundation models?
5. generative ai can be used for a wide range of use cases
6. aws offers a broad choice of generative ai capabilities
7. limitations of llms
8. vector embeddings
9. vector databases
10. enabling vector search across aws services
11. amazon aurora with postgresql compatibility
12. using pgvector in aws
13. amazon opensearch service
14. using opensearch in aws
15. amazon documentdb
16. amazon memorydb
17. amazon neptune analytics
18. amazon bedrock
19. knowledge bases for amazon bedrock
20. vector databases for amazon bedrock
21. retrieve and generate api
22. demo time
Description:
Explore the future of AWS-empowered RAG systems for Large Language Models in this conference talk from Conf42 LLMs 2024. Dive into the world of foundation models, generative AI use cases, and AWS's extensive generative AI capabilities. Discover the limitations of LLMs and learn about vector embeddings and databases. Gain insights into enabling vector search across AWS services, including Amazon Aurora, OpenSearch, DocumentDB, MemoryDB, and Neptune Analytics. Understand the power of Amazon Bedrock, its knowledge bases, and vector databases. Witness a live demonstration of the Retrieve and Generate API, showcasing practical applications of these cutting-edge technologies in action.
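The retrieval step the talk covers reduces to one idea: embed documents as vectors, then return the ones nearest a query vector. The sketch below illustrates that with a toy bag-of-words embedding and cosine similarity; a real deployment would use a learned embedding model (e.g. Amazon Titan Embeddings) and an index such as pgvector on Aurora or a k-NN index on OpenSearch. The documents and query here are illustrative placeholders, not from the talk.

```python
# Toy sketch of vector retrieval, the core of a RAG system:
# embed documents, then rank them by cosine similarity to a query.
from collections import Counter
import math

def embed(text, vocab):
    """Bag-of-words embedding: one count per vocabulary word."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "vector database": each document stored with its embedding.
docs = [
    "aurora postgresql supports pgvector for vector search",
    "opensearch service offers k-nn vector indexes",
    "bedrock knowledge bases manage ingestion and retrieval",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
index = [(d, embed(d, vocab)) for d in docs]

# Retrieval: embed the query, return the nearest document.
query = "which service supports pgvector vector search"
qv = embed(query, vocab)
best = max(index, key=lambda item: cosine(qv, item[1]))
print(best[0])  # the document closest to the query
```

In a managed setup like Knowledge Bases for Amazon Bedrock, the embedding, indexing, and nearest-neighbor lookup above are handled by the service, and the retrieved passages are passed to the LLM for grounded generation.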

Vectoring Into The Future: AWS Empowered RAG Systems for LLMs

Conf42