1 - Intro
2 - Lost in the Middle Challenge in Context
3 - Related Work in Long Context LLMs
4 - Information Intensive Training (IN2 Training)
5 - Fine-grained Information Awareness
6 - Integration and Reasoning of Information
7 - Mathematical Representation
8 - Training Setting/Details
9 - Various Long Context Probing (VAL Probing)
10 - Needle in a Haystack for Long Context LLMs
11 - Experimental Results
12 - Quantitative Results
13 - Real-world Data Performance
14 - Summary and Outro
Description:
Explore a 14-minute video explanation of Microsoft's research paper on improving Large Language Models' context utilization through data-driven solutions, contrasting with Google's architectural approach in the infini-attention paper. Learn about the 'Lost in the Middle Challenge,' Information Intensive Training (IN2), and Various Long Context Probing (VAL) methodologies. Dive into mathematical representations, training settings, experimental results, and real-world performance data that demonstrate how LLMs can better process and utilize extended context. Presented by an experienced Machine Learning researcher with 15 years of software engineering background, the video breaks down complex concepts into digestible segments, complete with detailed timestamps for easy navigation through specific topics.

Making LLMs Fully Utilize Context - A Data-Driven Approach

AI Bites