Chapters:
1. Really long context length large language models
2. Video overview
3. Yi 200k context model
4. Which long context models are actually good?
5. Base model vs chat fine-tuned Yi model
6. Passkey retrieval on Yi 6B
7. Passkey retrieval for Claude
8. Yi 34B performance at 107k context
9. Inferencing Yi models with Runpod
10. Yi function-calling models
11. Long context model resources
12. Video summary
Description:
Explore the capabilities of long-context large language models in this 25-minute video from Trelis Research. Learn about the Yi 34B 200k model, compare different long-context models, and understand the differences between base and chat fine-tuned versions. See how to run passkey retrieval tests with the Yi 6B and Claude models, and examine the Yi 34B model's performance at a 107k-token context length. Get hands-on guidance for running inference on Yi models with Runpod, learn about Yi function-calling models, and find resources for working with long-context models.
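
As a rough illustration of the passkey-retrieval test mentioned above, the sketch below builds a long prompt with a random passkey buried in filler text at a chosen depth, which can then be sent to the long-context model under test. The filler sentence, passkey format, and 4-characters-per-token estimate are illustrative assumptions, not details taken from the video.

```python
import random

def build_passkey_prompt(num_filler_lines: int = 2000, depth: float = 0.5):
    """Build a long prompt hiding a random passkey at a given relative depth.

    Returns (prompt, passkey) so the model's answer can be checked afterwards.
    """
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow."
    lines = [filler] * num_filler_lines
    # Insert the "needle" at the requested depth (0.0 = start, 1.0 = end).
    insert_at = int(len(lines) * depth)
    lines.insert(insert_at, f"The pass key is {passkey}. Remember it.")
    prompt = (
        "There is a pass key hidden in the text below. Memorize it.\n\n"
        + "\n".join(lines)
        + "\n\nWhat is the pass key? Answer with the number only."
    )
    return prompt, passkey

prompt, passkey = build_passkey_prompt()
# Send `prompt` to the long-context model under test (e.g. Yi 34B 200k or Claude)
# and check whether `passkey` appears in its response.
print(f"Prompt length: ~{len(prompt) // 4} tokens (rough 4-chars-per-token estimate)")
print(f"Hidden passkey: {passkey}")
```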

Really Long Context LLMs - 200k Input Tokens

Trelis Research