LLaMA 3 Deep Dive - Synthetic Data, Privacy, and Model Architecture

25:35 - Upper boundary for the quality of synthetic data?
37:15 - Context length
45:10 - What framework does Meta use for Llama?
46:40 - Playing with smaller Llamas
51:20 - Multilingual capabilities
Description:
Dive deep into the latest developments of LLaMA 3 in this informative video featuring Thomas Scialom from Meta. Explore key topics including synthetic data for pre- and post-training, privacy concerns, scaling and distillation techniques, and the decision not to use a Mixture of Experts (MoE) architecture. Learn about the potential upper boundaries for synthetic data quality, context length improvements, Meta's framework choices, and the multilingual capabilities of smaller LLaMA models. Gain valuable insights into cutting-edge advancements in large language models and their implications for AI research and development.