1. Reminders
2. Recap of uncertainty, part 1
3. Temperature scaling
4. Bayesian approaches to calibration
5. Free-text explanations / chain-of-thought intro
6. Prompt-based fine-tuning
7. In-context learning (ICL)
8. Reliable ICL
9. Chain-of-thought prompting
10. FLAN-T5
11. LLaMA Chat
Description:
Learn about advanced concepts in AI uncertainty quantification and prompting techniques in this comprehensive lecture. Explore temperature scaling methods and Bayesian approaches to calibration before diving into free-text explanations and chain-of-thought prompting. Master in-context learning (ICL) principles and their reliable implementation, while understanding prompt-based fine-tuning strategies. Examine practical applications through case studies of FLAN-T5 and LLaMA Chat models. Gain insights into how these techniques improve AI model performance and reliability through detailed explanations and real-world examples.
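Of the techniques listed, temperature scaling is the simplest to illustrate: a model's logits are divided by a scalar temperature T before the softmax, softening (T > 1) or sharpening (T < 1) the predicted distribution without changing the argmax. A minimal sketch (the function name and example logits are illustrative, not from the lecture):

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Divide logits by T, then apply a numerically stable softmax."""
    scaled = [z / T for z in logits]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, T=0.5)  # more confident
soft = softmax_with_temperature(logits, T=2.0)   # better calibrated if the model is overconfident
```

In post-hoc calibration, T is typically fit on a held-out validation set by minimizing negative log-likelihood, leaving accuracy unchanged.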

Uncertainty, Prompting, and Chain-of-Thought in Large Language Models - Part 2

UofU Data Science