When calibration goes awry: hallucination in language models
Description:
Explore the phenomenon of hallucination in language models through this insightful lecture by Adam Kalai of OpenAI. Delve into how calibration, a property naturally encouraged during pre-training, can itself give rise to hallucinations. Examine how hallucination rates vary across domains via the Good-Turing estimator, with particular focus on notoriously hallucination-prone outputs such as paper titles. Gain insight into potential methods for mitigating hallucinations in language models. This hour-long talk, part of the Emerging Generalization Settings series at the Simons Institute, presents joint research with Santosh Vempala conducted while Kalai was at Microsoft Research New England.
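For intuition about the Good-Turing connection, here is a minimal sketch (not the talk's exact formulation): Good-Turing estimates the probability mass of unseen items as the fraction of observations that occur exactly once, and the talk's result roughly ties a calibrated model's hallucination rate in a domain to this singleton fraction. The function name and toy corpus below are illustrative assumptions.

```python
from collections import Counter

def good_turing_missing_mass(observations):
    """Good-Turing estimate of the probability mass of unseen items:
    the fraction of observations occurring exactly once (singletons)."""
    counts = Counter(observations)
    n = len(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / n if n else 0.0

# Toy corpus of "facts" (e.g., paper titles seen during pre-training).
# Many titles appear only once, so the estimated unseen mass is large,
# suggesting a correspondingly large hallucination rate for that domain.
corpus = ["title_a", "title_a", "title_b", "title_c", "title_d"]
print(good_turing_missing_mass(corpus))  # 3 singletons / 5 observations = 0.6
```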