STaR: Self-Taught Reasoner - Bootstrapping Reasoning with Reasoning
Specializing Smaller Language Models towards Multi-Step Reasoning
Distilling Step-by-Step
Recursive and Iterative Prompting
Least-to-Most Prompting
Plan, Eliminate, and Track
Describe, Explain, Plan and Select
Tool Usage
ReAct: Reason and Act
Chameleon
Acknowledgement & Further Reading
Description:
Explore the intricacies of reasoning in large language models through this comprehensive conference talk. Delve into various techniques for eliciting and measuring reasoning abilities, including chain-of-thought prompting, program-aided language models, and plan-and-solve prompting. Discover innovative approaches like self-taught reasoners, specializing smaller models for multi-step reasoning, and iterative prompting methods. Learn about advanced concepts such as tool usage, the ReAct framework, and the Chameleon model. Gain valuable insights into the current state and future potential of reasoning capabilities in AI language models, with practical examples and further reading recommendations provided.
Unlocking Reasoning in Large Language Models - Conf42 ML 2023
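As a quick illustration of the first technique named in the description, chain-of-thought prompting simply asks the model to write out intermediate reasoning steps before its final answer, usually by prepending a worked exemplar. The sketch below is a minimal, provider-agnostic illustration; the generate function and the example question are hypothetical placeholders, not part of the talk's materials.

# Minimal chain-of-thought prompting sketch.
# `generate` is a hypothetical stand-in for any text-completion LLM call.

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call; swap in your API of choice."""
    raise NotImplementedError

# A few-shot exemplar demonstrating step-by-step reasoning before the answer.
COT_EXEMPLAR = (
    "Q: A bag has 3 red and 5 blue marbles. How many marbles in total?\n"
    "A: There are 3 red marbles and 5 blue marbles. 3 + 5 = 8. The answer is 8.\n\n"
)

def chain_of_thought(question: str) -> str:
    """Prepend the worked exemplar and nudge the model to reason step by step."""
    prompt = COT_EXEMPLAR + f"Q: {question}\nA: Let's think step by step."
    return generate(prompt)

Calling chain_of_thought on a new word problem typically elicits the intermediate steps before the final answer, which is the behaviour the prompting segments of the talk examine and measure.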