Chapters:
1. LLMs fail in logic reasoning
2. Symbolic code representation
3. Symbolic perturbations
4. 20 percent LLM accuracy
5. My logic test, symbolically encoded
6. Prolog code for logic test
7. LISP, Haskell, CLIPS, Scala code
8. New reasoning power for AI systems
9. AI agent reasoning enhanced
10. ReasonAgain paper (Microsoft, AMD)
11. Limitations
12. No prompt engineering required
Description:
Learn about enhancing AI agent reasoning capabilities through symbolic programming in this 25-minute technical video. Explore the ReasonAgain methodology, which improves mathematical reasoning evaluation in Large Language Models (LLMs) by implementing Python-based symbolic programs. Discover how parameter perturbations generate new input-output pairs to test LLMs' reasoning consistency, revealing performance limitations and fragilities not captured by traditional evaluation metrics. Follow along with detailed code examples in multiple programming languages including Prolog, LISP, Haskell, CLIPS, and Scala, while understanding key concepts like symbolic code representation, perturbation techniques, and their impact on AI reasoning. Examine real-world applications, limitations, and implementation strategies that require no prompt engineering, backed by research from Microsoft and AMD. The presentation covers critical findings showing only 20% LLM accuracy in complex reasoning tasks and presents approaches for strengthening AI agent reasoning.
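
To make the methodology concrete, here is a minimal Python sketch of the idea described above (the word problem, function names, and perturbation ranges are invented for illustration and are not taken from the ReasonAgain paper): a question is encoded as an executable symbolic program, and perturbing its parameters yields fresh input-output pairs against which an LLM's answers can be checked.

```python
import random


def symbolic_program(apples_per_basket: int, baskets: int, eaten: int) -> int:
    """Symbolic encoding of a toy word problem:
    'A farmer has `baskets` baskets with `apples_per_basket` apples each
    and eats `eaten` apples. How many apples are left?'"""
    return apples_per_basket * baskets - eaten


def perturb_parameters(n_variants: int = 5, seed: int = 0):
    """Sample perturbed parameter values and compute each ground-truth
    answer by executing the symbolic program. Every (parameters, answer)
    pair can be rendered back into a natural-language question and posed
    to an LLM; the model is judged consistent only if it answers the
    perturbed variants correctly."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_variants):
        apples_per_basket = rng.randint(2, 12)
        baskets = rng.randint(2, 10)
        eaten = rng.randint(0, apples_per_basket * baskets)  # keep the answer non-negative
        params = {"apples_per_basket": apples_per_basket,
                  "baskets": baskets,
                  "eaten": eaten}
        pairs.append((params, symbolic_program(**params)))
    return pairs


if __name__ == "__main__":
    for params, answer in perturb_parameters():
        print(params, "->", answer)
```

Because the ground truth comes from executing the program rather than from a fixed test set, each perturbed variant probes whether the model actually follows the underlying reasoning; per the description above, accuracy on such perturbed questions can drop to around 20%.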

Perfect Reasoning for AI Agents Using ReasonAgain - Symbolic Code Implementation

Discover AI