1. Intro
2. Visual Question Answering
3. Task Breakdown
4. Architecture Overview
5. Question Parsing
6. Program Execution
7. Training
8. Quantitative Results on CLEVR
9. CLEVR-Humans & Results
10. New Scenes: Minecraft
11. Summary
Description:
Explore an approach to Visual Question Answering (VQA) that disentangles reasoning from vision and language understanding in this 27-minute lecture from the University of Central Florida. The lecture breaks down the task, gives an architecture overview, and covers key components such as question parsing and program execution. It examines quantitative results on the CLEVR and CLEVR-Humans datasets and shows how this neural-symbolic method extends to new scenes, such as Minecraft. The talk closes with insights into AI systems that combine symbolic reasoning with visual and linguistic comprehension.

Neural-Symbolic VQA - Disentangling Reasoning from Vision and Language Understanding

University of Central Florida