Visual Question Answering: Grounded Systems and Transformer Capsules

Explore the concept of Grounded Visual Question Answering (VQA) in this 22-minute lecture from the University of Central Florida. Delve into the limitations of existing VQA systems and discover how grounded VQA systems aim to overcome them. Learn about the problem setup, including the use of transformers with capsules, capsule-based tokens, and text-based residual connections. Examine pre-training tasks such as Masked Language Modeling (MLM) and Image-Text Matching, along with the datasets used for pre-training. Investigate the fine-tuning process for downstream tasks and analyze qualitative comparisons on the GQA dataset. Review evaluation metrics and results before concluding with insights into future work in this rapidly evolving field of artificial intelligence and computer vision.