Computer Vision and Language Models: Bridging the Modality Gap

Explore the transformative impact of large language models (LLMs) on computer vision in this 40-minute talk by Jacob Marks. Gain insight into key LLM-centered projects such as VisProg, ViperGPT, VoxelGPT, and HuggingGPT that are reshaping the field. Learn about the challenges and lessons from building VoxelGPT, and pick up practical tips for domain-specific prompt engineering. Understand how text-only LLMs can succeed at visual tasks through prompting and tool use. Delve into topics such as unimodal and multimodal tasks, bridging the modality gap, and FiftyOne's role in connecting modalities. Consider the potential of agents to acquire skills and the ongoing role of humans in this evolving landscape. Conclude with a forward-looking perspective on the future of LLMs in computer vision.