Chapters:
1. Intro
2. What is CLIP?
3. Getting started
4. Creating text embeddings
5. Creating image embeddings
6. Embedding a lot of images
7. Text-image similarity search
8. Alternative image and text search
Description:
Explore OpenAI's CLIP, a multi-modal model capable of understanding relationships between text and images, in this 23-minute tutorial. Learn how to use CLIP via the Hugging Face library to create text and image embeddings, perform text-image similarity searches, and explore alternative image and text search methods. Gain practical insights into multi-modal machine learning and discover the power of CLIP in bridging the gap between textual and visual data processing.

Intro to Multi-Modal ML with OpenAI's CLIP

James Briggs