Chapters:
1. Flan-20B-UL2 Launched
2. Loading the Model
3. Non-8-Bit Inference
4. 8-Bit Inference with CoT
5. Chain of Thought Prompting
6. Zero-shot Logical Reasoning
7. Zero-shot Generation
8. Zero-shot Story Writing
9. Zero-shot Common Sense Reasoning
10. Zero-shot Speech Writing
11. Testing a Large Token Span
12. Using the HuggingFace Inference API
Description:
Explore the capabilities of Google's latest publicly released Flan model, Flan-UL2 20 Billion, in this video tutorial. Learn how to run the model on a high-end Google Colab instance using the HuggingFace library with 8-bit inference. See how the model performs across a range of tasks, including chain-of-thought prompting, zero-shot logical reasoning, generation, story writing, common-sense reasoning, and speech writing. The video also covers loading the model, comparing non-8-bit and 8-bit inference, testing large token spans, and using the HuggingFace Inference API. Follow along with the provided Colab notebook to experiment with this language model firsthand.
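As a rough sketch of the loading step the video walks through, the snippet below shows how Flan-UL2 is typically loaded in 8-bit with the HuggingFace `transformers` library. The actual download (tens of GB of weights, requiring a high-RAM Colab GPU plus `accelerate` and `bitsandbytes`) is left commented out; only the configuration helper runs here, and the exact API shown in the video may differ.

```python
# Minimal sketch, assuming the standard transformers 8-bit loading path.
MODEL_NAME = "google/flan-ul2"  # HuggingFace Hub ID used in the video

def eight_bit_load_kwargs() -> dict:
    # device_map="auto" spreads layers across available devices;
    # load_in_8bit=True enables bitsandbytes int8 inference,
    # roughly halving memory versus fp16 for a 20B model.
    return {"device_map": "auto", "load_in_8bit": True}

# On a suitable GPU you would then run something like (not executed here):
# from transformers import T5Tokenizer, T5ForConditionalGeneration
# tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
# model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, **eight_bit_load_kwargs())

print(eight_bit_load_kwargs())
```

Comparing this against a plain fp16 load (dropping `load_in_8bit`) is how the video contrasts non-8-bit and 8-bit inference.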

Trying Out Flan 20B with UL2 - Working in Colab with 8-Bit Inference

Sam Witteveen
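For the HuggingFace Inference API chapter, the hosted endpoint can be called without loading the model locally. Below is a hedged sketch: the URL and payload follow the standard hosted text2text-generation conventions, `HF_TOKEN` is a placeholder you must supply, and the network call itself is commented out so only the payload construction runs.

```python
import json

# Hosted inference endpoint for the model (standard Hub URL pattern).
API_URL = "https://api-inference.huggingface.co/models/google/flan-ul2"

def build_request(prompt: str, max_new_tokens: int = 64) -> dict:
    # Payload shape used by the hosted text2text-generation pipeline.
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }

payload = build_request("Translate to German: Hello, how are you?")

# With an API token set, you would send it like this (not executed here):
# import requests
# headers = {"Authorization": f"Bearer {HF_TOKEN}"}  # HF_TOKEN: your token
# resp = requests.post(API_URL, headers=headers, json=payload)
# print(resp.json()[0]["generated_text"])

print(json.dumps(payload))
```

This route trades Colab GPU setup for rate limits and network latency, which is the tradeoff the final chapter explores.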