1. Intro
2. Paradigm 1: Writing Down
3. Paradigm 2: Example
4. Paradigm 3: Example
5. How Are Most Language Models Built
6. Zero-shot Learning
7. Alternative Paradigm
8. Multitask Prompted Training
9. NLP Datasets
10. Writing Prompts
11. Pretrained Models
12. Paper
13. Results
14. Model Architecture
15. Experimental Results
16. Adaptation Results
17. Example in Context
18. Parameter-Efficient Fine-tuning
19. Learning Facts
20. Pipeline
21. Sanity Check
22. Model Accuracy
23. Relevant Context
Description:
Explore cutting-edge developments in language model construction in this 56-minute lecture by Colin Raffel of UNC and Hugging Face. Delve into various paradigms of language model development, including writing down, example-based approaches, and alternative methods. Examine the concept of zero-shot learning and its implications. Investigate the Multitask Prompted Training technique and its application to NLP datasets. Learn about writing effective prompts and leveraging pretrained models. Analyze experimental results, model architectures, and adaptation outcomes through real-world examples. Discover parameter-efficient fine-tuning techniques and methods for improving model accuracy. Gain insights into pipeline development, sanity checks, and the importance of relevant context in language modeling. This talk, part of the CSCI 601.771: Self-Supervised Learning course at Johns Hopkins University, offers valuable knowledge for researchers and practitioners in natural language processing and machine learning.

Building Better Language Models - Paradigms and Techniques

Center for Language & Speech Processing (CLSP), JHU