1. Intro
2. Open questions, current trends, limits
3. Model size and computational efficiency
4. Using more and more data
5. Pretraining on more data
6. Fine-tuning on more data
7. More data or better models
8. In-domain vs. out-of-domain generalization
9. The limits of NLU and the rise of NLG
10. Solutions to the lack of robustness
11. Reporting and evaluation issues
12. The inductive bias question
13. The common sense question
Description:
Explore the future of Natural Language Processing in this comprehensive 1-hour lecture by Thomas Wolf, Science lead at HuggingFace. Delve into transfer learning, examining open questions, current trends, limits, and future directions. Gain insights from a curated selection of late-2019 and early-2020 research papers focusing on model size and computational efficiency, out-of-domain generalization, model evaluation, fine-tuning, sample efficiency, common sense, and inductive biases. Analyze the impact of increasing data and model sizes, compare in-domain vs. out-of-domain generalization, and investigate solutions to robustness issues in NLP. Discuss the rise of Natural Language Generation (NLG) and its implications for the field. Address critical questions surrounding inductive bias and common sense in AI language models. Access accompanying slides for visual support and follow HuggingFace and Thomas Wolf on Twitter for ongoing updates in the rapidly evolving world of NLP.

The Future of Natural Language Processing

HuggingFace