Explore cutting-edge developments in language model construction in this 56-minute lecture by Colin Raffel of UNC and Hugging Face. Delve into the main paradigms for specifying what a language model should do, including writing the task down as a prompt, demonstrating it through examples, and alternative methods. Examine the concept of zero-shot learning and its implications. Investigate the Multitask Prompted Training technique and its application to NLP datasets. Learn how to write effective prompts and leverage pretrained models. Analyze experimental results, model architectures, and adaptation outcomes through real-world examples. Discover parameter-efficient fine-tuning techniques and methods for improving model accuracy. Gain insights into pipeline development, sanity checks, and the importance of relevant context in language modeling. This talk, part of the CSCI 601.771: Self-Supervised Learning course at Johns Hopkins University, offers valuable knowledge for researchers and practitioners in natural language processing and machine learning.
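To make the zero-shot paradigm concrete, here is a minimal sketch of prompting a multitask-prompted model with a task written down in natural language, using the Hugging Face transformers library. The `bigscience/T0_3B` checkpoint and the prompt text are illustrative assumptions, not taken from the lecture.

```python
# Minimal sketch: zero-shot inference by writing the task down as a prompt.
# Assumes the public bigscience/T0_3B checkpoint (an assumption for illustration).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The task is specified entirely in the prompt; no task-specific training is done.
prompt = "Is this review positive or negative? Review: The movie was a delight."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same model can be pointed at a different task simply by rewriting the prompt, which is the core appeal of multitask prompted training.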
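The parameter-efficient fine-tuning discussion can likewise be illustrated with a short sketch. The example below uses LoRA via the Hugging Face peft library; LoRA, the `t5-small` base model, and all hyperparameters are assumptions chosen for illustration, and the specific method covered in the talk may differ.

```python
# Minimal sketch: parameter-efficient fine-tuning with LoRA via the peft library.
# Model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,             # low-rank dimension (assumed value)
    lora_alpha=16,   # scaling factor (assumed value)
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

# Only the small injected LoRA matrices are trainable; the base model stays frozen.
model.print_trainable_parameters()
```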
Building Better Language Models - Paradigms and Techniques
Center for Language & Speech Processing (CLSP), JHU