- Main experimental results & high standard deviations
- Why is there no clear winner?
- Why are bigger models not a lot better?
- What’s behind the name ChibiT?
- Why is iGPT underperforming?
- How are tokens distributed in Reinforcement Learning?
- What other domains could have good properties to transfer?
- A deeper dive into the models' attention patterns
- Codebase, model sizes, and compute requirements
- Scaling behavior of pre-trained models
- What did not work out in this project?
- How can people get started and where to go next?
Description:
Explore an in-depth interview with authors Machel Reid and Yutaro Yamada discussing their research on leveraging pre-trained language models for offline reinforcement learning. Delve into the experimental results, challenges, and insights gained from applying Wikipedia-trained models to control and game environments. Learn about the potential of transferring knowledge between generative modeling tasks in different domains, the impact on convergence speed and performance, and the implications for future research in reinforcement learning and sequence modeling. Gain valuable perspectives on model architectures, attention patterns, and computational requirements, along with practical advice for getting started in this emerging field.
Can Wikipedia Help Offline Reinforcement Learning? - Author Interview