Re-thinking Transformers: Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices
Description:
Explore cutting-edge research on efficient alternatives to dense linear layers in large neural networks in this 42-minute lecture by Andrew Gordon Wilson of New York University. Delve into a unifying framework that enables searching among all linear operators expressible via Einstein summation, which encompasses previously proposed structures and introduces novel ones. Examine a taxonomy of these operators based on their computational and algebraic properties, and see how those properties shape their scaling laws. Discover the subset of structures that outperform dense layers in training compute efficiency. Learn how these structures extend naturally into sparse mixture-of-experts layers, significantly improving compute-optimal training efficiency for large language models. Gain perspective on the future of Transformers and efficient linear layers in machine learning.
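To make the "linear operators expressible via Einstein summation" idea concrete, here is a minimal sketch (not code from the lecture or the underlying paper): it expresses one point in that search space, a Kronecker-product weight, as two einsums, so the structured matrix is never materialized. The class name KroneckerLinear and the factor sizes a1, a2, b1, b2 are illustrative assumptions.

    import torch
    import torch.nn as nn

    class KroneckerLinear(nn.Module):
        """Linear layer whose (a1*b1) x (a2*b2) weight is the Kronecker product A (x) B."""
        def __init__(self, a1, a2, b1, b2):
            super().__init__()
            self.dims = (a1, a2, b1, b2)
            self.A = nn.Parameter(torch.randn(a1, a2) / a2 ** 0.5)
            self.B = nn.Parameter(torch.randn(b1, b2) / b2 ** 0.5)

        def forward(self, x):
            a1, a2, b1, b2 = self.dims
            x = x.view(-1, a2, b2)                      # factor the input index into (j, l)
            h = torch.einsum('kl,bjl->bjk', self.B, x)  # contract B against one factor
            y = torch.einsum('ij,bjk->bik', self.A, h)  # contract A against the other
            return y.reshape(-1, a1 * b1)

    # With a1 = a2 = b1 = b2 = sqrt(d), the two einsums cost about 2 * d**1.5
    # multiply-adds per token, versus d**2 for an equivalent dense layer.
    layer = KroneckerLinear(a1=32, a2=32, b1=32, b2=32)  # 1024 -> 1024
    x = torch.randn(8, 1024)
    y = layer(x)

    # Sanity check against the materialized dense weight.
    W = torch.kron(layer.A, layer.B)                     # (1024, 1024)
    assert torch.allclose(y, x @ W.T, atol=1e-4)

The framework in the lecture generalizes this pattern: low-rank, Tensor-Train, and Monarch-style layers all arise from different choices of how the input and output indices are split and shared among einsum factors, and the search ranges over that continuous space.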

Simons Institute