Explore a new approach to tensor compilation for deep learning in this 15-minute conference talk from OSDI '22. Delve into ROLLER, a system that reduces kernel generation time from hours to seconds while maintaining competitive performance. Learn about the rTile abstraction and the recursive construction algorithm that enable efficient execution across a range of accelerators, including GPUs and IPUs. Discover how ROLLER's white-box approach sidesteps the excessive compilation times and large search spaces of existing DNN compilers. Examine the system's performance on different hardware, its ability to handle small and irregular tensor shapes, and its impact on end-to-end pipeline throughput. Gain insights into how faster compilation can shorten deep learning development cycles and ease custom kernel creation for new hardware vendors.
ROLLER - Fast and Efficient Tensor Compilation for Deep Learning
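To make the rTile idea mentioned in the description more concrete, here is a toy sketch of the underlying intuition: build kernels from tile shapes that are aligned with hardware units, and pick the tile that maximizes data reuse within an on-chip memory budget instead of searching a huge tuning space. This is an illustrative assumption, not ROLLER's actual algorithm; the names (`aligned_candidates`, `pick_rtile`, the `align` and `smem_budget` parameters) and the reuse metric are all hypothetical.

```python
# Toy sketch (NOT ROLLER's real implementation): choose a matmul tile (tm, tn)
# whose dimensions are multiples of an assumed hardware alignment unit and
# whose operand footprint fits an assumed shared-memory budget, maximizing
# compute-to-memory-traffic ratio (data reuse per loaded element).

def aligned_candidates(dim, align=4, limit=64):
    """Tile sizes that are multiples of the alignment unit and divide dim."""
    return [t for t in range(align, min(dim, limit) + 1, align) if dim % t == 0]

def pick_rtile(m, n, k, smem_budget=4096):
    """Greedily pick the (tm, tn) tile with the best reuse ratio that fits."""
    best, best_ratio = None, 0.0
    for tm in aligned_candidates(m):
        for tn in aligned_candidates(n):
            footprint = tm * k + k * tn   # elements loaded per tile
            compute = tm * tn * k         # multiply-adds per tile
            if footprint <= smem_budget:
                ratio = compute / footprint
                if ratio > best_ratio:
                    best, best_ratio = (tm, tn), ratio
    return best

print(pick_rtile(128, 128, 64))  # → (32, 32)
```

Because every candidate is hardware-aligned and the choice is made by a simple analytical rule, the "search" is a fast deterministic pass rather than hours of profiling, which is the spirit of the seconds-not-hours claim in the talk.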