Explore a Stanford University seminar on petascale deep learning processor architecture, featuring Tapabrata Ghosh of Vathys.ai. Delve into strategies for reducing data movement and improving efficiency in deep learning processors, including circuit-level innovations. Learn about Vathys' approach to addressing computational bottlenecks, its scalability to next-generation DL models, and the speaker's background in developing performant, power-efficient deep learning processors. Gain insights into Moore's Law, obstacles in chip design, architectural solutions, new memory technologies, wireless links, thermal management, and comparisons with other approaches such as Google's TPU v1, TPU v2, and analog designs.