1. Introduction
2. Challenges in Neural Networks
3. Robustness of Neural Networks
4. Outline
5. Conceptual Challenge
6. Computational Challenge
7. Goal
8. Background Knowledge
9. Compression Techniques
10. Compression Methods
11. CP Layer
12. Low-Rankedness
13. Reshaping
14. Generalization Error Bound
15. Performance
16. Evaluation
17. Interpreting Transformers
18. Operations in Tensor Diagrams
19. Benefits of Tensor Diagrams
20. Single-Head Self-Attention
21. Multi-Head Self-Attention
22. Multi-Head Modes
23. Recap
24. Improved Expressive Power
25. Tensor Representation for Robust Learning
26. Results
27. Summary
Description:
Explore a comprehensive lecture on leveraging tensor representations to understand, interpret, and design neural network models. Delve into the challenges of modern deep neural networks and learn how spectral methods using tensor decompositions can provide provable performance guarantees. Discover techniques for designing deep neural network architectures that ensure interpretability, expressive power, generalization, and robustness before training begins. Examine the use of spectral methods to create "desirable" deep model functions and guarantee optimal outcomes post-training. Investigate compression techniques, CP layers, and low-rankedness concepts. Analyze generalization error bounds and performance evaluations. Gain insights into interpreting transformers through tensor diagrams, exploring single- and multi-head self-attention mechanisms. Conclude with an overview of improved expressive power and tensor representation for robust learning, providing a comprehensive understanding of advanced neural network design and analysis techniques.

Understanding, Interpreting and Designing Neural Network Models Through Tensor Representations

Institute for Pure & Applied Mathematics (IPAM)