Represent structure as latent variable model (LVM)
Posterior distribution as features
Mean field algorithm aggregates information
What is an embedding?
Learning via embedding
Embedding mean field
Directly parameterize nonlinear mapping
Embed belief propagation
New tools for algorithm design
Motivation 2: Dynamic processes over networks
Unroll: time-varying dependency structure
Embedding algorithm for building generative models
Scenario 3: Combinatorial optimization over graphs
Greedy algorithm as Markov decision process
Description:
Explore embedding algorithms as a powerful tool for algorithm design in this lecture from the Simons Institute's Computational Challenges in Machine Learning series. Delve into prediction for structured data, combinatorial optimization over graphs, and dynamic processes over networks. Learn how structures can be represented as latent variable models, how posterior distributions serve as features, and how mean field algorithms aggregate information across a graph. Discover what an embedding is, how learning can be carried out via embeddings, and how belief propagation can be embedded as well. Gain insight into the resulting new tools for algorithm design, including embedded mean field updates and directly parameterized nonlinear mappings. Examine how unrolling captures time-varying dependency structures and how embedding algorithms can be used to build generative models of dynamic processes. Finally, investigate how greedy algorithms can be viewed as Markov decision processes for combinatorial optimization over graphs.
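To make the "embedding mean field" idea above concrete, here is a minimal sketch in the spirit of structure2vec-style models: instead of iterating posterior marginals, each node iterates a learned embedding computed from its own features and its neighbors' embeddings, and the final embeddings are used as features for prediction. The shapes, weight names, and nonlinearity below are illustrative assumptions, not the lecture's exact parameterization.

```python
import numpy as np

def embedded_mean_field(adj, X, W1, W2, T=4):
    """Iterate structure2vec-style embedding updates for T rounds.

    adj : (n, n) 0/1 adjacency matrix of the input graph
    X   : (n, d_in) node features
    W1  : (d_in, d) weight on a node's own features (assumed shape)
    W2  : (d, d) weight on the summed neighbor embeddings (assumed shape)
    """
    mu = np.zeros((X.shape[0], W1.shape[1]))  # embeddings start at zero
    for _ in range(T):                        # like T mean-field sweeps
        agg = adj @ mu                        # aggregate neighbor embeddings
        mu = np.tanh(X @ W1 + agg @ W2)       # nonlinear per-node update
    return mu                                 # posterior-like features

# Tiny usage example: a 3-node path graph with random weights.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = rng.normal(size=(3, 2))
mu = embedded_mean_field(adj, X, rng.normal(size=(2, 4)), rng.normal(size=(4, 4)))
print(mu.shape)  # (3, 4): one embedding per node
```

The update mirrors classical mean field, where each marginal is recomputed from its neighbors until a fixed point, q_i(x_i) ∝ exp( log φ_i(x_i) + Σ_{j∈N(i)} E_{q_j}[log φ_ij(x_i, x_j)] ); here the hand-derived update is replaced by a parameterized one that can be trained end to end. The same recipe carries over to embedding belief propagation, with learned messages indexed by edges rather than nodes.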
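The final chapter's framing, casting greedy construction as a Markov decision process, can also be sketched briefly: the state is the partial solution, an action adds one element, and the reward is the marginal change in the objective, so the greedy scoring rule becomes a value function that could be learned (for example, by Q-learning over graph embeddings). The max-cut-style gain function and all names below are hypothetical stand-ins for illustration.

```python
import numpy as np

def greedy_mdp(adj, score, steps):
    """Greedy construction viewed as a Markov decision process.

    State  : the current partial solution S (set of chosen nodes).
    Action : add one not-yet-chosen node.
    Reward : the marginal gain score(adj, S, v); in the learned setting
             this would be a Q-function over graph embeddings rather
             than a hand-written heuristic.
    """
    S = set()
    for _ in range(steps):
        candidates = [v for v in range(adj.shape[0]) if v not in S]
        if not candidates:
            break
        v = max(candidates, key=lambda u: score(adj, S, u))  # greedy policy
        S.add(v)
    return S

def cut_gain(adj, S, v):
    """Hypothetical reward: marginal cut weight from moving v into S."""
    others = [u for u in range(adj.shape[0]) if u != v]
    return sum(adj[v, u] for u in others if u not in S) - sum(adj[v, u] for u in S)

adj = np.array([[0., 1., 1., 0.],
                [1., 0., 1., 1.],
                [1., 1., 0., 1.],
                [0., 1., 1., 0.]])
print(greedy_mdp(adj, cut_gain, steps=2))  # picks the two highest-gain nodes
```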
Explore embedding algorithms as a powerful tool for algorithm design in this lecture from the Simons Institute's Computational Challenges in Machine Learning series. Delve into prediction for structured data, combinatorial optimizations over graphs, and dynamic processes over networks. Learn about representing structures as latent variable models, using posterior distributions as features, and applying mean field algorithms for information aggregation. Discover the concept of embedding, its applications in learning, and how to embed belief propagation. Gain insights into new tools for algorithm design, including embedding mean field and directly parameterizing nonlinear mapping. Examine the process of unrolling time-varying dependency structures and building generative models through embedding algorithms. Finally, investigate how greedy algorithms can be viewed as Markov decision processes in combinatorial optimization over graphs.