1. Introduction
2. Hyperparameter Optimization
3. Automatic Model Selection
4. Program Induction
5. Design Considerations
6. Proof
7. Framework
8. Optimization-Based Meta-Learning
9. Objective Functions
10. Preventing Overfitting
11. Forward Dynamics
12. Uncertainty
13. Forward Dynamics: Stochastic
14. Original Formulation
15. New Formulation
16. Expectation
17. Setting
18. Example
19. Quick Question
20. Update Formula
21. Neural Nets
22. Experiments
23. Parameters
24. Intervals
25. Gradients
26. Illustration
27. Outputs
28. Empirical Learning
29. Correct Yourself
30. Speed Up
Description:
Explore the challenges and potential solutions in meta-learning through this comprehensive seminar on theoretical machine learning. Delve into hyperparameter optimization, automatic model selection, and program induction as Ke Li, a Member of the School of Mathematics at the Institute for Advanced Study, presents "Meta-Learning: Why It's Hard and What We Can Do." Gain insights into design considerations, proof frameworks, and optimization-based meta-learning. Examine objective functions, methods for preventing overfitting, and forward dynamics in both deterministic and stochastic settings. Understand the original and new formulations of meta-learning problems, with practical examples and illustrations. Investigate the role of neural networks, parameter intervals, and gradients in meta-learning experiments. Learn about empirical learning techniques and strategies to improve and accelerate the learning process.

Meta-Learning - Why It’s Hard and What We Can Do - Ke Li

Institute for Advanced Study