1. Introduction
2. Limitations of machine learning
3. Systematization
4. Learning theory
5. Conscious processing
6. Agency
7. System 1 vs System 2
8. The Kind of Knowledge
9. Knowledge Representation
10. Attention
11. Recurrent Independent Mechanisms
12. Global Workspace Theory
13. Attention Mechanisms
14. Causality
15. Independent Mechanism Hypothesis
16. Localized Changes
17. Parameterization
18. Multitask learning
19. Modular recurrent net
Description:
Explore the frontiers of machine learning in this seminar on Theoretical Machine Learning, featuring renowned researcher Yoshua Bengio from Université de Montréal. Delve into the concept of priors for semantic variables and examine the limitations of current machine learning approaches. Gain insights into systematization, learning theory, and the role of conscious processing in AI. Investigate the differences between System 1 and System 2 thinking, and explore knowledge representation, attention mechanisms, and the Global Workspace Theory. Discover the importance of causality, the Independent Mechanism Hypothesis, and localized changes in AI systems. Learn about parameterization, multitask learning, and modular recurrent networks as Bengio shares cutting-edge ideas on advancing machine learning capabilities.

Priors for Semantic Variables - Yoshua Bengio

Institute for Advanced Study