Syllabus:
1. Intro
2. Complex networks are ubiquitous
3. Working with incomplete data can skew analyses
4. The network discovery question
5. A more accurate representation
6. Some issues that make the problem difficult
7. Selective harvesting via reinforcement learning
8. Policy function
9. Modeling future reward: Return function
10. Value function (the standard definitions of return and value are sketched just after this outline)
11. What are current approaches missing?
12. State space representation
13. Map network states into canonical representations
14. Training set generation for offline learning
15. Episodic training
16. The learning objective
17. Our model: Network Actor Critic (NAC)
18. Experiments: Baselines & competitors
19. Experiments: Results on real data
20. Which graph embedding to choose?
21. Wrap-up: Network Actor Critic (NAC)
22. Control of pandemics
23. Problem and high-level overview of our system: COANET
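For reference, items 9 and 10 above name the standard reinforcement-learning quantities; in the usual textbook notation (nothing here is NAC-specific), the discounted return from time t and the value of a state under policy π are:

```latex
% Discounted return, with discount factor \gamma \in [0, 1) and
% r_{t+k+1} the reward observed k steps after time t, and the
% value of state s under policy \pi.
G_t = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1},
\qquad
V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\, G_t \mid s_t = s \,\right]
```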
Description:
Explore task-driven network discovery through deep reinforcement learning in this 28-minute lecture from the Deep Learning and Combinatorial Optimization 2021 conference. Delve into the challenges of working with incomplete network data and learn how to improve network observability for more accurate analyses. Discover the Network Actor Critic (NAC) framework, which utilizes task-specific network embeddings to reduce state space complexity and learn offline policies for network discovery. Examine the performance of NAC compared to competitive online-discovery algorithms and understand the importance of planning in addressing sparse and changing reward signals. Gain insights into selective harvesting, policy functions, and modeling future rewards in network analysis. Explore real-world applications, including the control of pandemics, and get an overview of the COANET system for tackling complex network problems.
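To make the actor-critic idea in the description concrete, here is a minimal, hypothetical sketch of a policy/value network that scores frontier nodes from embedded network states. It is not the NAC implementation from the talk: the embedding dimension, module names, and the way candidates are conditioned on the state are all illustrative assumptions.

```python
# Minimal actor-critic sketch over embedded network states (illustrative only;
# not the NAC code from the lecture). Assumes some upstream procedure maps the
# currently observed subgraph to a fixed-size "state" embedding and each
# frontier (candidate) node to its own embedding.
import torch
import torch.nn as nn

EMBED_DIM = 64  # assumed embedding size


class ActorCritic(nn.Module):
    def __init__(self, embed_dim: int = EMBED_DIM, hidden: int = 128):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU())
        self.actor_head = nn.Linear(hidden, 1)   # one logit per candidate node
        self.critic_head = nn.Linear(hidden, 1)  # scalar state value V(s)

    def forward(self, state_embed, candidate_embeds):
        # state_embed: (embed_dim,) canonical embedding of the observed network
        # candidate_embeds: (num_candidates, embed_dim) frontier-node embeddings
        value = self.critic_head(self.encode(state_embed))
        # Condition each candidate on the current state before scoring it.
        logits = self.actor_head(self.encode(candidate_embeds + state_embed))
        return torch.distributions.Categorical(logits=logits.squeeze(-1)), value


if __name__ == "__main__":
    model = ActorCritic()
    state = torch.randn(EMBED_DIM)
    candidates = torch.randn(10, EMBED_DIM)  # 10 frontier nodes
    policy, value = model(state, candidates)
    action = policy.sample()  # index of the node to probe next
    print(action.item(), value.item())
```

In an episodic training loop of the kind the syllabus outlines, the sampled action's log-probability and the critic's value estimate would feed a standard advantage-based policy-gradient loss.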

Task-Driven Network Discovery via Deep Reinforcement Learning on Embedded Spaces

Institute for Pure & Applied Mathematics (IPAM)