Chapters:
1. Overview
2. A new type of learning
3. Quantizing a feed-forward neural network
4. Unitary quantum neural network
5. Two components of training
6. Barren plateaus in QNNs
7. The big tradeoff in QML
8. Circumventing barren plateaus
9. What does classical ML do?
10. Extended swap test
11. Learning thermal states
12. Generative algorithm for thermal state learning
13. Gradients for thermal state learning
14. Shallow algorithm
15. FT algorithm
16. Avoiding poor initializations
Description:
Explore a groundbreaking approach to training quantum neural networks (QNNs) in this 32-minute lecture by Maria Kieferova from the University of Technology Sydney. Delve into the challenges of barren plateaus in QNN development and discover how an unbounded loss function can overcome existing limitations. Learn about a novel training algorithm that minimizes the maximal Rényi divergence, along with techniques for gradient computation. Examine closed-form gradients for unitary QNNs and Quantum Boltzmann Machines, and understand the conditions for avoiding barren plateaus. Witness practical applications in thermal state learning and Hamiltonian learning, with numerical experiments demonstrating rapid convergence and high-fidelity results. Gain insights into quantizing feed-forward neural networks, the extended swap test, and strategies for avoiding poor initializations in this comprehensive exploration of cutting-edge quantum machine learning techniques.

Training Quantum Neural Networks with an Unbounded Loss Function - IPAM at UCLA

Institute for Pure & Applied Mathematics (IPAM)