Outline:
1. Intro
2. Imperfect data acquisition
3. Statistical-computational gap
4. Prior art
5. A nonconvex least squares formulation
6. Gradient descent (GD) with random initialization?
7. A negative conjecture
8. Our proposal: a two-stage nonconvex algorithm
9. Rationale of the two-stage approach
10. A bit more detail about initialization
11. Assumptions
12. Numerical experiments
13. No need for sample splitting
14. Key proof ideas: leave-one-out decoupling
15. Distributional theory
16. Back to estimation
Description:
Explore the effectiveness of nonconvex optimization for noisy tensor completion in this 33-minute conference talk from the Tensor Methods and Emerging Applications to the Physical and Data Sciences 2021 workshop. Delve into Yuxin Chen's presentation on a two-stage nonconvex algorithm that addresses the high-volatility issue in sample-starved regimes, enabling linear convergence, minimal sample complexity, and minimax statistical accuracy. Learn about the characterization of the nonconvex estimator's distribution and its application in constructing entrywise confidence intervals for unseen tensor entries and unknown tensor factors. Gain insights into the role of statistical models in facilitating efficient and guaranteed nonconvex statistical learning, covering topics such as imperfect data acquisition, statistical-computational gaps, gradient descent challenges, and key proof ideas like leave-one-out decoupling.
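The description summarizes the approach at a high level: a nonconvex least-squares formulation solved by gradient descent after a careful initialization. Below is a minimal illustrative sketch of that generic two-stage recipe (a spectral-style initialization followed by gradient descent on the observed-entry squared loss) for rank-r CP tensor completion in NumPy. It is not the speaker's exact algorithm; the function names, rank, step size, and iteration count are hypothetical choices made only for illustration.

```python
# Illustrative sketch only (assumed names and parameters, not the talk's method):
# two-stage nonconvex tensor completion = crude spectral initialization + gradient
# descent on the nonconvex least-squares loss over observed entries.
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a third-order tensor from CP factors A (n1 x r), B (n2 x r), C (n3 x r)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def spectral_init(T_obs, mask, rank, p_hat):
    """Crude stand-in for spectral initialization: unfold the inverse-probability
    rescaled observed tensor along each mode and keep the top-`rank` directions."""
    T_fill = np.where(mask, T_obs, 0.0) / p_hat
    factors = []
    for mode in range(3):
        n = T_fill.shape[mode]
        unfold = np.moveaxis(T_fill, mode, 0).reshape(n, -1)
        U, s, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :rank] * np.sqrt(s[:rank]))
    return factors

def cp_completion_gd(T_obs, mask, rank, steps=500, lr=1e-3):
    """Gradient descent on f(A,B,C) = 0.5 * || mask * (cp(A,B,C) - T_obs) ||_F^2.
    The step size `lr` would need tuning in practice."""
    p_hat = mask.mean()
    A, B, C = spectral_init(T_obs, mask, rank, p_hat)
    for _ in range(steps):
        R = mask * (cp_reconstruct(A, B, C) - T_obs)   # residual on observed entries
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
    return A, B, C
```

On a small synthetic low-rank tensor this converges for a suitably small step size; the initialization here is only a rough stand-in for the more careful initialization stage discussed in the talk, whose role is precisely to avoid the high-volatility behavior of randomly initialized gradient descent in sample-starved regimes.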

The Effectiveness of Nonconvex Tensor Completion - Fast Convergence and Uncertainty Quantification

Institute for Pure & Applied Mathematics (IPAM)