1. Intro
2. Inverse Problems in Imaging
3. ML Methods for MR Reconstruction
4. Key Observations & Current Challenges
5. Motivation: Can we significantly reduce the large paired training dataset requirement for
6. Self-Training in Natural Language Processing
7. Self-Training for MRI Reconstruction
8. Untrained Neural Networks (Deep Image Prior)
9. Untrained Neural Networks (ConvDecoder)
10. Key Observations & Ongoing Work
11. We know how to simulate motion
12. Standardization of ML pipelines matters
13. Self-supervised learning methods trained in-domain can learn good image-level representations for MR images
Description:
Explore advanced techniques for MR reconstruction in this 47-minute Stanford University lecture by PhD student Beliz Gunel. Dive into the use of untrained neural networks as image priors for solving inverse problems, and discover how ConvDecoder can generate weakly-labeled data from undersampled MR scans. Learn about a novel approach combining few supervised pairs with weakly supervised pairs to train an unrolled neural network, achieving strong reconstruction performance with fast inference time. Compare this method to supervised and self-training baselines in limited data scenarios, and gain insights into applying self-training in natural language understanding. Examine key observations, current challenges, and ongoing work in MR reconstruction, including motion simulation and the importance of standardizing ML pipelines.

Self-Training: Weak Supervision Using Untrained Neural Nets for MR Reconstruction - Beliz Gunel

Stanford University