1. Lecture starts
2. Best rank-k approximation recap
3. PCA vs. SVD
4. Random projection motivation
5. Johnson-Lindenstrauss lemma
6. Minimum k for which random projections are designed
7. Random projection algorithm
8. Why does this work?
9. Compactly written version
10. Lecture ends
Description:
Learn about random projection techniques in data mining through this 38-minute lecture that explores the relationship between Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), before diving into random projection methods. Understand the mathematical foundations starting with best rank-k approximations, then progress to the Johnson-Lindenstrauss lemma and its implications for dimensionality reduction. Discover how to determine the minimum k value for random projections, examine the algorithm's implementation, and grasp the theoretical underpinnings that make this technique effective. Conclude with a concise mathematical formulation that ties all concepts together.
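The two ingredients the lecture covers, choosing the minimum target dimension k from the Johnson-Lindenstrauss lemma and projecting with a scaled Gaussian random matrix, can be sketched in NumPy as follows. This is an illustrative sketch, not the lecture's own code; the exact constant in the JL bound varies between sources, and the form used here is one common choice.

```python
import numpy as np

def jl_min_dim(n_points, eps):
    # One common form of the Johnson-Lindenstrauss bound (constants vary
    # by source): a target dimension k of at least
    #   4 * ln(n) / (eps^2/2 - eps^3/3)
    # preserves all pairwise distances among n points to within a
    # (1 +/- eps) factor with high probability.
    return int(np.ceil(4 * np.log(n_points) / (eps**2 / 2 - eps**3 / 3)))

def random_projection(X, k, seed=None):
    # Project the rows of X (shape n x d) down to k dimensions using a
    # d x k matrix of i.i.d. standard normal entries, scaled by 1/sqrt(k)
    # so that squared distances are preserved in expectation.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R
```

A quick usage example: for 50 points in 1,000 dimensions with eps = 0.5, `jl_min_dim` gives a k in the low hundreds, independent of the original dimension d, which is the key practical advantage over PCA when d is very large.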

Random Projection in Data Mining - Spring 2023

UofU Data Science