1. Intro
2. PRACTICAL CONSIDERATIONS FOR MACHINE LEARNING
3. CHALLENGES IN DEPLOYING LARGE-SCALE LEARNING
4. DECLARATIVE PROGRAMMING
5. MXNET: MIXED PROGRAMMING PARADIGM
6. WRITING PARALLEL PROGRAMS IS HARD
7. HIERARCHICAL PARAMETER SERVER IN MXNET
8. TENSORS, DEEP LEARNING & MXNET
9. TENSOR CONTRACTION AS A LAYER
10. Introducing Amazon AI
11. Rekognition: Object & Scene Detection
12. Rekognition: Facial Analysis
13. Polly: A Focus On Voice Quality & Pronunciation
Description:
Explore efficient distributed deep learning techniques using MXNet in this 45-minute lecture by Anima Anandkumar from UC Irvine. Delve into practical considerations for machine learning, challenges in deploying large-scale learning, and declarative programming. Discover MXNet's mixed programming paradigm and hierarchical parameter server. Examine tensor contraction as a layer and learn about Amazon AI services like Rekognition for object, scene, and facial analysis, as well as Polly for voice quality and pronunciation. Gain insights into computational challenges in machine learning and strategies for writing parallel programs in this comprehensive talk from the Simons Institute's Computational Challenges in Machine Learning series.

Efficient Distributed Deep Learning Using MXNet

Simons Institute