What You Always Wanted to Know About Deep Learning, but Were Afraid to Ask

Syllabus:
Implementing Deep Learning using Neural Networks
Outputs
Inputs and Outputs in a Neural Network
Hidden Layer(s)
Weights and Biases
Calculating the Result of a Node (Forward Propagation) (see the sketch after this list)
Feeding the Result of a Node to an Activation Function
Categories of Activation Functions
Binary Step Function
Analogy
Use of Sigmoid Activation
Non-Linear Activation
Evaluating Performance
Cross Entropy
In Summary: Activation Function and Loss Function
Using an Optimizer
Back Propagation
A Walkthrough
Initializing the Weights
Significance of the Partial Differentials
Updating the Weights using Stochastic Gradient Descent
In Summary: Activation Function, Optimizer, and Loss Function
TensorFlow and Keras
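To make the outline concrete, here is a minimal sketch (not taken from the talk; all names and values are illustrative) of calculating the result of a node, i.e. a weighted sum of inputs plus a bias, and feeding that result to an activation function, contrasting the binary step function with sigmoid:

import numpy as np

def binary_step(z):
    # Fires 1 when the weighted sum reaches the threshold, else 0.
    return np.where(z >= 0.0, 1.0, 0.0)

def sigmoid(z):
    # Smooth, differentiable alternative that squashes z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # three input features
W = rng.normal(size=(4, 3))     # 4 hidden nodes, each with 3 input weights
b = np.zeros(4)                 # one bias per hidden node

z = W @ x + b                   # weighted sum at each node (forward propagation)
print("binary step:", binary_step(z))
print("sigmoid:    ", sigmoid(z))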
Description:
Explore the fundamentals of Deep Learning in this comprehensive 58-minute conference talk from NDC Conferences. Gain a clear understanding of key concepts such as back-propagation, gradient descent, loss functions, and optimizers, and learn how to train a deep learning model for object recognition through a practical example. Discover the inner workings of neural networks, including hidden layers, weights, biases, and activation functions, and see why performance is evaluated with cross-entropy and what role the optimizer plays in back-propagation. Delve into how weights are initialized and then updated using Stochastic Gradient Descent, and why the partial differentials matter (a sketch of this loop follows below). Get introduced to TensorFlow and Keras as tools for implementing deep learning models. Perfect for those looking to demystify AI and machine learning, this talk provides a solid foundation for beginners and a refresher for experienced practitioners in the field of Deep Learning.
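The training loop the description refers to can be sketched in a few lines. This is a hedged illustration, not the speaker's code: a single sigmoid node fitted with cross-entropy loss and Stochastic Gradient Descent, where the partial derivatives of the loss decide which way, and how far, each weight is nudged. The data and hyperparameters are made up for the example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))               # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a linearly separable target

w = rng.normal(scale=0.01, size=2)          # initialize weights to small values
b = 0.0
lr = 0.1                                    # learning rate

for epoch in range(5):
    for i in rng.permutation(len(X)):       # "stochastic": one sample at a time
        y_hat = sigmoid(X[i] @ w + b)
        # Cross-entropy loss: L = -(y*log(y_hat) + (1-y)*log(1-y_hat)).
        # With a sigmoid output, its partial derivatives simplify to:
        #   dL/dw = (y_hat - y) * x,   dL/db = (y_hat - y)
        grad = y_hat - y[i]
        w -= lr * grad * X[i]               # step against the gradient
        b -= lr * grad

y_pred = sigmoid(X @ w + b) > 0.5
print("training accuracy:", (y_pred == y.astype(bool)).mean())

Note how the sigmoid/cross-entropy pairing collapses the gradient to a simple (y_hat - y) term, which is one reason this combination of activation function and loss function is so common.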
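Finally, the same ideas expressed with the tools the talk introduces. This is a minimal TensorFlow/Keras sketch (an assumed setup, not the speaker's code; MNIST stands in for the talk's object-recognition example): hidden layers with non-linear activations, cross-entropy as the loss function, SGD as the optimizer, and back-propagation handled by fit().

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # inputs
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"), # one output per class
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="sparse_categorical_crossentropy",          # cross-entropy loss
    metrics=["accuracy"],
)

model.fit(x_train, y_train, epochs=5)    # back-propagation happens here
model.evaluate(x_test, y_test)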