1 - Tutorial Introduction
2 - Why Is LIME Needed?
3 - The Need for a Surrogate Model
4 - LIME Properties
5 - LIME Is Not Feature Importance
6 - Explaining Image Classification
7 - Another LIME-Based Explanation
8 - Explaining Tabular Data Classification
9 - Two Types of Explanations
10 - What Is in the Notebook Exercises?
11 - First Notebook: Original LIME Explanation
12 - Loading the Inception V3 Model
13 - Installing the LIME Library
14 - The LIME Explainer Module
15 - Creating the LIME Explanation Model
16 - Creating a Superpixel Image
17 - Showing Pros and Cons in the Image
18 - Showing Pros and Cons with Weights Above 0.1
19 - Analyzing the Second Prediction
20 - Custom LIME Implementation
21 - Loading the EfficientNet Model
22 - Loading the LIME Class from the Custom Implementation
23 - LIME Explanation Results
24 - Loading the ResNet50 Model
25 - LIME Explanations
26 - Step-by-Step Custom Explanations
27 - Comparing Explanations
28 - Saving Notebooks to GitHub
29 - Recap
Description:
Explore LIME (Local Interpretable Model-agnostic Explanations) for explaining, trusting, and validating predictions from any machine learning model in this hands-on tutorial. Learn to implement LIME in your ML pipeline through two Jupyter notebooks: one demonstrating LIME explanations with Inception V3 image classification, and another showcasing custom LIME implementation. Discover how to create model explanations for supervised predictions, compare different models, and gain insights into the decision-making process of various algorithms. Dive into topics such as surrogate models, LIME properties, image and tabular data classification explanations, and step-by-step custom explanations using popular models like Inception V3, EfficientNet, and ResNet50.
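The core idea behind the custom implementation covered in the tutorial can be sketched in a few lines: perturb the input by hiding random subsets of superpixels, query the black-box model on each perturbed sample, weight samples by their proximity to the original image, and fit a weighted linear surrogate whose coefficients rank the superpixels. The sketch below is a minimal, self-contained illustration of that loop; the `black_box_predict` function is a hypothetical stand-in for a real classifier such as Inception V3, and the superpixel count and kernel width are illustrative choices, not values from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUPERPIXELS = 6  # illustrative; a real image would have many more

def black_box_predict(masks):
    """Hypothetical stand-in for a real model (e.g. Inception V3).

    masks: (n_samples, N_SUPERPIXELS) binary array; 1 keeps a superpixel
    visible, 0 hides it. Here the class probability is driven mostly by
    superpixels 0 and 3, so a good explanation should recover exactly those.
    """
    logits = 2.0 * masks[:, 0] + 1.5 * masks[:, 3] - 0.5
    return 1.0 / (1.0 + np.exp(-logits))

# 1. Perturb: randomly switch superpixels on/off around the original image.
samples = rng.integers(0, 2, size=(1000, N_SUPERPIXELS)).astype(float)
preds = black_box_predict(samples)

# 2. Weight each sample by its proximity to the original (all-ones) mask
#    using an exponential kernel on the fraction of hidden superpixels.
distances = 1.0 - samples.mean(axis=1)
weights = np.exp(-(distances ** 2) / 0.25)

# 3. Fit the weighted linear surrogate: solve (sqrt(w) X) beta = sqrt(w) y.
X = np.hstack([samples, np.ones((len(samples), 1))])  # add intercept column
sw = np.sqrt(weights)[:, None]
beta, *_ = np.linalg.lstsq(sw * X, (sw[:, 0] * preds), rcond=None)

coeffs = beta[:-1]  # one weight per superpixel; sign gives pro/con
top = sorted(np.argsort(-np.abs(coeffs))[:2].tolist())
print("most influential superpixels:", top)
```

Filtering `coeffs` by magnitude (e.g. keeping only weights above 0.1, as in the tutorial's pros-and-cons chapter) and mapping the surviving superpixels back onto the image yields the familiar LIME overlay.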

Apply LIME to Explain, Trust, and Validate Your Predictions for Any ML Model

Prodramp