1. Intro
2. The Concerns of Opaque Deep Learning Models
3. Existing Explanation Techniques & Limitations
4. One Example of Model Explanation (LIME, KDD'16)
5. Limitations of Existing Explanation Techniques
6. LEMNA: Local Explanation Method using Nonlinear Approximation
7. Supporting Locally Non-linear Decision Boundaries
8. Modeling the Feature Dependency: Mixture Regression Model with Fused Lasso
9. Deriving an Explanation from DNN with LEMNA
10. Explanation Accuracy Evaluation
11. Demonstration of LEMNA in Identifying Binary Function Start
12. Building Trust in the Target Models
13. Troubleshooting and Patching Model Errors
Description:
Explore a 21-minute conference talk that delves into LEMNA, a novel approach for explaining deep learning-based security applications. Learn about the challenges of opaque deep learning models in security-critical domains and the limitations of existing explanation techniques. Discover how LEMNA addresses these issues by supporting locally non-linear decision boundaries and modeling feature dependency. Gain insights into deriving explanations from deep neural networks, evaluating explanation accuracy, and practical applications such as identifying binary function starts. Understand how LEMNA contributes to building trust in target models and aids in troubleshooting and patching model errors, ultimately enhancing the transparency and reliability of deep learning in security contexts.
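
The approach summarized above follows the general local-surrogate recipe: perturb an input, query the opaque model on the perturbed samples, and fit an interpretable regression whose largest coefficients serve as the explanation. The sketch below illustrates only that generic workflow; the `black_box_predict` placeholder, the Gaussian perturbation scheme, and the use of scikit-learn's `Lasso` as a stand-in for LEMNA's mixture regression model with fused lasso are simplifying assumptions for illustration, not the talk's actual implementation.

```python
# Minimal sketch of a local-surrogate explanation (illustrative only).
# Assumptions: `black_box_predict` stands in for the opaque deep model, and
# scikit-learn's Lasso replaces LEMNA's mixture regression with fused lasso,
# which the talk uses to handle feature dependency and local non-linearity.
import numpy as np
from sklearn.linear_model import Lasso


def black_box_predict(X):
    # Placeholder for the target model's probability output (hypothetical).
    return 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1] - 0.5)))


def explain_instance(x, n_samples=500, sigma=0.1, alpha=0.01, top_k=5):
    """Fit a sparse linear surrogate around instance x and return the
    features with the largest coefficients as the 'explanation'."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance to sample its local neighbourhood.
    X_local = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y_local = black_box_predict(X_local)
    # 3. Fit a sparse surrogate (stand-in for mixture regression + fused lasso).
    surrogate = Lasso(alpha=alpha).fit(X_local, y_local)
    # 4. Rank features by absolute coefficient magnitude.
    order = np.argsort(-np.abs(surrogate.coef_))
    return [(int(i), float(surrogate.coef_[i])) for i in order[:top_k]]


if __name__ == "__main__":
    x = np.array([0.8, 0.6, 0.1, 0.0])
    print(explain_instance(x))
```

The fused lasso component that LEMNA adds on top of this basic scheme penalizes differences between coefficients of adjacent features, which is what lets it model dependencies between neighbouring inputs (for example, consecutive bytes when identifying binary function starts); the plain Lasso above does not capture that and is used here only to keep the sketch self-contained.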

LEMNA - Explaining Deep Learning Based Security Applications

Association for Computing Machinery (ACM)