Syllabus:
1. Intro
2. Big Picture
3. Labels
4. Why do we use HW?
5. Why do we care about imbalanced data?
6. What to do?
7. Random under sampling
8. Random oversampling with replacement
9. Experiments
10. Dataset 1
11. Dataset 2
12. Data sampling techniques
13. Further results
14. Evaluation metrics
15. SR/GE vs acc
16. Take away
Description:
Explore the challenges of class imbalance and conflicting metrics in machine learning for side-channel evaluation in this 21-minute conference talk presented at the Cryptographic Hardware and Embedded Systems Conference 2019. Delve into the importance of Hamming Weight (HW) and the impact of imbalanced data on machine learning models. Learn about various data sampling techniques, including random under sampling and random oversampling with replacement. Examine experimental results from two datasets and understand the implications of different evaluation metrics, particularly the relationship between Success Rate/Guessing Entropy (SR/GE) and accuracy. Gain valuable insights and takeaways for improving machine learning approaches in side-channel analysis.
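The description touches on two of the talk's core technical points: Hamming-weight (HW) labels of an 8-bit intermediate follow a binomial distribution, so most traces land in the middle classes, and random oversampling with replacement is one way to rebalance the training set. The Python sketch below is not taken from the talk; the synthetic "traces", the uniform 8-bit intermediates, and the 9-class HW labeling are assumptions used only to illustrate the imbalance and the resampling step.

import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(x: int) -> int:
    """Number of set bits in an 8-bit intermediate value (9 possible classes: 0..8)."""
    return bin(x).count("1")

# Uniformly random 8-bit intermediates (e.g. S-box outputs) still give a binomial
# HW distribution: class 4 is about 70x more frequent than class 0 or 8.
intermediates = rng.integers(0, 256, size=10_000)
labels = np.array([hamming_weight(v) for v in intermediates])
traces = rng.normal(size=(10_000, 50))          # placeholder "traces", not real measurements

print("class counts before:", np.bincount(labels, minlength=9))

# Random oversampling with replacement: resample every minority class
# until all 9 HW classes have as many examples as the majority class.
max_count = np.bincount(labels, minlength=9).max()
idx_balanced = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=max_count, replace=True)
    for c in range(9)
])
traces_bal, labels_bal = traces[idx_balanced], labels[idx_balanced]

print("class counts after: ", np.bincount(labels_bal, minlength=9))

Random undersampling works the other way around, discarding majority-class traces down to the size of the rarest class; the talk compares both strategies and then evaluates them with accuracy versus success rate and guessing entropy.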

The Curse of Class Imbalance and Conflicting Metrics with Machine Learning for Side-channel Evaluation

TheIACR