1. Introduction
2. The Hype
3. Example A: Google Translate
4. Self-driving cars
5. Computers are stupid
6. Adversarial examples
7. Biometrics
8. Malicious intent
9. Bias
10. Privacy
11. Linear regression
12. How to solve this problem
13. Access to API
14. Homomorphic Encryption
15. Interdisciplinary Panels
16. Paper Distribution
Description:
Explore the critical aspects of AI security and ethics in this thought-provoking conference talk from GOTO Berlin 2018. Delve into the field of adversarial learning, examining how easily artificial intelligence can be fooled and the challenges in creating robust, secure neural networks. Investigate the potential risks machine learning poses to data privacy and ethical data use, including the implications of GDPR. Learn about the hype surrounding AI and its real-world applications, from Google Translate to self-driving cars. Discover why computers can be considered "stupid" and how this impacts AI development. Examine adversarial examples in biometrics and the potential for malicious intent. Address issues of bias in AI systems and privacy concerns in linear regression models. Explore potential solutions, including API access control and homomorphic encryption. Consider the importance of interdisciplinary panels and paper distribution in advancing AI security and ethics. Gain valuable insights into the complexities of AI development and the ongoing efforts to protect it from vulnerabilities and misuse.
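The talk itself contains no code, but the adversarial-example idea the description refers to can be illustrated with a short sketch. Below is a minimal, hypothetical example of the fast gradient sign method (FGSM) in PyTorch; the toy model, tensor shapes, and the fgsm_perturb helper are placeholders chosen for illustration, not material from the talk.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases the
# classifier's loss, producing a small perturbation that can change the
# predicted class. Model and input below are toy stand-ins.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x shifted by +/- epsilon along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in classifier over 3x32x32 "images" with 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # a random input image
    label = torch.tensor([7])      # its (pretend) true class
    x_adv = fgsm_perturb(model, x, label)
    print((x_adv - x).abs().max()) # perturbation stays within epsilon
```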

Computers are Stupid - Protecting "AI" from Itself

GOTO Conferences