1. Introduction
2. AI has great potential
3. Is AI risk-free?
4. Ethical problems
5. Explainable AI
6. Security and privacy vulnerabilities
7. Trustworthy AI
8. Linux Validation AI
9. How to achieve Trustworthy AI
10. Exercise
11. Getting familiar with the ecosystem
12. Project categories
13. Task definition
14. Task explanation
15. Robustness adversarial evaluation
16. Notebook
17. Neural network
18. Results
19. Adversarial example
Description:
Explore the concept of Trustworthy AI and its implementation using open source tools in this informative conference talk. Delve into the rapid adoption of AI in various aspects of life and the growing need for mature, trustworthy AI systems. Examine the efforts of research communities and governmental bodies in defining guidelines and principles for Responsible AI, Ethical AI, and Trustworthy AI. Learn about the Trusted AI ecosystem and witness demonstrations of existing open source projects, including Adversarial Robustness Toolbox, AI Fairness, and AI Explainability. Discover how to leverage these tools to incorporate accountability into your machine learning lifecycle and enhance the trustworthiness of your AI systems. Gain insights into AI's potential, ethical considerations, explainable AI, security vulnerabilities, and the Linux Validation AI framework. Participate in hands-on exercises to familiarize yourself with the ecosystem, project categories, and task definitions. Explore neural networks, adversarial examples, and robustness evaluation techniques to build more reliable and ethical AI solutions.
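The adversarial-example idea the talk demonstrates with the Adversarial Robustness Toolbox can be sketched in plain NumPy. This is a minimal illustration of the Fast Gradient Sign Method (FGSM) against a tiny logistic-regression "model"; the weights, input, and epsilon below are invented for illustration and are not from the talk or from ART itself:

```python
import numpy as np

# Toy logistic-regression model: sigmoid(w @ x + b).
# All numbers here are illustrative, not taken from the talk.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0])  # illustrative weights
b = 0.1
x = np.array([0.6, 0.4])   # clean input, classified as class 1
y = 1.0                    # true label

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the binary cross-entropy loss w.r.t. the input x
# simplifies to (p - y) * w for this model.
grad_x = (predict(x) - y) * w

# FGSM: step in the direction of the loss gradient's sign.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # > 0.5: clean input is classified as class 1
print(predict(x_adv))  # < 0.5: the small perturbation flips the prediction
```

A robustness evaluation like the one in the exercise repeats this kind of attack over a test set at increasing epsilon values and reports how quickly accuracy degrades.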

How to Build Trustworthy AI with Open Source

Linux Foundation
Duration: 31:37