1. Intro
2. Production Model Serving? How hard could it be?
3. Knative
4. KF Serving: Default and Canary Configurations
5. Supported Frameworks, Components and Storage Subsystems
6. Inference Service Control Plane
7. KFServing Deployment View
8. KF Serving Examples
9. Model Serving is accomplished. Can the predictions be trusted?
10. Production ML Architecture
11. Payload Logging Architecture Examples
12. Linux Foundation AI & Data
13. Trusted AI Lifecycle through Open Source
14. AI needs to explain its decisions!
15. Bias in AI: Criminal Justice System
16. Adversarial Robustness
17. AI Explainability 360
18. AI Fairness 360
19. LFAI Trusted AI Projects with Kubeflow Serving
20. Demo Flow
Description:
Explore the intricacies of implementing trusted AI through machine learning payload logging on Kubernetes in this conference talk by Tommy Li and Andrew Butler from IBM. Delve into the challenges of production model serving, examining Knative and KFServing configurations, supported frameworks, and storage subsystems. Investigate the inference service control plane and KFServing deployment views with practical examples. Address the critical question of prediction trustworthiness in production ML architectures, focusing on payload logging. Discover the Linux Foundation's approach to the Trusted AI lifecycle through open source initiatives. Examine the importance of AI explainability, bias in criminal justice systems, and adversarial robustness. Learn about the AI Explainability 360 and AI Fairness 360 toolkits, and explore LFAI Trusted AI projects integrated with Kubeflow Serving. Conclude with a demonstration that ties together the concepts presented.
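
To make the serving and logging discussion concrete, below is a minimal sketch of a KFServing InferenceService that combines the two mechanisms the talk covers: a default/canary traffic split and the payload logger. It assumes the v1alpha2 API in use around the time of the talk; the model name, storage URIs, and the message-dumper logging sink are hypothetical placeholders, not taken from the presentation.

```yaml
# Hypothetical InferenceService: 90/10 default/canary split with payload logging.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: sklearn-iris                # hypothetical model name
spec:
  default:
    predictor:
      # Payload logger: forwards each request/response pair as a CloudEvent
      # to the configured sink (here, a hypothetical message-dumper service).
      logger:
        url: http://message-dumper.default/
        mode: all                   # log both requests and responses
      sklearn:
        storageUri: gs://example-bucket/models/iris/v1   # hypothetical path
  canaryTrafficPercent: 10          # route 10% of traffic to the canary model
  canary:
    predictor:
      sklearn:
        storageUri: gs://example-bucket/models/iris/v2   # hypothetical path
```

The CloudEvents emitted by the logger can then be consumed by downstream components such as the Trusted AI explainability, fairness, and robustness tools, which is the pattern the payload logging architecture slides and the closing demo build toward.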

Infusing Trusted AI Using Machine Learning Payload Logging on Kubernetes

Linux Foundation