Side channel mitigation for the application code
Hard to enforce in all code:
⚫ Compilers are allowed to add "side channels" when optimizing
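A minimal sketch (not from the talk) of why this is hard to enforce at the source level. The names leaky_eq and intended_ct_eq are hypothetical; in practice Rust code would reach for a dedicated crate such as subtle, since even carefully written source gives no guarantee about the generated machine code:

```rust
/// Naive comparison: slice equality returns early on the first mismatch,
/// so the running time leaks how many leading bytes matched (a timing
/// side channel).
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    a == b
}

/// Intended constant-time comparison: always scans every byte and folds
/// differences into an accumulator. The catch the slide points at: the
/// optimizer may notice that a nonzero `diff` can never become zero again
/// and insert an early exit, reintroducing the side channel.
fn intended_ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        // black_box is only a best-effort optimization barrier,
        // not a guarantee of constant-time code generation.
        diff |= std::hint::black_box(x ^ y);
    }
    diff == 0
}

fn main() {
    let secret = b"expected-mac-value";
    assert!(!leaky_eq(secret, b"expected-mac-wrong"));
    assert!(intended_ct_eq(secret, b"expected-mac-value"));
}
```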
Threat: n-day attacks
Defense: Plan for the worst
Transparency: reproducibility
Transparency: optimize for auditability
How do we protect ourselves?
Description:
Explore secure remote ML inference using Intel SGX enclaves in this 57-minute talk from the Confidential Computing Consortium. Delve into BlindAI, an open-source confidential computing solution that balances security, privacy, and performance in machine learning applications. Learn about the motivation behind BlindAI, its design considerations for Intel SGX specifics, and the results of an independent security audit. Discover how this solution protects model and user data confidentiality while ensuring prediction integrity. Examine topics such as on-device machine learning, homomorphic encryption, trusted computing bases, threat mitigation strategies, and transparency in reproducibility and auditability. Access accompanying slides and the BlindAI repository for further exploration, and join the Discord community for questions and discussions.
BlindAI: Secure Remote ML Inference with Intel SGX Enclaves