Logically Securing the Illogically Logical Use of Large Language Models - Sarah Evans & Jay White
Description:
Explore the critical intersection of security and Large Language Models (LLMs) in this 43-minute conference talk presented by Sarah Evans from Dell Technologies and Jay White from Microsoft at a Linux Foundation event. Delve into the potential security risks associated with emerging technologies like LLMs, focusing on a specific scenario: downloading a model from Hugging Face and applying it to internal datasets. Gain insights into applying established risk management frameworks such as NIST 800-53 (rev 5) and the emerging NIST AI RMF 1.0 to LLM development and adoption. Learn about key risk control families, including access control, incident response, configuration management, and supply chain risk management. Discover how to bridge the gap between traditional security fundamentals and LLM development, enabling more secure design and efficient enterprise implementation. Walk away with practical knowledge of preemptive risk management measures that can be applied directly to LLM projects, ensuring a more secure and robust development process.
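
The talk's running example is pulling a model from Hugging Face and applying it to internal data. As a rough illustration of how the configuration-management and supply-chain controls it references might look in code, the following is a minimal sketch (not taken from the talk) that pins an exact model revision and checks file digests recorded at review time; the repository ID, revision hash, and digest values are placeholders.

```python
# Minimal sketch: pin and hash-check a Hugging Face model download before it
# is used with internal datasets. Repo ID, revision, and digests are
# illustrative placeholders, not values from the talk.
import hashlib
from pathlib import Path

from huggingface_hub import snapshot_download

MODEL_REPO = "example-org/example-llm"        # hypothetical model repository
PINNED_REVISION = "0123456789abcdef01234567"  # pin an exact commit, not "main"
EXPECTED_SHA256 = {
    # filename -> digest recorded when the model was first vetted (placeholder)
    "config.json": "aaaaaaaa...",
}


def fetch_pinned_model() -> Path:
    """Download a specific, previously reviewed revision of the model."""
    local_path = snapshot_download(repo_id=MODEL_REPO, revision=PINNED_REVISION)
    return Path(local_path)


def verify_artifacts(model_dir: Path) -> None:
    """Compare downloaded files against digests recorded at review time."""
    for name, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256((model_dir / name).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"checksum mismatch for {name}; do not deploy")


if __name__ == "__main__":
    model_dir = fetch_pinned_model()
    verify_artifacts(model_dir)
    print(f"Model verified at {model_dir}")
```

Pinning a revision and verifying digests is one concrete way to exercise the supply chain risk management and configuration management control families before a downloaded model ever reaches internal datasets.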
