Enabling Composable Scalable Memory for AI Inference with CXL Switch
Description:
Learn how CXL 2.0 switch technology enables composable, scalable memory systems for AI inference workloads in this technical presentation from Xconn Technologies and H3 Platform executives. The talk explores the architecture, configuration, and components of a real composable memory system designed to meet the substantial memory demands of Large Language Models (LLMs). It covers the working mechanisms of CXL 2.0-based systems becoming available in 2024, examines their performance characteristics, and shows how these systems improve AI inference performance through practical demonstrations and architectural insights.


Open Compute Project