Linux Foundation
Efficient and Portable AI/LLM Inference on the Edge Cloud - Workshop
Explore efficient AI/LLM inference on the edge cloud using WebAssembly. Learn to create lightweight, portable applications for media processing, vision, and language models that run at native speed across CPUs and GPUs.