Day 1 09:00: Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for LLMs
Description:
Watch a 29-minute technical presentation from the HOTI Hot Interconnects Symposium exploring the communication patterns and characteristics of distributed Parameter-Efficient Fine-Tuning (PEFT) approaches for Large Language Models. Presented by researchers Nawras Alnaasan, Horng-Ruey Huang, Aamir Shafi, Hari Subramoni, and Dhabaleswar K. Panda, and chaired by AMD's Shelby Lockhart, the talk examines the networking and interconnect challenges involved in efficiently fine-tuning massive language models across distributed systems. It offers insights into optimizing communication overhead and scaling PEFT methods, as part of the Networks for Large Language Models technical paper session.
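To give a feel for why communication characterization matters here, below is a minimal back-of-the-envelope sketch (not taken from the talk; the model dimensions, LoRA rank, and parameter counts are illustrative assumptions) comparing the gradient bytes all-reduced per training step under full fine-tuning versus a LoRA-style PEFT setup, where only low-rank adapter gradients need synchronizing:

```python
# Hypothetical estimate of gradient traffic per all-reduce step.
# All figures below are illustrative assumptions, not results from the talk.

def lora_params(d_model: int, n_layers: int, rank: int, n_matrices: int = 4) -> int:
    """Trainable parameters for LoRA adapters: each adapted weight matrix
    contributes two low-rank factors, A (d_model x rank) and B (rank x d_model)."""
    return n_layers * n_matrices * 2 * d_model * rank

def grad_bytes(n_params: int, bytes_per_elem: int = 4) -> int:
    """Bytes of gradient data synchronized per step (fp32 assumed)."""
    return n_params * bytes_per_elem

# Assumed ~7B-parameter model with 32 layers and hidden size 4096.
full = grad_bytes(7_000_000_000)
peft = grad_bytes(lora_params(d_model=4096, n_layers=32, rank=8))

print(f"full fine-tuning: {full / 1e9:.1f} GB of gradients per sync")
print(f"LoRA-style PEFT : {peft / 1e6:.1f} MB of gradients per sync")
print(f"reduction       : {full / peft:.0f}x")
```

Under these assumed settings, PEFT shrinks the per-step gradient traffic by orders of magnitude, which is why its communication profile (small, frequent collectives rather than large all-reduces) differs markedly from full fine-tuning and merits the dedicated characterization the talk presents.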
