1. Intro
2. Example: Shared Memory + Network Provider
3. Complex, Theoretical Scenarios
4. Shared Memory Acceleration
5. Peer Provider Examples
6. Shared Completion Queue API
7. Shared Receive Context
8. Example SRX Flow
9. OFI Collective API
10. Implementation Considerations
11. Collective Offload with Peer Provider
12. Design Overview
13. Collective Group Creation
14. Join Collective Group
15. Collective Ops
16. Bootstrap Collective
17. Utility Collective Provider
18. Conclusion and Future Work
Description:
Learn about peer provider composability in libfabric through a technical talk series of three interconnected presentations from Intel Corporation experts. Explore the architecture and API design of libfabric's peer provider system, which lets applications leverage multiple network technologies simultaneously for optimal performance. Dive into practical implementations, including how shared memory providers pair with scale-out providers and how focused providers integrate with core providers for collective operations. Master the peer APIs, shared completion queues, shared receive contexts, and collective offloading mechanisms. Understand how this framework enables independent development of specialized providers that work together seamlessly, supporting complex scenarios involving local node acceleration, GPU fabrics, HPC NICs, and various network transport configurations. Gain insight into the future of network communication as systems become increasingly heterogeneous and demand sophisticated solutions for maximum performance.

Peer Provider Composability and API Design in libfabric

OpenFabrics Alliance