Datacenter RPCs Can Be General and Fast
USENIX

Outline:
1. Intro
2. Modern datacenter networks are fast
3. Existing networking options sacrifice performance or generality
4. Specialization for fast networking
5. eRPC provides both speed and generality
6. Managing packet loss
7. In low-latency networks, switch buffers prevent most loss
8. All modern switches have buffers ≫ BDP (see the worked example after this outline)
9. Low-overhead transport layer
10. Example: Optimized DMA buffer management for rare packet loss
11. Example: Efficient congestion control in software
12. Datacenter networks are usually uncongested
13. Congestion control, fast and slow
14. Easy integration with existing applications
15. Takeaway: Given fast packet I/O, we can provide fast networking in software
16. Together, common-case optimizations matter
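
As a rough illustration of item 8, using assumed but representative numbers rather than figures quoted from the talk: the bandwidth-delay product (BDP) of one path in a low-latency datacenter network is

BDP = link bandwidth × round-trip time ≈ (40 Gbit/s ÷ 8 bits per byte) × 6 µs = 30 kB,

which is orders of magnitude smaller than the multi-megabyte shared buffer of a typical datacenter switch, so bursts from many senders are absorbed rather than dropped.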
Description:
Explore a groundbreaking approach to datacenter networking that challenges the belief that high performance requires sacrificing generality. Dive into the innovative eRPC library, which offers performance comparable to specialized systems while running on commodity CPUs in traditional datacenter networks. Learn how eRPC excels in message rate for small messages, bandwidth for large messages, and scalability to numerous nodes and CPU cores. Discover its ability to handle packet loss, congestion, and background request execution. Examine impressive microbenchmark results, including one CPU core handling up to 10 million small RPCs per second and sending large messages at 75 Gbps. Investigate the successful port of a production-grade Raft state machine replication implementation to eRPC, achieving 5.5 microseconds of replication latency on lossy Ethernet. Gain insights into modern datacenter networks, low-overhead transport layers, efficient congestion control, and the importance of common-case optimizations in fast networking software.
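
To make the "easy integration with existing applications" point concrete, here is a minimal client-side sketch in the spirit of eRPC's published hello-world example. The class and method names (erpc::Nexus, erpc::Rpc<erpc::CTransport>, create_session, enqueue_request, run_event_loop) are taken from the open-source release as best recalled and may differ between versions; the URIs, request type, and message size are placeholder assumptions, not values from the talk.

```cpp
#include <cstdio>
#include <string>

#include "rpc.h"  // eRPC's public header (path per the open-source release)

// Placeholder endpoints and sizes for this sketch (assumptions, not from the talk).
static const std::string kServerUri = "192.168.1.2:31850";
static const std::string kClientUri = "192.168.1.3:31850";
static constexpr uint8_t kReqType = 1;   // application-chosen request type
static constexpr size_t kMsgSize = 32;   // small request/response payload

static erpc::Rpc<erpc::CTransport> *rpc;
static erpc::MsgBuffer req, resp;

// Continuation: invoked from the event loop when the response arrives.
static void cont_func(void *, void *) { std::printf("got response\n"); }

// Session-management callback (connect/disconnect events); unused here.
static void sm_handler(int, erpc::SmEventType, erpc::SmErrType, void *) {}

int main() {
  erpc::Nexus nexus(kClientUri);  // one per process
  rpc = new erpc::Rpc<erpc::CTransport>(&nexus, nullptr /* context */,
                                        0 /* rpc_id */, sm_handler);

  int session = rpc->create_session(kServerUri, 0 /* remote rpc_id */);
  while (!rpc->is_connected(session)) rpc->run_event_loop_once();

  // Zero-copy message buffers allocated from NIC-registered memory.
  req = rpc->alloc_msg_buffer_or_die(kMsgSize);
  resp = rpc->alloc_msg_buffer_or_die(kMsgSize);

  // Asynchronous request; the continuation fires while the event loop runs.
  rpc->enqueue_request(session, kReqType, &req, &resp, cont_func, nullptr);
  rpc->run_event_loop(100 /* ms */);

  delete rpc;
  return 0;
}
```

The thread that calls enqueue_request also drives run_event_loop, which is where eRPC polls the NIC, runs congestion control, and invokes continuations; in the common case no background threads are involved.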
