Solution: Don't allocate objects for communication
Why does the program's speed depend on the memory access pattern?
Cache memories - data locality (example)
Cache memories - prefetcher (example)
Cache memories - cache line (example)
Summary: Memory Access Performance
Experiment with class size and member layout
Principles of cache-aware software design (1/4)
Array of values vs array of pointers
Array of pointers performance
Principles of cache-aware software design (2/4)
Small vs large classes memory layout
Class Size, Data Layout and Performance
Principles of cache-aware software design (3/4)
Binary Tree Example: a binary tree is a data structure used for fast lookup, to check whether a value is present
Description:
Explore the performance implications of dynamic memory usage in C++ through this comprehensive CppNow 2021 conference talk. Delve into the costs associated with allocating and deallocating memory, as well as the impact of memory access patterns on program speed. Learn about system allocators, memory fragmentation, and custom allocators for STL containers. Discover techniques to improve performance, including optimizing data arrangement and memory access patterns. Examine cache memories, data locality, prefetching, and cache line effects through practical examples. Gain insights into cache-aware software design principles, comparing array of values vs. array of pointers, and analyzing class size and member layout. Apply these concepts to real-world scenarios, such as optimizing binary tree implementations for faster lookups.