1. Intro
2. Scale of Data at Credit Karma
3. Data Warehouse Import
4. Warehouse Import Problems
5. Akka Streams for Fun and Profit
6. Akka Streams: Easy to Unit Test (see the testkit sketch after this outline)
7. Akka Streams Built-in Stages
8. Warehouse Import with Streams
9. Warehouse Import Stream Code
10. Analytics Export with Streams
11. Analytics Export Stream Code
12. Analytics Export Heap Space Pull
13. Warehouse Import Garbage Collection
14. Benchmarking Setup
15. Benchmarking Baseline Test: Read and Parse XML
16. Benchmarking Results
17. Benchmarking Code - Akka Actors
18. Benchmarking Code - Akka Streams
19. Garbage Collection Times
20. Learnings
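
One point the outline calls out (chapter 6) is how easily Akka Streams stages can be unit tested. A minimal sketch of that idea using the akka-stream-testkit module is shown below; the parsing flow, names, and test input are illustrative assumptions, not code from the talk:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Source}
import akka.stream.testkit.scaladsl.TestSink

object ParseFlowTestSketch extends App {
  implicit val system: ActorSystem = ActorSystem("test")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Hypothetical stage under test: split "key,value" lines into pairs.
  val parse = Flow[String].map { line =>
    val Array(key, value) = line.split(",", 2)
    key -> value
  }

  // Drive the flow with fixed input and assert on exactly what it emits.
  Source(List("a,1", "b,2"))
    .via(parse)
    .runWith(TestSink.probe[(String, String)])
    .request(2)
    .expectNext("a" -> "1", "b" -> "2")
    .expectComplete()

  system.terminate()
}

Because a Flow is just a value, it can be plugged into a test source and sink like this without standing up any of the surrounding actor machinery.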
Description:
Explore high-throughput data processing using Akka Streams in this 41-minute conference talk from Scala Days Copenhagen 2017. Discover how Credit Karma, with over 60 million members, tackled the challenges of large-scale data transfer. Learn about the initial implementation using Akka Actors, the limitations encountered, and the transition to Akka Streams. Gain insights into backpressure, warning signs of system capacity limits, and when to consider Akka Streams for data transfer. Examine best practices, optimizations, and real-world examples of warehouse import and analytics export implementations. Dive into benchmarking results, garbage collection considerations, and key learnings from Credit Karma's experience. Ideal for developers and companies building or optimizing high-throughput Akka Actor systems.
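
The backpressure the description refers to is the core mechanism: a downstream stage that cannot keep up slows the upstream producer instead of letting work pile up in memory. As a rough, hedged illustration (the Record type, stage names, and sizes are assumptions, not Credit Karma's actual warehouse-import code), a minimal Akka Streams pipeline might look like this:

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

import scala.concurrent.Future

object WarehouseImportSketch extends App {
  implicit val system: ActorSystem = ActorSystem("import-sketch")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // Hypothetical record type standing in for one warehouse row.
  final case class Record(id: Int, payload: String)

  // Producer: emits rows only as fast as downstream demand allows.
  val rows: Source[Record, NotUsed] =
    Source(1 to 1000000).map(i => Record(i, s"row-$i"))

  // Bounded transformation: mapAsync keeps at most 4 elements in flight,
  // so a slow stage backpressures the producer instead of filling the heap.
  val transform: Flow[Record, Record, NotUsed] =
    Flow[Record].mapAsync(parallelism = 4) { r =>
      Future(r.copy(payload = r.payload.toUpperCase))
    }

  // Stand-in for the warehouse writer: here it only counts records.
  val writer: Sink[Record, Future[Int]] =
    Sink.fold[Int, Record](0)((count, _) => count + 1)

  rows.via(transform).runWith(writer).onComplete { result =>
    println(s"imported: $result")
    system.terminate()
  }
}

The mapAsync bound of 4 is what exercises backpressure in this sketch: the Source emits only when the bounded transformation stage signals demand, which is the behavior that distinguishes a stream pipeline from an unbounded actor mailbox.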

Akka Streams for High Throughput Data Processing

Scala Days Conferences