Chapters:
1. Intro
2. Performance tuning
3. Performance tuning internals
4. Bayesian optimization
5. How Bayesian optimization works
6. What does it do
7. How it finds the optimum
8. Optimality
9. Autotune
10. What is Autotune
11. What is Graal
12. OpenJDK
13. Inlining parameters
14. Twitter's Quest for the Holy Grail
15. PS Scavenge cycles
16. User CPU time
17. Ranges
18. Test setup
19. Objective
20. Constraints
21. Experiments
22. Results
23. Results Table
24. Results Chart
25. Maximum Landing Site
26. Low-Level Graph
27. Verification Experiment
28. Data Visualization
29. CPU Time
30. Latency
31. Performance improvements
32. Experiment 2: Social Graph
33. Experiment 3: Social Graph
34. Experiment 4: Orange Control
35. Experiment 4 Results
36. Verification Run
37. Social Graph
38. Autotune Social Graph
39. Autotune parameters
40. Inlining
41. Evaluation
42. Outcome
43. Max inline size
44. Inline small code
45. Autotuned
Description:
Explore performance tuning techniques for Twitter services using Graal and Machine Learning in this 52-minute Devoxx conference talk. Dive into the successful implementation of Graal at Twitter, which has significantly reduced datacenter costs. Learn about the Autotune Machine Learning framework and its application in optimizing Graal inlining parameters. Discover the principles of Bayesian optimization and its role in finding optimal performance settings. Examine real-world experiments, including PS scavenge cycles, user CPU time, and latency improvements. Analyze detailed results through tables, charts, and low-level graphs. Gain insights into Twitter's quest for the "Holy Grail" of performance optimization, and understand how Autotune parameters impact inlining and overall system efficiency. Witness the practical applications of these techniques across various Twitter services, including Social Graph and Orange Control experiments.
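The core loop the talk describes — pick the next parameter setting by trading off a surrogate model's prediction against its uncertainty, measure, and repeat — can be sketched in a few lines. Below is a minimal, self-contained illustration, not Twitter's Autotune: the toy objective, the inverse-distance surrogate, and the UCB acquisition rule are all hypothetical stand-ins (a production system would fit a Gaussian process and measure a live service).

```python
import random

def objective(x):
    # Hypothetical stand-in for a real measurement, e.g. service
    # throughput as a function of one tuning parameter. A real run
    # would set a JVM flag, deploy, and measure CPU time or latency.
    return -(x - 7.0) ** 2

def surrogate(x, observed):
    """Crude surrogate: inverse-distance-weighted mean of past
    observations, with distance to the nearest sample standing in
    for predictive uncertainty."""
    weights = [(1.0 / (abs(x - xi) + 1e-6), yi) for xi, yi in observed]
    mean = sum(w * y for w, y in weights) / sum(w for w, _ in weights)
    uncertainty = min(abs(x - xi) for xi, _ in observed)
    return mean, uncertainty

def bayesian_optimize(f, lo, hi, n_iter=30, kappa=2.0, seed=0):
    rng = random.Random(seed)
    # Seed the model with a few initial evaluations.
    observed = [(x, f(x)) for x in (lo, (lo + hi) / 2, hi)]
    for _ in range(n_iter):
        # Acquisition: upper confidence bound (mean + kappa * uncertainty),
        # maximized over a batch of random candidate settings.
        def ucb(x):
            mean, unc = surrogate(x, observed)
            return mean + kappa * unc
        x_next = max((rng.uniform(lo, hi) for _ in range(200)), key=ucb)
        observed.append((x_next, f(x_next)))
    return max(observed, key=lambda p: p[1])  # best (x, f(x)) seen

best_x, best_y = bayesian_optimize(objective, 0.0, 20.0)
```

In the talk, the tuned parameters are JVM inlining thresholds (the chapter list mentions max inline size and inline small code) and the objective is measured on live Twitter services, but the select-measure-update structure is the same.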

Performance Tuning Twitter Services with Graal and Machine Learning

Devoxx