Performance Tuning Twitter Services with Graal and Machine Learning

Explore performance-tuning techniques for Twitter services using Graal and machine learning in this 52-minute Devoxx conference talk. Dive into Twitter's successful adoption of the Graal compiler, which has significantly reduced its datacenter costs. Learn about the Autotune machine-learning framework and how it is applied to optimizing Graal's inlining parameters. Discover the principles of Bayesian optimization and its role in finding optimal performance settings. Examine real-world experiments measuring PS Scavenge garbage-collection cycles, user CPU time, and latency improvements, with detailed results presented in tables, charts, and low-level graphs. Gain insight into Twitter's quest for the "Holy Grail" of performance optimization, and understand how Autotune's parameter choices affect inlining and overall system efficiency. Witness the practical application of these techniques across various Twitter services, including Social Graph and Orange Control experiments.
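To make the idea concrete, here is a minimal, from-scratch sketch of Bayesian optimization in the spirit of what the talk describes Autotune doing: fit a Gaussian-process surrogate over measured runs, then use a lower-confidence-bound acquisition to pick the next configuration to try. Everything here is illustrative, not Twitter's actual Autotune code: the objective is a synthetic stand-in for a real measurement (say, user CPU time as a function of an inlining threshold), and the parameter named `threshold` is hypothetical, not a real Graal flag.

```python
import math

def objective(threshold):
    """Synthetic stand-in for a measured cost (e.g. user CPU time);
    minimum is at threshold=120. A real Autotune-style loop would run
    the service and measure instead."""
    return ((threshold - 120) / 40.0) ** 2 + 1.0

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf(a, b, lengthscale=40.0):
    """Squared-exponential kernel: nearby thresholds behave similarly."""
    return math.exp(-((a - b) ** 2) / (2 * lengthscale ** 2))

def gp_predict(xs, ys, x, jitter=1e-4):
    """Gaussian-process posterior mean and standard deviation at x."""
    K = [[rbf(xi, xj) + (jitter if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    k = [rbf(xi, x) for xi in xs]
    mu = sum(ki * ai for ki, ai in zip(k, solve(K, ys)))
    var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, solve(K, k)))
    return mu, math.sqrt(max(var, 1e-12))

def bayes_opt(candidates, seeds, iters=15, kappa=2.0):
    """Minimize objective() over candidates using an LCB acquisition."""
    xs = list(seeds)
    ys = [objective(x) for x in xs]
    for _ in range(iters):
        pool = [c for c in candidates if c not in xs]  # never re-sample

        def lcb(c):
            # Prefer points predicted cheap (exploit) or uncertain (explore).
            mu, sd = gp_predict(xs, ys, c)
            return mu - kappa * sd

        x = min(pool, key=lcb)
        xs.append(x)
        ys.append(objective(x))
    best = min(range(len(xs)), key=lambda i: ys[i])
    return xs[best], ys[best]

best_x, best_y = bayes_opt(candidates=range(0, 301, 5), seeds=[0, 150, 300])
print(best_x, best_y)
```

The appeal of this approach for JVM tuning, as the talk suggests, is sample efficiency: each "evaluation" is an expensive service deployment, so the surrogate model decides where measuring next is most informative rather than sweeping the whole parameter grid.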