Adaptive Optimization in the Jalapeño JVM
Matthew Arnold
Stephen Fink
David Grove
Michael Hind
Peter F. Sweeney
Source: CS598 @ UIUC
Talk overview
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Background
Three waves of JVMs:
– First: compile each method when first encountered; use a fixed set of optimizations
– Second: determine hot methods dynamically and compile them with more advanced optimizations
– Third: feedback-directed optimizations
Jalapeño JVM targets the third wave, but the current implementation is second wave
Jalapeño JVM
Written in Java (core services precompiled to native code in boot image)
Compiles at four levels: baseline, 0, 1, & 2
Compile-only strategy (no interpretation)
Yield points for quasi-preemptive thread switching
Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Adaptive Optimization System
AOS: Design
The authors describe the AOS as a “distributed, asynchronous, object-oriented design,” which they argue is well suited to managing large amounts of profiling data
Each successive pipeline (from raw data to compilation decisions) performs increasingly complex analysis on decreasing amounts of data
Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Multi-level recompilation
Multi-level recompilation: Sampling
Sampling occurs on thread switch
Thread switch is triggered by a clock interrupt
A thread switch can occur only at yield points
Yield points are at method invocations and loop back edges
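The mechanism above can be sketched as follows (a minimal illustration; the class and method names are mine, not Jalapeño's actual code): the clock interrupt only sets a flag, and the sample is taken when the running code next reaches a yield point.

```java
// Simplified sketch of yield-point-based sampling (names are illustrative).
// A timer interrupt sets a flag; the sample is recorded only when the
// running code next reaches a yield point (a method call or loop back edge).
public class Sampler {
    private volatile boolean switchRequested = false;
    private String lastSampledMethod = null;

    /** Invoked by the clock-interrupt handler. */
    void onClockInterrupt() {
        switchRequested = true;
    }

    /** Compiled into every method invocation and loop back edge. */
    void yieldPoint(String currentMethod) {
        if (switchRequested) {
            switchRequested = false;
            lastSampledMethod = currentMethod;  // attribute the sample here
            // ...then perform the actual thread switch (omitted)
        }
    }

    String lastSample() {
        return lastSampledMethod;
    }
}
```

Code containing no calls and no back edges reaches no yield points, so it can run arbitrarily long without ever being sampled; that observation motivates the bias question below.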
Discussion: Is this approach biased?
Multi-level recompilation: Biased sampling
[Diagram: a short method and a long method, each with yield points only at their method calls; a block of code with no method calls or back edges contains no yield points at all]
Multi-level recompilation: Cost-benefit analysis
Method m is currently compiled at level i; estimate:
– Ti, the expected time the program will spend executing m if m is not recompiled
– Cj, the cost of recompiling m at optimization level j, for i ≤ j ≤ N
– Tj, the expected time the program will spend executing m if m is recompiled at level j
If, for the best such j, Cj + Tj < Ti, recompile m at level j.
Multi-level recompilation: Cost-benefit analysis (continued)
Estimate Ti :
Ti = Tf * Pm
Tf is the future running time of the program
We estimate that the program will run for as long as it has run so far
Multi-level recompilation: Cost-benefit analysis (continued)
Pm is the percentage of Tf spent in m
Pm is estimated from sampling
Sample frequencies decay over time
– Why is this a good idea?
– Could it be a disadvantage in certain cases?
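One common way to realize decaying sample frequencies is to periodically multiply every method's count by a factor in (0, 1), so recent samples dominate the estimate of Pm. A hypothetical sketch (the decay factor and class names are illustrative, not the paper's values):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of decayed sample counts: older samples contribute less to P_m.
public class DecayingProfile {
    private final Map<String, Double> counts = new HashMap<>();
    private double total = 0.0;

    /** Record one sample attributed to the given method. */
    void recordSample(String method) {
        counts.merge(method, 1.0, Double::sum);
        total += 1.0;
    }

    /** Called periodically; factor in (0, 1), e.g. 0.95, ages old samples. */
    void decay(double factor) {
        counts.replaceAll((m, c) -> c * factor);
        total *= factor;
    }

    /** Estimated fraction P_m of time spent in method m. */
    double fraction(String method) {
        return total == 0.0 ? 0.0 : counts.getOrDefault(method, 0.0) / total;
    }
}
```

Because both the per-method counts and the total decay by the same factor, a method's fraction is unchanged by decay alone; decay only changes how quickly new samples shift the estimate, which is exactly what helps the controller track phased behavior.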
Multi-level recompilation: Cost-benefit analysis (continued)
Statically measured speedups Si and Sj are used to determine Tj:
Tj = Ti * Si / Sj
– Statically measured speedups?!
– Is there any way to do better?
Multi-level recompilation: Cost-benefit analysis (continued)
Cj (cost of recompilation) estimated using a linear model of speed for each optimization level:
Cj = aj * size(m), where aj = constant for level j
– Is it reasonable to assume a linear model?
– Is it OK to use statically determined aj?
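Putting the model together, the controller's decision can be sketched as below (a minimal sketch: the speedup and compilation-rate constants are illustrative placeholders, not the paper's measured values):

```java
// Sketch of the AOS controller's cost-benefit recompilation decision.
// All constants below are illustrative, not Jalapeño's measured numbers.
public class CostBenefit {
    // Statically measured speedup S_j of each level relative to baseline.
    // Index 0 = baseline, 1..3 = optimization levels 0, 1, 2.
    static final double[] SPEEDUP = {1.0, 3.0, 4.0, 4.5};
    // Linear cost model C_j = a_j * size(m); a_j in ms per unit of size.
    static final double[] RATE = {0.0, 0.1, 0.5, 1.0};

    /** Returns the best level j >= i to recompile at, or i to do nothing. */
    static int chooseLevel(int i, double pM, double timeSoFarMs, int sizeM) {
        double tF = timeSoFarMs;   // assume future running time == past running time
        double tI = tF * pM;       // expected future time in m at current level i
        int best = i;
        double bestTotal = tI;     // cost of doing nothing
        for (int j = i + 1; j < SPEEDUP.length; j++) {
            double tJ = tI * SPEEDUP[i] / SPEEDUP[j];  // time in m at level j
            double cJ = RATE[j] * sizeM;               // cost of recompiling at j
            if (cJ + tJ < bestTotal) {                 // recompile only if it pays off
                bestTotal = cJ + tJ;
                best = j;
            }
        }
        return best;
    }
}
```

For a hot method (say Pm = 20% of a program that has already run 10 seconds), even an expensive high level wins; for a method with a tiny Pm, every Cj + Tj exceeds Ti and the method is left alone.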
Multi-level recompilation: Results
Multi-level recompilation: Results (continued)
Multi-level recompilation: Discussion
Adaptive multi-level recompilation does better in the short term than JIT compilation at any single level.
But in the long run, performance is slightly worse than JIT compilation.
The primary target is server applications, which tend to run for a long time.
Multi-level recompilation: Discussion (continued)
So what’s so great about Jalapeño’s AOS?
– The current AOS implementation gives good results for both short and long runs
– A JIT compiler can't do both cases well because its optimization level is fixed
– The AOS can be extended to support feedback-directed optimizations such as:
fragment creation (as in Dynamo)
determining whether an optimization was effective
Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Miscellaneous issues: Multiprocessing
Authors say that if a processor is idle, recompilation can be done almost for free.
– Why only almost for free?
– Are there situations where you could get free recompilation on a uniprocessor?
Miscellaneous issues: Models vs. heuristics
Authors moving toward “analytic model of program behavior” and elimination of ad-hoc tuning parameters.
Tuning parameters proved difficult because of “unforeseen differences in application behavior.”
Is it believable that ad-hoc parameters can be eliminated and replaced with models?
Miscellaneous issues: More intrusive optimizations
The future of Jalapeño is more intrusive optimizations, such as compiler-inserted instrumentation for profiling
Advantages and disadvantages compared with current system?
– Advantages:
Performance gains in the long term
Adjusts to phased behavior
– Disadvantages:
Unlike with sampling, you can't profile all the time
Harder to adaptively throttle overhead
Miscellaneous issues: Stack frame rewriting
In the future, Jalapeño will support rewriting of a baseline stack frame with an optimized stack frame
Authors say that rewriting an optimized stack frame with another optimized stack frame is more difficult.
– Why?
Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Feedback-directed inlining
Feedback-directed inlining: More cost-benefit analysis
The boost factor b is estimated as a function of:
1. The fraction f of dynamic calls attributed to the call edge in the sampling-approximated call graph
2. An estimate s of the benefit (i.e., speedup) from eliminating virtually all calls from the program
– Presumably something like b = f * s
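Under that guessed combination (b = f * s is the presenter's assumption, not a formula stated by the paper), the boost computation is a one-liner; the sketch below exists mainly to pin down what f and s mean:

```java
// Hypothetical sketch of the inlining boost factor. The slide only guesses
// b = f * s, so treat this formula as an assumption, not the paper's model.
public class InlineBoost {
    /**
     * @param f fraction of dynamic calls attributed to this call edge
     *          in the sampling-approximated call graph
     * @param s estimated speedup from eliminating virtually all calls
     *          from the program
     * @return boost factor b for recompiling the caller with this edge inlined
     */
    static double boost(double f, double s) {
        return f * s;
    }
}
```

For example, a call edge responsible for 10% of sampled calls, with an estimated whole-program call-elimination speedup of 1.5, would get b = 0.15; the controller might then scale the expected benefit of recompiling that caller by such a factor when deciding whether inlining pays off.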
Feedback-directed inlining: Results
[Results chart; two data points annotated “Why?”]
Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion
Conclusion
AOS designed to support feedback-directed optimizations (third wave)
Current AOS implementation only supports selective optimizations (second wave)
– Improves short-term performance without hurting long-term performance
– Uses a mix of cost-benefit models and ad-hoc methods
Future work will use more intrusive performance monitoring (e.g., instrumentation for path profiling, checking that an optimization improved performance)