Description
The existing JIT compilation process relies on heuristics with fixed thresholds (#159) to decide when to switch from interpretation to JIT compilation while executing RISC-V instructions. This approach lacks flexibility and leads to inconsistent performance. We therefore need an adaptive approach that gathers profiling data during interpretation, together with a well-defined strategy for triggering the transition based on the sampled data rather than on predetermined thresholds.
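As a rough illustration of what such profiling could look like in the interpreter (all names here are hypothetical and not part of the existing codebase), each translated block could carry a small profile record that is updated on every dispatch, and the JIT trigger could compare the block's observed share of recent execution against the rest of the program instead of a single fixed count:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-block profile collected while interpreting. */
typedef struct {
    uint64_t exec_count;   /* times the block was entered                  */
    uint64_t loop_backs;   /* back-edges taken inside the block (loops)    */
    uint64_t last_sampled; /* exec_count recorded at the last sample point */
} block_profile_t;

/* Cheap instrumentation on every interpreted entry into the block. */
static inline void profile_block_entry(block_profile_t *p, bool took_back_edge)
{
    p->exec_count++;
    if (took_back_edge)
        p->loop_backs++;
}

/* Sampled decision: promote a block to the JIT when it accounts for a
 * significant share of recently interpreted work, instead of comparing its
 * raw counter against one fixed, global threshold. */
static bool should_jit_compile(block_profile_t *p, uint64_t total_recent_execs)
{
    uint64_t recent = p->exec_count - p->last_sampled;
    p->last_sampled = p->exec_count;
    if (total_recent_execs == 0)
        return false;
    /* Example policy: >1/16 of recent execution, or visibly loop-heavy. */
    return (recent * 16 > total_recent_execs) || (p->loop_backs > p->exec_count / 2);
}
```

The 1/16 share and the loop-heaviness test are placeholders; the point is that the decision adapts to the sampled distribution of work rather than a hard-coded counter value.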
Let's consider the Java Virtual Machine (JVM), particularly HotSpot, whose crucial objective is to generate efficient machine code while minimizing runtime cost. To accomplish this, HotSpot employs a range of strategies, including tiered compilation, dynamic profiling, speculation, deoptimization, and various compiler optimizations, both architecture-specific and architecture-independent.
Typically, the execution of a method begins in the interpreter, which is the simplest and cheapest means available to HotSpot for executing code. During method execution, whether interpreted or compiled, the dynamic profile of the method is collected through instrumentation. This profile is then used by several heuristics to make decisions, such as whether the method should be compiled, whether it should be recompiled at a different optimization level, and which optimizations should be applied.
When an application starts, the JVM initially interprets all bytecode while gathering profiling information about it. The JIT compiler then leverages this collected profiling information to identify hotspots. Initially, the JIT compiler compiles frequently executed code sections with C1 to rapidly achieve native code performance. Later, as more profiling information becomes available, C2 comes into play. C2 recompiles the code with more aggressive and time-intensive optimizations to further enhance performance.
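To make the tiered flow concrete, the sketch below (a deliberate simplification with placeholder thresholds, not HotSpot's actual policy nor the emulator's design) shows how the same profile counters could drive two promotion steps: a cheap first-tier compilation as soon as code proves warm, and a slower optimizing recompilation once enough profile data has accumulated:

```c
#include <stdint.h>

/* Simplified execution tiers, loosely mirroring interpreter -> C1 -> C2. */
typedef enum {
    TIER_INTERPRET,     /* collect full profile, cheapest to start       */
    TIER_BASELINE_JIT,  /* quick compilation, keeps lightweight counters */
    TIER_OPTIMIZED_JIT  /* aggressive optimizations using the profile    */
} exec_tier_t;

typedef struct {
    exec_tier_t tier;
    uint64_t invocations; /* entries into the block or method */
    uint64_t back_edges;  /* loop iterations observed         */
} tier_profile_t;

/* Illustrative promotion policy; the constants are placeholders. */
static exec_tier_t next_tier(const tier_profile_t *p)
{
    uint64_t hotness = p->invocations + p->back_edges;

    switch (p->tier) {
    case TIER_INTERPRET:
        /* Promote early: reach native speed quickly with a cheap compile. */
        return hotness > 2000 ? TIER_BASELINE_JIT : TIER_INTERPRET;
    case TIER_BASELINE_JIT:
        /* Promote later: recompile with heavier optimizations once the
         * profile is rich enough to justify the compilation cost. */
        return hotness > 50000 ? TIER_OPTIMIZED_JIT : TIER_BASELINE_JIT;
    default:
        return p->tier;
    }
}
```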

Another advantage of tiered compilation is more accurate profiling information. Prior to tiered compilation, the JVM collected profiling information only during interpretation. With tiered compilation enabled, the JVM also gathers profiling information on code compiled with C1. Because the compiled code runs faster than interpreted code, the JVM can accumulate more profiling samples in the same amount of time.
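One way to picture this (again only a sketch with hypothetical names) is that the baseline-compiled code still emits a tiny counter update at block entry, so samples keep flowing to the optimizing tier even after interpretation stops:

```c
#include <stdint.h>

/* Hypothetical counters embedded in baseline-compiled code. */
typedef struct {
    uint64_t entries;        /* bumped by an increment the baseline
                                compiler emits at each block entry   */
    uint64_t taken_branches; /* coarse branch profile                */
} inline_counters_t;

/* The baseline tier runs at near-native speed yet still feeds the profile,
 * so by the time the optimizing tier kicks in, far more samples exist than
 * interpretation alone could have produced in the same wall-clock time. */
static inline void baseline_profile_hook(inline_counters_t *c, int branch_taken)
{
    c->entries++;
    c->taken_branches += branch_taken ? 1 : 0;
}
```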
Reference: