Just-in-time compilation
== Performance ==
JIT compilation causes a slight to noticeable delay in the initial execution of an application, due to the time taken to load and compile the input code. This delay is sometimes called "startup time delay" or "warm-up time". In general, the more optimization a JIT performs, the better the code it generates, but the greater the initial delay. A JIT compiler therefore has to trade compilation time against the quality of the code it hopes to generate. Startup time can also include increased I/O-bound operations in addition to JIT compilation: for example, the ''rt.jar'' class data file for the [[Java virtual machine|Java Virtual Machine]] (JVM) is 40 MB, and the JVM must seek through a large amount of data in this comparatively huge file.<ref name="Haase" />

One possible optimization, used by Sun's [[HotSpot (virtual machine)|HotSpot]] Java Virtual Machine, is to combine interpretation and JIT compilation. The application code is initially interpreted, but the JVM monitors which sequences of [[bytecode]] are frequently executed and translates them to machine code for direct execution on the hardware. For bytecode that is executed only a few times, this saves compilation time and reduces initial latency; for frequently executed bytecode, JIT compilation runs it at high speed after an initial phase of slow interpretation. Additionally, since a program spends most of its time executing a minority of its code, the reduced compilation time is significant. Finally, during the initial interpretation, execution statistics can be collected before compilation, which helps to perform better optimization.<ref name="HotSpot" />

The correct tradeoff can vary with circumstances. For example, Sun's Java Virtual Machine has two major modes, client and server. In client mode, minimal compilation and optimization are performed, to reduce startup time.
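The mixed-mode scheme described above can be sketched as a toy simulation. This is a hypothetical illustration only, not HotSpot's actual implementation: the class, method names, and threshold value are all invented, and "compilation" here is just caching a callable where a real JIT would emit machine code.

```python
# Toy sketch of HotSpot-style mixed-mode execution: methods are
# interpreted until an invocation counter crosses a threshold, after
# which they are "compiled" and dispatched on a fast path.
# All names and the threshold are illustrative assumptions.

COMPILE_THRESHOLD = 3  # real VMs use far larger, tunable thresholds

class MixedModeVM:
    def __init__(self):
        self.counters = {}   # method name -> interpreted invocation count
        self.compiled = {}   # method name -> stand-in for machine code

    def call(self, name, interpreted_impl):
        if name in self.compiled:
            return self.compiled[name]()       # fast path: "compiled" code
        self.counters[name] = self.counters.get(name, 0) + 1
        if self.counters[name] >= COMPILE_THRESHOLD:
            # Method became "hot": a real JIT would emit optimized
            # machine code here; we just cache the callable.
            self.compiled[name] = interpreted_impl
        return interpreted_impl()              # slow path: interpret

vm = MixedModeVM()
square = lambda: 7 * 7
for _ in range(5):
    vm.call("square", square)

print("square" in vm.compiled)  # True: the hot method was compiled
```

After five calls, the method was interpreted three times (reaching the threshold) and then dispatched twice on the compiled fast path, mirroring how rarely executed bytecode never pays the compilation cost.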
In server mode, extensive compilation and optimization are performed, to maximize performance once the application is running, at the cost of startup time. Other Java just-in-time compilers have used a runtime measurement of the number of times a method has executed, combined with the method's bytecode size, as a heuristic to decide when to compile.<ref name="Schilling" /> Still another uses the number of times executed combined with the detection of loops.<ref name="Suganuma" /> In general, it is much harder to accurately predict which methods to optimize in short-running applications than in long-running ones.<ref name="Arnold-2000" />

[[Native Image Generator]] (Ngen) by [[Microsoft]] is another approach to reducing the initial delay.<ref name="MSDN" /> Ngen pre-compiles (or "pre-JITs") bytecode in a [[Common Intermediate Language]] image into machine-native code, so no runtime compilation is needed. [[.NET Framework]] 2.0, shipped with [[Visual Studio 2005]], runs Ngen on all of the Microsoft library DLLs right after installation. Pre-jitting provides a way to improve startup time. However, the quality of the code it generates might not be as good as JIT-compiled code, for the same reasons that code compiled statically, without [[profile-guided optimization]], cannot in the extreme case be as good as JIT-compiled code: the lack of profiling data to drive, for instance, inline caching.<ref name="Arnold-2005" /> There also exist Java implementations that combine an [[ahead-of-time compilation|AOT (ahead-of-time) compiler]] with either a JIT compiler ([[Excelsior JET]]) or an interpreter ([[GNU Compiler for Java]]).
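One plausible way (among many) to combine the two signals mentioned above, invocation count and bytecode size, is sketched below. The exact formula and thresholds used by the cited compilers are not given in the text, so the product-based budget here is purely an illustrative assumption.

```python
# Illustrative sketch (not any specific VM's actual heuristic) of a
# compile-decision rule that weighs how often a method runs against
# its bytecode size. The combination rule and threshold are made up.

def should_compile(invocation_count, bytecode_size,
                   hotness_budget=10_000):
    """Compile once count * size exceeds a budget.

    With a product, a tiny method must run very often to qualify,
    while a large method qualifies after fewer invocations -- one
    simple way to fold both signals into a single decision.
    """
    return invocation_count * bytecode_size >= hotness_budget

print(should_compile(invocation_count=2000, bytecode_size=4))   # False
print(should_compile(invocation_count=500, bytecode_size=40))   # True
```

A real compiler would likely also consult loop counts, as the text notes, since a method entered once but looping millions of times is hot despite a low invocation count.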
JIT compilation may not reliably achieve its goal, namely entering a steady state of improved performance after a short initial warmup period.{{sfn|Barrett|Bolz-Tereick|Killick|Mount|2017|p=3}}{{sfn|Traini|Cortellessa|Di Pompeo|Tucci|2022|p=1}} Across eight different virtual machines, {{harvtxt|Barrett|Bolz-Tereick|Killick|Mount|Tratt|2017}} measured six [[microbenchmarks]] widely used by virtual machine implementors as optimisation targets, running each repeatedly within a single process execution.{{sfn|Barrett|Bolz-Tereick|Killick|Mount|2017|p=5-6}} On [[Linux]], they found that 8.7% to 9.6% of process executions failed to reach a steady state of performance, that 16.7% to 17.9% entered a steady state of ''reduced'' performance after a warmup period, and that 56.5% of pairings of a specific virtual machine running a specific benchmark failed to consistently see a steady-state non-degradation of performance across multiple executions (i.e., at least one execution either failed to reach a steady state or saw reduced performance in the steady state). Even where an improved steady state was reached, it sometimes took many hundreds of iterations.{{sfn|Barrett|Bolz-Tereick|Killick|Mount|2017|p=12-13}} {{harvtxt|Traini|Cortellessa|Di Pompeo|Tucci|2022}} instead focused on the HotSpot virtual machine but with a much wider array of benchmarks,{{sfn|Traini|Cortellessa|Di Pompeo|Tucci|2022|p=17-23}} finding that 10.9% of process executions failed to reach a steady state of performance, and that 43.5% of benchmarks did not consistently attain a steady state across multiple executions.{{sfn|Traini|Cortellessa|Di Pompeo|Tucci|2022|p=26-29}}
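The classification the studies above perform on each benchmark run can be sketched in miniature. The cited papers use statistical changepoint analysis; this simplified stand-in instead splits per-iteration times into a warmup prefix and a remainder, and uses a crude coefficient-of-variation check with made-up thresholds, so it only illustrates the three outcome categories, not the papers' actual method.

```python
# Simplified sketch of classifying a benchmark run as reaching an
# improved steady state, a degraded steady state, or no steady state.
# The warmup split, variability limit, and mean comparison are all
# illustrative assumptions; the real studies use changepoint analysis.

from statistics import mean, pstdev

def classify_run(times, warmup_fraction=0.2, cv_limit=0.05):
    cut = max(1, int(len(times) * warmup_fraction))
    warmup, rest = times[:cut], times[cut:]
    cv = pstdev(rest) / mean(rest)   # relative variability after warmup
    if cv > cv_limit:
        return "no steady state"     # timings never settle
    if mean(rest) < mean(warmup):
        return "steady: improved"    # the hoped-for JIT outcome
    return "steady: degraded"        # settled, but slower than warmup

print(classify_run([9.0, 5.1, 5.0, 5.05, 5.0, 4.95]))  # steady: improved
print(classify_run([5.0, 7.9, 8.0, 8.1, 8.0, 7.95]))   # steady: degraded
print(classify_run([5.0, 9.0, 3.0, 8.0, 2.0, 7.0]))    # no steady state
```

The three example runs correspond to the three categories the studies report: the expected warmup-then-speedup curve, a run that stabilizes at *worse* performance, and a run whose timings never settle at all.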