== Common themes ==

Optimization includes the following, sometimes conflicting, themes.

;Optimize the common case: The common case may have unique properties that allow a ''[[fast path]]'' at the expense of a ''slow path''. If the fast path is taken more often, the result is better overall performance.
;Avoid redundancy: Reuse results that are already computed and store them for later use, instead of recomputing them.
;Less code: Remove unnecessary computations and intermediate values. Less work for the CPU, cache, and memory usually results in faster execution. Alternatively, in [[embedded systems]], less code brings a lower product cost.
;Fewer jumps by using ''straight-line code'', also called ''[[branch-free code]]'': Less complicated code. Jumps (conditional or [[unconditional branch]]es) interfere with the prefetching of instructions, thus slowing down code. Using inlining or loop unrolling can reduce branching, at the cost of increasing [[binary file]] size by the length of the repeated code. This tends to merge several [[basic block]]s into one.
;Locality: Code and data that are accessed closely together in time should be placed close together in memory to increase spatial [[locality of reference]].
;Exploit the memory hierarchy: Accesses to memory are increasingly expensive for each level of the [[memory hierarchy]], so place the most commonly used items in registers first, then caches, then main memory, before going to disk.
;Parallelize: Reorder operations to allow multiple computations to happen in parallel, either at the instruction, memory, or thread level.
;More precise information is better: The more precise the information the compiler has, the better it can employ any or all of these optimization techniques.
;Runtime metrics can help: Information gathered during a test run can be used in [[profile-guided optimization]]. Information gathered at runtime, ideally with minimal [[computational overhead|overhead]], can be used by a [[Just-in-time compilation|JIT]] compiler to dynamically improve optimization.
;Strength reduction: Replace complex, difficult, or expensive operations with simpler ones. For example, replacing division by a constant with multiplication by its reciprocal, or using [[induction variable analysis]] to replace multiplication by a loop index with addition (a source-level sketch of both rewrites follows this list).
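As an illustration of strength reduction, the sketch below shows the source-level equivalent of two rewrites a compiler might apply: replacing a multiplication by the loop index with an addition on an induction variable, and replacing a division by a constant with a multiplication by its reciprocal. This is only a hand-written sketch; real compilers perform these transformations on an intermediate representation rather than on source code, and the function names here are purely illustrative.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Before: each iteration multiplies the loop index by a constant stride. */
void scale_indices_naive(long *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = (long)(i * 8);     /* multiply by the loop index */
}

/* After: an induction variable advances by the stride each iteration,
   so the multiplication becomes an addition. */
void scale_indices_reduced(long *out, size_t n) {
    long v = 0;                     /* invariant: v == i * 8 at loop entry */
    for (size_t i = 0; i < n; i++) {
        out[i] = v;
        v += 8;                     /* cheaper addition replaces the multiply */
    }
}

/* Division by a constant rewritten as multiplication by its reciprocal.
   For floating point this rewrite is exact only when the reciprocal is
   exactly representable (as it is for powers of two); otherwise compilers
   typically require a relaxed-math option such as -ffast-math. */
double halve(double x) {
    return x * 0.5;                 /* instead of x / 2.0 */
}
</syntaxhighlight>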