Editing Optimizing compiler (section)
=== Code generator optimizations ===
;[[Register allocation]]: The most frequently used variables should be kept in processor registers for the fastest access. To decide which variables to place in registers, an interference graph is created: each variable is a vertex, and two variables are connected by an edge when they are in use at the same time (their live ranges intersect). The graph is colored, for example with [[Chaitin's algorithm]], using the same number of colors as there are registers. If the coloring fails, one variable is "spilled" to memory and the coloring is retried.
;[[Instruction selection]]: Most architectures, particularly [[Complex instruction set computer|CISC]] architectures and those with many [[addressing mode]]s, offer several different ways of performing a particular operation, using entirely different sequences of instructions. The job of the instruction selector is to choose well, overall, which instructions implement which operators in the low-level [[intermediate representation]]. For example, on many processors in the [[68000 family]] and on the x86 architecture, complex addressing modes can be used in statements such as <code>lea 25(a1,d5*4), a0</code>, allowing a single instruction to perform a significant amount of arithmetic with less storage.
;[[Instruction scheduling]]: Instruction scheduling is an important optimization for modern pipelined processors. It avoids stalls or bubbles in the pipeline by clustering instructions with no dependencies together, while taking care to preserve the original semantics.
;[[Rematerialization]]: Rematerialization recalculates a value instead of loading it from memory, eliminating the memory access. It is performed in tandem with register allocation to avoid spills.
;Code factoring: If several sequences of code are identical, or can be parameterized or reordered to be identical, they can be replaced with calls to a shared subroutine. This can often share code for subroutine set-up and sometimes tail-recursion.<ref name="keil">Cx51 Compiler Manual, version 09.2001, p. 155, Keil Software Incorporated.</ref>
;[[Trampoline (computing)|Trampolines]]: Many{{Citation needed|date=January 2018}} CPUs have smaller subroutine call instructions to access low memory. A compiler can save space by using these small calls in the main body of code. Jump instructions in low memory can access the routines at any address. This multiplies the space savings from code factoring.<ref name="keil" />
;Reordering computations: Based on [[integer linear programming]], restructuring compilers enhance data locality and expose more parallelism by reordering computations. Space-optimizing compilers may reorder code to lengthen sequences that can be factored into subroutines.
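The interference-graph coloring described under register allocation can be sketched in Python. This is only an illustration: the greedy most-constrained-first heuristic, the variable names, and the `color_interference_graph` helper are assumptions for the sketch, not Chaitin's actual algorithm, which uses a more involved simplify/spill/select process.

```python
# Sketch of graph-coloring register allocation (illustrative, not Chaitin's
# algorithm). Each variable is a vertex; an edge joins two variables whose
# live ranges overlap, so they cannot share a register.

def color_interference_graph(edges, variables, num_registers):
    """Greedy coloring; returns ({var: register}, None) on success,
    or (None, variable_to_spill) if no register is free for some variable."""
    neighbors = {v: set() for v in variables}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    assignment = {}
    # Color the most-constrained variables (most neighbors) first.
    for v in sorted(variables, key=lambda v: -len(neighbors[v])):
        used = {assignment[n] for n in neighbors[v] if n in assignment}
        free = [r for r in range(num_registers) if r not in used]
        if not free:
            return None, v          # coloring failed: spill v and retry
        assignment[v] = free[0]
    return assignment, None

# Three mutually overlapping live ranges need three registers; with only
# two, one variable must be spilled to memory.
regs, spill = color_interference_graph(
    [("a", "b"), ("b", "c"), ("a", "c")], ["a", "b", "c"], num_registers=2)
```

A real allocator would then insert load/store code for the spilled variable, rebuild the interference graph (the spill shortens live ranges), and retry the coloring.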
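The instruction scheduling idea can be illustrated with a minimal list scheduler for a hypothetical single-issue machine; the instruction names, latencies, and the `list_schedule` helper below are assumptions for the sketch, not any particular compiler's scheduler.

```python
# Minimal single-issue list-scheduling sketch (hypothetical instructions and
# latencies). An instruction issues only once everything it depends on has
# produced its result; independent instructions fill the cycles that would
# otherwise be pipeline stalls.

def list_schedule(deps, latency):
    """deps: {instr: set of prerequisite instrs};
    latency: {instr: cycles until its result is usable}.
    Returns {instr: issue cycle} for a machine issuing one instruction/cycle."""
    issue = {}
    remaining = set(deps)
    cycle = 0
    while remaining:
        ready = [i for i in remaining
                 if all(d in issue and issue[d] + latency[d] <= cycle
                        for d in deps[i])]
        if ready:
            chosen = min(ready)     # deterministic tie-break for the sketch
            issue[chosen] = cycle
            remaining.remove(chosen)
        cycle += 1
    return issue

# Two independent loads feed an add: the second load issues in the cycle
# after the first, hiding part of the load latency instead of stalling.
schedule = list_schedule(
    deps={"load1": set(), "load2": set(), "add": {"load1", "load2"}},
    latency={"load1": 2, "load2": 2, "add": 1})
```

A production scheduler would additionally model functional units, issue width, and register pressure, and would use a priority heuristic (such as critical-path length) rather than a simple tie-break.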