==== Middle end ====
The middle end, also known as the ''optimizer'', performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code.<ref name="Hjort Blindell, Gabriel">{{Cite book |title=Instruction selection: Principles, methods, and applications |publisher=Springer |last=Blindell |first=Gabriel Hjort |isbn=978-3-31934019-7 |location=Switzerland |oclc=951745657 |date=2016-06-03}}</ref> The middle end contains those optimizations that are independent of the CPU architecture being targeted.

The main phases of the middle end include the following:

* [[Compiler analysis|Analysis]]: the gathering of program information from the intermediate representation derived from the input; [[data-flow analysis]] is used to build [[use-define chain]]s, together with [[dependence analysis]], [[alias analysis]], [[pointer analysis]], [[escape analysis]], etc. Accurate analysis is the basis for any compiler optimization. The [[control-flow graph]] of every compiled function and the [[call graph]] of the program are usually also built during the analysis phase (a toy sketch of use-define chains appears below).
* [[Compiler optimization|Optimization]]: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations include [[inline expansion]], [[dead-code elimination]], [[constant propagation]], [[loop transformation]] and even [[automatic parallelization]]. Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, [[dependence analysis]] is crucial for [[loop transformation]] (a toy constant-propagation sketch appears below).

The scope of compiler analyses and optimizations varies greatly; it may range from operating within a single [[basic block]], to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, [[peephole optimization]]s (sketched below) are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, [[interprocedural optimization]] requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously.

Interprocedural analysis and optimizations are common in modern commercial compilers from [[Hewlett-Packard|HP]], [[IBM]], [[Silicon Graphics|SGI]], [[Intel]], [[Microsoft]], and [[Sun Microsystems]]. The [[free software]] [[GNU Compiler Collection|GCC]] was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open-source compiler with a full analysis and optimization infrastructure is [[Open64]], which is used by many organizations for research and commercial purposes.

Because of the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
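As a concrete illustration of the analysis phase, the following is a minimal sketch of building use-define chains over straight-line code. The three-address tuple encoding, the sample instructions, and the <code>use_define_chains</code> helper are illustrative assumptions for this sketch, not the representation used by any particular compiler (real compilers handle branches and use richer IR data structures).

<syntaxhighlight lang="python">
# A minimal sketch of data-flow analysis on a toy three-address IR.
# Each instruction is (target, operator, operands...). Use-define
# chains link every use of a variable to the definition that reaches it.

instructions = [
    ("a", "const", 4),        # a = 4
    ("b", "const", 5),        # b = 5
    ("c", "add", "a", "b"),   # c = a + b
    ("a", "const", 7),        # a = 7  (new definition kills the old one)
    ("d", "mul", "a", "c"),   # d = a * c
]

def use_define_chains(code):
    """Map each (instruction index, used variable) to the index of the
    definition that reaches it, for straight-line code only."""
    current_def = {}   # variable -> index of its most recent definition
    chains = {}
    for i, (target, _op, *operands) in enumerate(code):
        for var in operands:
            if isinstance(var, str):           # skip literal constants
                chains[(i, var)] = current_def[var]
        current_def[target] = i                # this instruction defines target
    return chains

for (use_site, var), def_site in use_define_chains(instructions).items():
    print(f"use of {var!r} at instruction {use_site} "
          f"is defined at instruction {def_site}")
</syntaxhighlight>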
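Building on the same toy IR, the next sketch shows two middle-end optimizations working together: constant propagation (with folding) replaces variables known to hold constants, and dead-code elimination then removes definitions whose results are never used. Again, the pass structure and the <code>live_out</code> parameter are simplifying assumptions made for illustration.

<syntaxhighlight lang="python">
def constant_propagation(code):
    consts = {}            # variable -> known constant value
    out = []
    for target, op, *operands in code:
        # Substitute known constants into the operands.
        operands = [consts.get(v, v) if isinstance(v, str) else v
                    for v in operands]
        if op == "const":
            consts[target] = operands[0]
        elif all(isinstance(v, int) for v in operands):
            # All operands are known: fold the operation at compile time.
            value = {"add": lambda x, y: x + y,
                     "mul": lambda x, y: x * y}[op](*operands)
            consts[target] = value
            op, operands = "const", [value]
        else:
            consts.pop(target, None)   # target no longer a known constant
        out.append((target, op, *operands))
    return out

def dead_code_elimination(code, live_out):
    """Drop definitions never used later; live_out names the values
    the rest of the program still needs."""
    live = set(live_out)
    kept = []
    for target, op, *operands in reversed(code):
        if target in live:
            live.discard(target)
            live.update(v for v in operands if isinstance(v, str))
            kept.append((target, op, *operands))
    return list(reversed(kept))

program = [
    ("a", "const", 4),
    ("b", "const", 5),
    ("c", "add", "a", "b"),   # folds to c = 9
    ("t", "add", "a", "a"),   # dead: t is never used afterwards
    ("d", "mul", "c", "c"),   # folds to d = 81
]
optimized = dead_code_elimination(constant_propagation(program), {"d"})
print(optimized)   # [('d', 'const', 81)]
</syntaxhighlight>

Note the ordering: propagation turns <code>d</code> into a constant first, which is what lets the elimination pass discard every other definition; this mirrors how analysis results enable and feed subsequent optimizations.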
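Finally, a peephole optimizer illustrates the small-granularity end of the trade-off described above: it rewrites local patterns in the instruction stream with no knowledge of the surrounding context. The two rewrite rules shown are illustrative examples, not an exhaustive or canonical set.

<syntaxhighlight lang="python">
def peephole(code):
    out = []
    for target, op, *operands in code:
        # x = y + 0  ->  x = y   (algebraic simplification)
        if op == "add" and operands[1] == 0:
            out.append((target, "copy", operands[0]))
        # x = y * 2  ->  x = y + y   (strength reduction)
        elif op == "mul" and operands[1] == 2:
            out.append((target, "add", operands[0], operands[0]))
        else:
            out.append((target, op, *operands))
    return out

print(peephole([("a", "add", "x", 0), ("b", "mul", "x", 2)]))
# [('a', 'copy', 'x'), ('b', 'add', 'x', 'x')]
</syntaxhighlight>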