{{Short description|Optimization replacing a function call with that function's source code}}
{{Use American English|date=March 2019}}
{{More citations needed|date=December 2013}}
In [[computing]], '''inline expansion''', or '''inlining''', is a manual or [[compiler optimization]] that replaces a function [[call site]] with the body of the called function. Inline expansion is similar to [[macro expansion]], but occurs during compilation without changing the [[source code]] (the text), whereas macro expansion occurs before compilation and produces different text that is then processed by the [[compiler]].

Inlining is an important optimization, but has complex effects on performance.{{sfn|Chen|Chang|Conte|Hwu|1993}} As a [[rule of thumb]], some inlining will improve speed at a very minor cost in space, but excess inlining will hurt speed, because the inlined code consumes too much of the [[instruction cache]], and will also cost significant space. A survey of the modest academic literature on inlining from the 1980s and 1990s is given in Peyton Jones & Marlow 1999.{{sfn|Peyton Jones|Marlow|1999|loc=8. Related work, p. 17}}

== Overview ==

Inline expansion is similar to macro expansion in that the compiler places a new copy of the function body at each place the function is called. Inlined functions run a little faster than normal functions because the overhead of the function call is saved; however, there is a memory penalty: if a function is inlined 10 times, 10 copies of its body are inserted into the code. Hence inlining is best suited to small functions that are called often. In C++, the member functions of a class are implicitly inline if they are defined within the class definition, with no need for the ''inline'' [[reserved word]] (keyword); otherwise, the keyword is needed. The compiler may ignore the programmer's request to inline a function, particularly if the function is large. Inline expansion is used to eliminate the time overhead (excess time) incurred when a function is called.
It is typically used for functions that execute frequently. It also has a space benefit for very small functions, and is an enabling transformation for other [[Optimization (computer science)|optimizations]].

Without inline functions, the [[compiler]] decides which functions to inline, and the programmer has little or no control over the choice. Giving this degree of control to the programmer allows application-specific knowledge to be used in choosing which functions to inline.

Ordinarily, when a function is invoked, [[control flow|control]] is transferred to its definition by a [[branch (computer science)|branch]] or call instruction. With inlining, control drops through directly to the code for the function, without a branch or call instruction.

[[Compiler]]s usually implement [[Statement (computer science)|statements]] with inlining: loop conditions and loop bodies need [[lazy evaluation]], and this property is fulfilled when the code to compute loop conditions and loop bodies is inlined. Performance considerations are another reason to inline statements.

In the context of [[functional programming]] languages, inline expansion is usually followed by the [[Lambda calculus#.CE.B2-reduction|beta-reduction]] transformation.

A programmer might inline a function manually through [[copy-and-paste programming]], as a one-time operation on the [[source code]]. However, other methods of controlling inlining (see below) are preferable, because they avoid the bugs that arise when the programmer fixes a bug in the inlined function but overlooks a (possibly modified) duplicated copy of the original function body.
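As an illustrative sketch of the kind of small, frequently called function the overview describes, the following C99 fragment marks a helper as an inlining candidate. The function names are invented for this example, and the <code>inline</code> keyword is only a hint; the compiler may still emit an ordinary call:

```c
#include <stdio.h>

/* A small, frequently called function: a good inlining candidate.
   In C99, `inline` is only a hint; the compiler decides whether to
   actually expand the body at each call site. */
static inline int square(int x) {
    return x * x;
}

int sum_of_squares(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += square(i);  /* likely expanded in place at -O2 */
    return total;
}
```

At a typical optimization level, the call to <code>square</code> inside the loop disappears entirely, leaving a multiply and add in the loop body.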
== Effect on performance ==

The direct effect of this optimization is to improve time performance (by eliminating call overhead), at the cost of worsening space usage{{efn|Space usage is "number of instructions", and is both runtime space usage and the [[binary file]] size.}} (due to [[code duplication|duplicating]] the function body). The code expansion due to duplicating the function body dominates, except for simple cases,{{efn|Code size actually shrinks for very short functions, where the call overhead is larger than the body of the function, or single-use functions, where no duplication occurs.}} and thus the direct effect of inline expansion is to improve time at the cost of space.

However, the main benefit of inline expansion is to allow further optimizations and improved scheduling, due to increasing the size of the function body, as better optimization is possible on larger functions.{{sfn|Chen|Chang|Conte|Hwu|1993|loc=3.4 Function inline expansion, p. 14}} The ultimate impact of inline expansion on speed is complex, due to multiple effects on performance of the memory system (mainly [[instruction cache]]), which dominates performance on modern processors: depending on the specific program and cache, inlining particular functions can increase or decrease performance.{{sfn|Chen|Chang|Conte|Hwu|1993}}

The impact of inlining varies by [[programming language]] and program, due to different degrees of abstraction. In lower-level imperative languages such as [[C (programming language)|C]] and [[Fortran]] it is typically a 10–20% speed boost, with minor impact on code size, while in more abstract languages it can be significantly more important, due to the number of layers inlining removes, with an extreme example being [[Self (programming language)|Self]], where one compiler saw improvement factors of 4 to 55 by inlining.{{sfn|Peyton Jones|Marlow|1999|loc=8. Related work, p.
17}}

The direct benefits of eliminating a function call are:
* It eliminates the instructions needed for a [[function call]], both in the calling function and in the callee: placing arguments on the [[Stack-based memory allocation|stack]] or in [[Processor register|registers]], the call instruction itself, the [[function prologue]], then at return the [[function epilogue]], the [[return statement]], retrieving the return value, and removing arguments from the stack and restoring registers (if needed).
* Because registers are not needed to pass arguments, it reduces [[register spilling]].
* It eliminates having to pass references and then dereference them, when using [[call by reference]] (or [[call by address]], or [[call by sharing]]).

The main benefit of inlining, however, is the further optimizations it allows. Optimizations that cross function boundaries can be done without requiring [[interprocedural optimization]] (IPO): once inlining has been performed, additional ''intra''procedural optimizations ("global optimizations") become possible on the enlarged function body. For example:
* A [[Constant (computer programming)|constant]] passed as an argument can often be propagated to all instances of the matching parameter, or part of the function may be "hoisted out" of a loop (via [[loop-invariant code motion]]).
* [[Register allocation]] can be done across the larger function body.
* High-level optimizations, such as [[escape analysis]] and [[tail duplication]], can be performed on a larger scope and be more effective, especially if the compiler implementing those optimizations relies mainly on intra-procedural analysis.<ref name="prokopec2019"></ref>
These optimizations can be done without inlining, but they then require a significantly more complex compiler and linker (in case caller and callee are in separate compilation units).
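To illustrate the loop-invariant code motion case above, consider this hypothetical C fragment (the names <code>scaled</code> and <code>scale_all</code> are invented for the example). Once <code>scaled</code> is inlined into the loop, the subexpression that depends only on <code>base</code> becomes visibly loop-invariant, and the compiler can hoist it out of the loop:

```c
/* Hypothetical helper: scales a value by a factor derived from `base`. */
static int scaled(int v, int base) {
    int factor = base * base + 1;  /* depends only on `base` */
    return v * factor;
}

int scale_all(const int *a, int n, int base) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += scaled(a[i], base);  /* after inlining, `base * base + 1`
                                       is loop-invariant and can be
                                       computed once before the loop */
    return sum;
}
```

Without inlining, the compiler would need interprocedural analysis to discover that <code>factor</code> does not change between iterations.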
Conversely, in some cases a language specification may allow a program to make additional assumptions about arguments to procedures that it can no longer make after the procedure is inlined, preventing some optimizations. Smarter compilers (such as the [[Glasgow Haskell Compiler]] (GHC)) will track this, but naive inlining loses this information.

A further benefit of inlining for the memory system is:
* Eliminating branches and keeping code that is executed together close in memory improves instruction cache performance by improving [[locality of reference]] (spatial locality and sequentiality of instructions). This effect is smaller than that of optimizations which specifically target sequentiality, but it is significant.{{sfn|Chen|Chang|Conte|Hwu|1993|loc=3.4 Function inline expansion, p. 19–20}}

The direct cost of inlining is increased code size, due to duplicating the function body at each call site. However, inlining does not always increase code size: for very short functions, the function body is smaller than the code of a function call (at the caller, including argument and return value handling), as with trivial [[accessor method]]s or [[mutator method]]s (getters and setters); and a function that is used in only one place is not duplicated at all. Thus inlining may be minimized or eliminated if optimizing for code size, as is often the case in [[embedded system]]s.

Inlining also imposes a cost on performance, because the code expansion (due to duplication) hurts instruction cache performance.<ref name="webkit">{{cite web |url=https://www.webkit.org/blog/2826/unusual-speed-boost-size-matters/ |title=Unusual speed boost: size matters |author=Benjamin Poulain |date=August 8, 2013}}</ref> This is most significant if, before expansion, the [[working set]] of the program (or a hot section of code) fit in one level of the memory hierarchy (e.g., [[L1 cache]]), but after expansion it no longer fits, resulting in frequent cache misses at that level.
Due to the significant difference in performance at different levels of the hierarchy, this hurts performance considerably. At the highest level it can result in increased [[page fault]]s, catastrophic performance degradation due to [[thrashing (computer science)|thrashing]], or the program failing to run at all. This last case is rare in common desktop and server applications, where code size is small relative to available memory, but it can be an issue in resource-constrained environments such as embedded systems. One way to mitigate the problem is to split a function into a smaller hot path ([[fast path]]), which is inlined, and a larger cold path (slow path), which is not.<ref name="webkit"/>

Inlining hurting performance is mainly a problem for large functions that are used in many places, but the break-even point beyond which inlining reduces performance is difficult to determine and depends in general on the precise load, so it can be subject to manual optimization or [[profile-guided optimization]].<ref>See for example the [http://jikesrvm.org/Adaptive+Optimization+System Adaptive Optimization System] {{Webarchive|url=https://web.archive.org/web/20110809144146/http://jikesrvm.org/Adaptive+Optimization+System |date=2011-08-09}} in the [[Jikes RVM]] for Java.</ref> This is a similar issue to other code-expanding optimizations such as [[loop unrolling]], which also reduce the number of instructions processed but can decrease performance due to poorer cache behavior.

The precise effect of inlining on cache performance is complex. For small cache sizes (much smaller than the working set before expansion), the increased sequentiality dominates, and inlining improves cache performance. For cache sizes close to the working set, where inlining expands the working set so that it no longer fits in cache, this effect dominates and cache performance decreases. For cache sizes larger than the working set, inlining has negligible impact on cache performance.
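The fast-path/slow-path split mentioned above can be sketched in C as follows. This is an illustrative pattern, not a prescribed one: the function names are invented, <code>__attribute__((noinline))</code> is a GCC/Clang extension rather than standard C, and error checking is omitted for brevity:

```c
#include <stdlib.h>

/* Cold slow path: rarely executed, kept out of line so its body does
   not bloat every call site. __attribute__((noinline)) is a GCC/Clang
   extension; error checking of realloc is omitted for brevity. */
__attribute__((noinline))
static int grow_and_append(int **buf, int *cap, int len, int value) {
    *cap = *cap ? *cap * 2 : 8;
    *buf = realloc(*buf, (size_t)*cap * sizeof **buf);
    (*buf)[len] = value;
    return len + 1;
}

/* Hot fast path: small enough to be inlined at every call site. */
static inline int append(int **buf, int *cap, int len, int value) {
    if (len < *cap) {                 /* common case: just store */
        (*buf)[len] = value;
        return len + 1;
    }
    return grow_and_append(buf, cap, len, value);  /* rare case */
}
```

Only the few instructions of the common case are duplicated at each call site; the bulky reallocation logic stays in one place.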
Further, changes in cache design, such as [[load forwarding]], can offset the increase in cache misses.{{sfn|Chen|Chang|Conte|Hwu|1993|loc=3.4 Function inline expansion, p. 24–26}}

== Compiler support ==

Compilers use a variety of mechanisms to decide which function calls should be inlined; these can include manual hints from programmers for specific functions, together with overall control via [[command-line option]]s. Inlining is done automatically by many compilers in many languages, based on a judgment of whether it is beneficial, while in other cases it can be manually specified via compiler [[Directive (programming)|directives]], typically using a keyword or [[compiler directive]] called <code>inline</code>. Typically this only hints that inlining is desired, rather than requiring it, with the force of the hint varying by language and compiler. Typically, compiler developers keep the above performance issues in mind, and incorporate [[heuristics]] into their compilers that choose which functions to inline so as to improve performance, rather than worsen it, in most cases.

== Implementation ==

Once the [[compiler]] has decided to inline a particular function, performing the inlining operation itself is usually simple. Depending on whether the compiler inlines functions across code in different languages, it can inline on either a high-level [[intermediate representation]] (like [[abstract syntax tree]]s) or a low-level intermediate representation. In either case, the compiler simply computes the [[Parameter|arguments]], stores them in variables corresponding to the function's parameters, and then inserts the body of the function at the call site.

[[Linker (computing)|Linkers]] can also perform function inlining. When a linker inlines functions, it may inline functions whose source is not available, such as library functions (see [[link-time optimization]]). A [[runtime system]] can also inline functions.
[[Runtime (program lifecycle phase)|Runtime]] inlining can use dynamic profiling information to make better decisions about which functions to inline, as in the [[HotSpot (virtual machine)|Java HotSpot compiler]].<ref>[https://www.researchgate.net/publication/331408280_An_Optimization-Driven_Incremental_Inline_Substitution_Algorithm_for_Just-in-Time_Compilers] Description of the inliner used in the Graal JIT compiler for Java</ref>

Here is a simple example of inline expansion performed "by hand" at the source level in the [[C (programming language)|C language]]:

<syntaxhighlight lang="c">
int pred(int x)
{
    if (x == 0)
        return 0;
    else
        return x - 1;
}
</syntaxhighlight>

''Before inlining:''

<syntaxhighlight lang="c">
int func(int y)
{
    return pred(y) + pred(0) + pred(y + 1);
}
</syntaxhighlight>

''After inlining:''

<syntaxhighlight lang="c">
int func(int y)
{
    int tmp;
    if (y   == 0) tmp  = 0; else tmp  = y - 1;       /* (1) */
    if (0   == 0) tmp += 0; else tmp += 0 - 1;       /* (2) */
    if (y+1 == 0) tmp += 0; else tmp += (y + 1) - 1; /* (3) */
    return tmp;
}
</syntaxhighlight>

Note that this is only an example. In an actual C application, it would be preferable to use an inlining language feature such as [[parameterized macro]]s or [[inline function]]s to tell the compiler to transform the code in this way. The next section lists ways to optimize this code.

=== Inlining by assembly macro expansion ===

[[Macro assembler#Macros|Assembler macros]] provide an alternative approach to inlining, whereby a sequence of instructions can normally be generated inline by macro expansion from a single macro source statement (with zero or more parameters). One of the parameters might be an option to instead generate a one-time separate [[subroutine]] containing the sequence, invoked by a call in place of the inline expansion. Example:

 MOVE FROM=array1,TO=array2,INLINE=NO

=== Heuristics ===

A range of different heuristics have been explored for inlining.
Usually, an inlining algorithm has a certain code budget (an allowed increase in program size) and aims to inline the most valuable callsites without exceeding that budget. In this sense, many inlining algorithms are modeled after the [[knapsack problem]].<ref>[https://dl.acm.org/citation.cfm?id=359830] Scheifler, An Analysis of Inline Substitution for a Structured Programming Language</ref> To decide which callsites are more valuable, an inlining algorithm must estimate their benefit, i.e., the expected decrease in execution time. Commonly, inliners use profiling information about the frequency of execution of different code paths to estimate the benefits.<ref>[https://dl.acm.org/citation.cfm?id=351416] Matthew Arnold, Stephen Fink, Vivek Sarkar, and Peter F. Sweeney, A Comparative Study of Static and Profile-based Heuristics for Inlining</ref>

In addition to profiling information, newer [[just-in-time compiler]]s apply several more advanced heuristics, such as:<ref name="prokopec2019">[https://www.researchgate.net/publication/331408280_An_Optimization-Driven_Incremental_Inline_Substitution_Algorithm_for_Just-in-Time_Compilers] Prokopec et al., An Optimization Driven Incremental Inline Substitution Algorithm for Just-In-Time Compilers, CGO'19 publication about the inliner used in the Graal compiler for the JVM</ref>
* Speculating which code paths will result in the best reduction in execution time (by enabling additional compiler optimizations as a result of inlining) and increasing the perceived benefit of such paths.
* Adaptively adjusting the benefit-per-cost threshold for inlining based on the size of the compilation unit and the amount of code already inlined.
* Grouping subroutines into clusters, and inlining entire clusters instead of single subroutines. Here, the heuristic guesses the clusters by grouping those methods for which inlining just a proper subset of the cluster leads to worse performance than inlining nothing at all.
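A knapsack-style selection of the kind described above can be sketched as a greedy loop over estimated benefit-per-cost ratios. This is an illustrative simplification (real inliners interleave selection with compilation and re-estimate benefits as they go), and the data structure is invented for the example:

```c
#include <stddef.h>

/* Hypothetical call-site record: estimated benefit (e.g. derived from
   profiling) and cost (estimated code-size growth if inlined). */
struct callsite {
    double benefit;
    int cost;
    int inlined;
};

/* Greedy knapsack-style selection: repeatedly inline the callsite with
   the best benefit-per-cost ratio that still fits in the remaining
   budget. Returns the number of callsites chosen. */
int select_inlines(struct callsite *sites, size_t n, int budget) {
    int chosen = 0;
    for (;;) {
        size_t best = n;
        double best_ratio = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (sites[i].inlined || sites[i].cost > budget)
                continue;
            double ratio = sites[i].benefit / sites[i].cost;
            if (ratio > best_ratio) {
                best_ratio = ratio;
                best = i;
            }
        }
        if (best == n)
            break;  /* nothing left that fits in the budget */
        sites[best].inlined = 1;
        budget -= sites[best].cost;
        chosen++;
    }
    return chosen;
}
```

The greedy ratio rule is a classic approximation for the knapsack problem; it is not optimal in general, which is one reason production inliners layer further heuristics on top.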
== Benefits ==

Inline expansion itself is an optimization, since it eliminates overhead from calls, but it is much more important as an [[enabling transformation]]. That is, once the compiler expands a function body in the context of its call site—often with arguments that may be fixed [[Constant (mathematics)|constants]]—it may be able to do a variety of transformations that were not possible before. For example, a [[conditional branch]] may turn out to be always true or always false at a particular call site. This in turn may enable [[dead code elimination]], [[loop-invariant code motion]], or [[induction variable elimination]].
<!-- Need to talk more about how to automatically choose which functions to inline Need to talk more about inlining recursive functions -->
In the C example in the prior section, optimization opportunities abound. The compiler may follow this sequence of steps:
* The <code>tmp += 0</code> statements in the lines marked (2) and (3) do nothing. The compiler can remove them.
* The condition <code>0 == 0</code> is always true, so the compiler can replace the line marked (2) with the consequent, <code>tmp += 0</code> (which does nothing).
* The compiler can rewrite the condition <code>y+1 == 0</code> to <code>y == -1</code>.
* The compiler can reduce the expression <code>(y + 1) - 1</code> to <code>y</code>.
* The expressions <code>y</code> and <code>y+1</code> cannot both equal zero. This lets the compiler eliminate one test.
* In statements such as <code>if (y == 0) return y</code>, the value of <code>y</code> is known in the body, and can be inlined.

The new function looks like:

<syntaxhighlight lang="c">
int func(int y)
{
    if (y == 0)
        return 0;
    if (y == -1)
        return -2;
    return 2*y - 1;
}
</syntaxhighlight>

== Limits ==

Complete inline expansion is not always possible, due to [[Recursion (computer science)|recursion]]: recursively inline expanding the calls will not terminate.
There are various solutions, such as expanding a bounded amount, or analyzing the [[call graph]] and breaking loops at certain nodes (i.e., not expanding some edge in a recursive loop).{{sfn|Peyton Jones|Marlow|1999|loc=4. Ensuring Termination, pp. 6–9}} An identical problem occurs in macro expansion, as recursive expansion does not terminate; it is typically resolved by forbidding recursive macros (as in C and C++).

== Comparison with macros ==

Traditionally, in languages such as [[C (programming language)|C]], inline expansion was accomplished at the source level using [[parameterized macro]]s. Use of true inline functions, as are available in [[C99]], provides several benefits over this approach:
* In C, macro invocations do not perform [[type checking]], or even check that arguments are well-formed, whereas function calls usually do.
* In C, a macro cannot use the <code>return</code> keyword with the same meaning as a function would: a <code>return</code> in a macro body makes the function that requested the expansion return, rather than just ending the macro. In other words, a macro cannot return anything that is not the result of the last expression invoked inside it.
* Since C macros use mere textual substitution, this may result in unintended side effects and inefficiency due to re-evaluation of arguments and [[order of operations]].
* Compiler errors within macros are often difficult to understand, because they refer to the expanded code rather than the code the programmer typed. Thus, debugging information for inlined code is usually more helpful than that of macro-expanded code.
* Many constructs are awkward or impossible to express using macros, or use a significantly different syntax. Inline functions use the same syntax as regular functions, and can be inlined and un-inlined at will with ease.
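The re-evaluation-of-arguments pitfall in the list above can be demonstrated with a short C fragment (the names are invented for the example). The macro expands its argument textually, so an argument with a side effect runs twice, whereas the inline function evaluates it exactly once:

```c
/* A textual macro re-evaluates its argument wherever it appears. */
#define SQUARE_MACRO(x) ((x) * (x))

/* An inline function evaluates its argument exactly once. */
static inline int square_fn(int x) { return x * x; }

/* An argument with a side effect, to expose the difference. */
static int counter = 0;
static int next(void) { return ++counter; }
```

Calling <code>SQUARE_MACRO(next())</code> invokes <code>next()</code> twice and does not return a square at all, while <code>square_fn(next())</code> behaves as expected.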
Many compilers can also inline expand some [[Recursion (computer science)|recursive functions]];<ref>"[https://web.archive.org/web/20041013180231/http://home.pipeline.com/~hbaker1/Inlines.html Inlining Semantics for Subroutines which are Recursive]" by Henry G. Baker</ref> recursive macros are typically illegal.

[[Bjarne Stroustrup]], the designer of C++, likes to emphasize that macros should be avoided wherever possible, and advocates extensive use of inline functions.

== Selection methods ==

Many compilers aggressively inline functions wherever it is beneficial to do so. Although it can lead to larger [[executable]]s, aggressive inlining has nevertheless become more and more desirable as memory capacity has increased faster than CPU speed. Inlining is a critical optimization in languages for [[Functional programming|functional]] and [[object-oriented programming]], which rely on it to provide enough context for their typically small functions to make classical optimizations effective.

== Language support ==

Many languages, including [[Java (programming language)|Java]] and functional languages, do not provide language constructs for inline functions, but their compilers or [[Interpreter (computing)|interpreters]] often perform aggressive inline expansion.<ref name="prokopec2019"/> Other languages provide constructs for explicit hints, generally as compiler [[Directive (programming)|directives]] (pragmas). The language [[Ada (programming language)|Ada]] has a pragma for inline functions.
Functions in [[Common Lisp]] may be defined as inline by the <code>inline</code> declaration, as such:<ref>[http://www.lispworks.com/documentation/HyperSpec/Body/d_inline.htm#inline ''Declaration'' '''INLINE''', '''NOTINLINE'''] at the [[Common Lisp HyperSpec]]</ref>

<syntaxhighlight lang="lisp">
(declaim (inline dispatch))
(defun dispatch (x)
  (funcall (get (car x) 'dispatch) x))
</syntaxhighlight>

The [[Haskell]] compiler [[Glasgow Haskell Compiler|GHC]] tries to inline functions or values that are small enough, but inlining may be requested explicitly using a language pragma:<ref>[http://www.haskell.org/ghc/docs/7.0.4/html/users_guide/pragmas.html 7.13.5.1. INLINE pragma] Chapter 7. GHC Language Features</ref>

<syntaxhighlight lang="haskell">
key_function :: Int -> String -> (Bool, Double)
{-# INLINE key_function #-}
</syntaxhighlight>

=== C and C++ ===
{{Further|Inline function}}

[[C (programming language)|C]] and [[C++]] have an <code>inline</code> keyword, which serves as a hint that inlining may be beneficial; in newer versions of the languages, however, its main purpose is instead to alter the visibility and linkage of the function.<ref>https://en.cppreference.com/w/cpp/language/inline</ref>

=== Rust ===

In [[Rust (programming language)|Rust]], inlining is done automatically by the compiler.<ref name="rust-ref-inline">{{Cite web |title=Code generation - The Rust Reference |url=https://doc.rust-lang.org/nightly/reference/attributes/codegen.html?highlight=inline#r-attributes.codegen.inline |access-date=2025-05-01 |website=doc.rust-lang.org}}</ref> Rust provides an <code>#[inline]</code> attribute that suggests to the compiler that a function should be inlined, but does not guarantee it; the compiler may ignore even <code>#[inline(always)]</code>.
In debug mode, the compiler will never inline.<ref name="rust-stddev-inlining">{{Cite web |title=When to #[inline] - Standard library developers Guide |url=https://std-dev-guide.rust-lang.org/policy/inline.html |access-date=2025-05-01 |website=std-dev-guide.rust-lang.org}}</ref>

== See also ==
* [[Macro (computer science)]]
* [[Partial evaluation]]
* [[Tail-call elimination]]
* [[Code outlining]]

== Notes ==
{{Notelist}}

== References ==
{{Reflist}}
* {{Cite journal |last1=Chen |first1=W. Y. |last2=Chang |first2=P. P. |last3=Conte |first3=T. M. |last4=Hwu |first4=W. W. |date=September 1993 |title=The effect of code expanding optimizations on instruction cache design |url=http://impact.crhc.illinois.edu/shared/report/crhc-91-17.icache.pdf |journal=IEEE Transactions on Computers |volume=42 |issue=9 |pages=1045–1057 |doi=10.1109/12.241594 |hdl=2142/74513 |hdl-access=free}}
* {{cite tech report |last1=Peyton Jones |first1=Simon |author1-link=Simon Peyton Jones |last2=Marlow |first2=Simon |author2-link=Simon Marlow |date=September 1999 |url=http://research.microsoft.com/~simonpj/Papers/inlining/ |title=Secrets of the Glasgow Haskell Compiler Inliner}}

== External links ==
{{Wiktionary|in-line expansion|inlining}}
* "[http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.1036 Eliminating Virtual Function Calls in C++ Programs]"; Gerald Aigner, [[Urs Hölzle]]
* "[http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.187.7208 Reducing Indirect Function Call Overhead In C++ Programs]"; Brad Calder, Dirk Grunwald
* [https://web.archive.org/web/20060907183845/http://www.cs.arizona.edu/alto/Doc/alto.html ALTO - A Link-Time Optimizer for the DEC Alpha]
* "[https://web.archive.org/web/20160303171839/http://www.iecc.com/linker/linker11.html Advanced techniques]"; [[John R. Levine]]
* "[https://web.archive.org/web/20041010124209/http://www.codeproject.com/tips/gloption.asp Whole Program Optimization with Visual C++ .NET]"; Brandon Bray

{{Compiler optimizations}}

[[Category:Compiler optimizations]]
[[Category:Subroutines]]
[[Category:Programming language comparisons]]
<!-- Hidden categories below -->
[[Category:Articles with example C code]]
[[Category:Articles with example Haskell code]]
[[Category:Articles with example Lisp (programming language) code]]