== Forward and reverse accumulation == === Chain rule of partial derivatives of composite functions === Fundamental to automatic differentiation is the decomposition of differentials provided by the [[chain rule]] of [[partial derivative]]s of [[function composition|composite functions]]. For the simple composition <math display="block">\begin{align} y &= f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3 \\ w_0 &= x \\ w_1 &= h(w_0) \\ w_2 &= g(w_1) \\ w_3 &= f(w_2) = y \end{align}</math> the chain rule gives <math display="block">\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \frac{\partial w_1}{\partial x} = \frac{\partial f(w_2)}{\partial w_2} \frac{\partial g(w_1)}{\partial w_1} \frac{\partial h(w_0)}{\partial x}</math> === Two types of automatic differentiation === Usually, two distinct modes of automatic differentiation are presented. * '''forward accumulation''' (also called '''bottom-up''', '''forward mode''', or '''tangent mode''') * '''reverse accumulation''' (also called '''top-down''', '''reverse mode''', or '''adjoint mode''') Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute <math>\partial w_1/ \partial x</math> and then <math>\partial w_2/\partial w_1</math> and lastly <math>\partial y/\partial w_2</math>), while reverse accumulation traverses from outside to inside (first compute <math>\partial y/\partial w_2</math> and then <math>\partial w_2/\partial w_1</math> and lastly <math>\partial w_1/\partial x</math>). More succinctly, * Forward accumulation computes the recursive relation: <math>\frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \frac{\partial w_{i-1}}{\partial x}</math> with <math>w_3 = y</math>, and, * Reverse accumulation computes the recursive relation: <math>\frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \frac{\partial w_{i+1}}{\partial w_{i}}</math> with <math>w_0 = x</math>. The value of the partial derivative, called the ''seed'', is propagated forward or backward and is initially <math>\frac{\partial x}{\partial x}=1</math> or <math>\frac{\partial y}{\partial y}=1</math>. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable <math>x_1,x_2,\dots,x_n</math> a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (<math>\frac{\partial x_1}{\partial x_1}=1</math>) and of all others to zero (<math>\frac{\partial x_2}{\partial x_1}= \dots = \frac{\partial x_n}{\partial x_1} = 0</math>). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass. Which of these two types should be used depends on the sweep count. The [[Computational complexity theory|computational complexity]] of one sweep is proportional to the complexity of the original code. * Forward accumulation is more efficient than reverse accumulation for functions {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with {{math|''n'' ≪ ''m''}} as only {{math|''n''}} sweeps are necessary, compared to {{math|''m''}} sweeps for reverse accumulation. 
* Reverse accumulation is more efficient than forward accumulation for functions {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with {{math|''n'' ≫ ''m''}} as only {{math|''m''}} sweeps are necessary, compared to {{math|''n''}} sweeps for forward accumulation. [[Backpropagation]] of errors in multilayer perceptrons, a technique used in [[machine learning]], is a special case of reverse accumulation.<ref name="baydin2018automatic" /> Forward accumulation was introduced by R.E. Wengert in 1964.<ref name="Wengert1964"/> According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown.<ref name="grie2012">{{cite book |last=Griewank |first=Andreas |title=Optimization Stories |chapter=Who invented the reverse mode of differentiation? |year=2012 |series=Documenta Mathematica Series |volume= 6|pages=389–400 |doi=10.4171/dms/6/38 |doi-access=free |isbn=978-3-936609-58-5 |chapter-url=https://ftp.gwdg.de/pub/misc/EMIS/journals/DMJDMV/vol-ismp/52_griewank-andreas-b.pdf }}</ref> [[Seppo Linnainmaa]] published reverse accumulation in 1976.<ref name="lin1976">{{cite journal |last=Linnainmaa |first=Seppo |year=1976 |title=Taylor Expansion of the Accumulated Rounding Error |journal=BIT Numerical Mathematics |volume=16 |issue=2 |pages=146–160 |doi=10.1007/BF01931367 |s2cid=122357351 }}</ref> === Forward accumulation === [[File:ForwardAD.png|thumb|Forward accumulation]] In forward accumulation AD, one first fixes the ''independent variable'' with respect to which differentiation is performed and computes the derivative of each sub-[[expression (mathematics)|expression]] recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the ''inner'' functions in the chain rule: <math display="block">\begin{align} \frac{\partial y}{\partial x} &= \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x} \\[6pt] &= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x}\right) \\[6pt] &= \frac{\partial y}{\partial w_{n-1}} \left(\frac{\partial w_{n-1}}{\partial w_{n-2}} \left(\frac{\partial w_{n-2}}{\partial w_{n-3}} \frac{\partial w_{n-3}}{\partial x}\right)\right) \\[6pt] &= \cdots \end{align}</math> This can be generalized to multiple variables as a matrix product of [[Jacobian matrix and determinant|Jacobian]]s. Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable <math>w_i</math> is augmented with its derivative <math>\dot w_i</math> (stored as a numerical value, not a symbolic expression), <math display="block">\dot w_i = \frac{\partial w_i}{\partial x}</math> as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. 
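One common way to realize this augmentation in code is to pair each numerical value with its dot-derivative and to overload the elementary operations so that both are updated in sync. The following minimal C++ sketch illustrates the idea for the example function <math>y = x_1 x_2 + \sin x_1</math> used below; it is only an illustration under assumed names (the type <code>Dual</code> and the small set of overloaded operations are choices made for this sketch, not the interface of any particular AD tool and not the implementation given later in this section).

<syntaxhighlight lang="cpp">
#include <cmath>
#include <iostream>

// A value paired with its dot-derivative with respect to the chosen independent variable.
struct Dual {
    double value;      // w_i
    double derivative; // dot(w_i) = ∂w_i/∂x
};

// Each elementary operation propagates the derivative by the chain rule.
Dual operator+(Dual a, Dual b) { return {a.value + b.value, a.derivative + b.derivative}; }
Dual operator*(Dual a, Dual b) { return {a.value * b.value, a.derivative * b.value + a.value * b.derivative}; }
Dual sin(Dual a) { return {std::sin(a.value), std::cos(a.value) * a.derivative}; }

// The example function y = x1 * x2 + sin(x1).
Dual f(Dual x1, Dual x2) { return x1 * x2 + sin(x1); }

int main() {
    // One sweep per independent variable: the seed 1 selects the variable
    // with respect to which the derivative is taken; all other seeds are 0.
    Dual sweep1 = f({2.0, 1.0}, {3.0, 0.0}); // seeds: dot(x1) = 1, dot(x2) = 0
    Dual sweep2 = f({2.0, 0.0}, {3.0, 1.0}); // seeds: dot(x1) = 0, dot(x2) = 1
    std::cout << "y = " << sweep1.value             // x1*x2 + sin(x1)
              << ", ∂y/∂x1 = " << sweep1.derivative // x2 + cos(x1)
              << ", ∂y/∂x2 = " << sweep2.derivative // x1
              << std::endl;
    return 0;
}
</syntaxhighlight>

Seeding <math>\dot x_1 = 1, \dot x_2 = 0</math> yields <math>\partial y/\partial x_1</math> in one sweep, and a second sweep with the seeds exchanged yields <math>\partial y/\partial x_2</math>, in line with the seeding rule stated above.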
Using the chain rule, if <math>w_i</math> has predecessors in the computational graph: :<math>\dot w_i = \sum_{j \in \{\text{predecessors of i}\}} \frac{\partial w_i}{\partial w_j} \dot w_j</math> [[Image:ForwardAccumulationAutomaticDifferentiation.png|right|thumb|300px|Figure 2: Example of forward accumulation with computational graph]] As an example, consider the function: <math display="block">\begin{align} y &= f(x_1, x_2) \\ &= x_1 x_2 + \sin x_1 \\ &= w_1 w_2 + \sin w_1 \\ &= w_3 + w_4 \\ &= w_5 \end{align}</math> For clarity, the individual sub-expressions have been labeled with the variables <math>w_i</math>. The choice of the independent variable to which differentiation is performed affects the ''seed'' values {{math|''ẇ''<sub>1</sub>}} and {{math|''ẇ''<sub>2</sub>}}. Given interest in the derivative of this function with respect to {{math|''x''<sub>1</sub>}}, the seed values should be set to: <math display="block">\begin{align} \dot w_1 = \frac{\partial w_1}{\partial x_1} = \frac{\partial x_1}{\partial x_1} = 1 \\ \dot w_2 = \frac{\partial w_2}{\partial x_1} = \frac{\partial x_2}{\partial x_1} = 0 \end{align}</math> With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph. :{| class="wikitable" !Operations to compute value !!Operations to compute derivative |- |<math>w_1 = x_1</math> || <math>\dot w_1 = 1</math> (seed) |- |<math>w_2 = x_2</math> || <math>\dot w_2 = 0</math> (seed) |- |<math>w_3 = w_1 \cdot w_2</math> || <math>\dot w_3 = w_2 \cdot \dot w_1 + w_1 \cdot \dot w_2</math> |- |<math>w_4 = \sin w_1</math> || <math>\dot w_4 = \cos w_1 \cdot \dot w_1</math> |- |<math>w_5 = w_3 + w_4</math> || <math>\dot w_5 = \dot w_3 + \dot w_4</math> |} To compute the [[gradient]] of this example function, which requires not only <math>\tfrac{\partial y}{\partial x_1}</math> but also <math>\tfrac{\partial y}{\partial x_2}</math>, an ''additional'' sweep is performed over the computational graph using the seed values <math>\dot w_1 = 0; \dot w_2 = 1</math>. ==== Implementation ==== ===== Pseudocode ===== Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression ''Z'' to be derived with regard to a variable ''V''. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated.<ref name=demm22>{{cite book|author = Maximilian E. 
Schüle, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]] |title=Proceedings of the Sixth Workshop on Data Management for End-To-End Machine Learning |chapter=LLVM code optimisation for automatic differentiation |date=2022|pages=1–4 |doi = 10.1145/3533028.3533302|isbn=9781450393751 |s2cid=248853034 |language=English}}</ref> <syntaxhighlight lang="cpp"> tuple<float,float> evaluateAndDerive(Expression Z, Variable V) { if isVariable(Z) if (Z = V) return {valueOf(Z), 1}; else return {valueOf(Z), 0}; else if (Z = A + B) {a, a'} = evaluateAndDerive(A, V); {b, b'} = evaluateAndDerive(B, V); return {a + b, a' + b'}; else if (Z = A - B) {a, a'} = evaluateAndDerive(A, V); {b, b'} = evaluateAndDerive(B, V); return {a - b, a' - b'}; else if (Z = A * B) {a, a'} = evaluateAndDerive(A, V); {b, b'} = evaluateAndDerive(B, V); return {a * b, b * a' + a * b'}; } </syntaxhighlight> ===== C++ ===== <syntaxhighlight lang="cpp"> #include <iostream> struct ValueAndPartial { float value, partial; }; struct Variable; struct Expression { virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0; }; struct Variable: public Expression { float value; Variable(float value): value(value) {} ValueAndPartial evaluateAndDerive(Variable *variable) { float partial = (this == variable) ? 1.0f : 0.0f; return {value, partial}; } }; struct Plus: public Expression { Expression *a, *b; Plus(Expression *a, Expression *b): a(a), b(b) {} ValueAndPartial evaluateAndDerive(Variable *variable) { auto [valueA, partialA] = a->evaluateAndDerive(variable); auto [valueB, partialB] = b->evaluateAndDerive(variable); return {valueA + valueB, partialA + partialB}; } }; struct Multiply: public Expression { Expression *a, *b; Multiply(Expression *a, Expression *b): a(a), b(b) {} ValueAndPartial evaluateAndDerive(Variable *variable) { auto [valueA, partialA] = a->evaluateAndDerive(variable); auto [valueB, partialB] = b->evaluateAndDerive(variable); return {valueA * valueB, valueB * partialA + valueA * partialB}; } }; int main () { // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3) Variable x(2), y(3); Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2); float xPartial = z.evaluateAndDerive(&x).partial; float yPartial = z.evaluateAndDerive(&y).partial; std::cout << "∂z/∂x = " << xPartial << ", " << "∂z/∂y = " << yPartial << std::endl; // Output: ∂z/∂x = 7, ∂z/∂y = 8 return 0; } </syntaxhighlight> === Reverse accumulation === [[File:AutoDiff.webp|thumb|Reverse accumulation]] In reverse accumulation AD, the ''dependent variable'' to be differentiated is fixed and the derivative is computed ''with respect to'' each sub-[[expression (mathematics)|expression]] recursively. 
In a pen-and-paper calculation, the derivative of the ''outer'' functions is repeatedly substituted in the chain rule: <math display="block">\begin{align} \frac{\partial y}{\partial x} &= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x}\\ &= \left(\frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}\\ &= \left(\left(\frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2}\right) \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}\\ &= \cdots \end{align}</math> In reverse accumulation, the quantity of interest is the ''adjoint'', denoted with a bar <math>\bar w_i</math>; it is a derivative of a chosen dependent variable with respect to a subexpression <math>w_i</math>: <math display="block">\bar w_i = \frac{\partial y}{\partial w_i}</math> Using the chain rule, if <math>w_i</math> has successors in the computational graph: :<math>\bar w_i = \sum_{j \in \{\text{successors of i}\}} \bar w_j \frac{\partial w_j}{\partial w_i}</math> Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only [[space–time tradeoff|half the work]] when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables {{math|''w''<sub>''i''</sub>}} as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list<ref>{{cite journal|last1=Bartholomew-Biggs|first1=Michael| last2=Brown|first2=Steven|last3=Christianson|first3=Bruce|last4=Dixon|first4=Laurence|date=2000|title=Automatic differentiation of algorithms|journal=Journal of Computational and Applied Mathematics| volume=124|issue=1–2|pages=171–190| doi=10.1016/S0377-0427(00)00422-2|bibcode=2000JCoAM.124..171B|hdl=2299/3010|hdl-access=free}}</ref> (however, Wengert published forward accumulation, not reverse accumulation<ref name="Wengert1964">{{cite journal|author=R.E. Wengert|title=A simple automatic derivative evaluation program|journal=Comm. ACM|volume=7 |issue=8|year=1964|pages=463–464|doi=10.1145/355586.364791|s2cid=24039274|doi-access=free}}</ref>), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as [[rematerialization]]. [[checkpointing scheme|Checkpointing]] is also used to save intermediary states. [[Image:ReverseaccumulationAD.png|right|thumb|300px|Figure 3: Example of reverse accumulation with computational graph]] The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order): {{block indent| ; Operations to compute derivative :<math>\bar w_5 = 1 \text{ (seed)}</math> :<math>\bar w_4 = \bar w_5 \cdot 1</math> :<math>\bar w_3 = \bar w_5 \cdot 1</math> :<math>\bar w_2 = \bar w_3 \cdot w_1</math> :<math>\bar w_1 = \bar w_3 \cdot w_2 + \bar w_4 \cdot \cos w_1</math> }} The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. 
This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint;{{efn|In terms of weight matrices, the adjoint is the [[transpose]]. Addition is the [[covector]] <math>[1 \cdots 1]</math>, since <math>[1 \cdots 1]\left[\begin{smallmatrix}x_1 \\ \vdots \\ x_n \end{smallmatrix}\right] = x_1 + \cdots + x_n,</math> and fanout is the vector <math>\left[\begin{smallmatrix}1 \\ \vdots \\ 1 \end{smallmatrix}\right],</math> since <math>\left[\begin{smallmatrix}1 \\ \vdots \\ 1 \end{smallmatrix}\right][x] = \left[\begin{smallmatrix}x \\ \vdots \\ x \end{smallmatrix}\right].</math>}} a [[unary operation|unary]] function {{math|1=''y'' = ''f''(''x'')}} in the primal causes {{math|1=''x̄'' = ''ȳ'' ''f''′(''x'')}} in the adjoint; etc. ==== Implementation ==== ===== Pseudo code ===== Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression ''Z'' to be derived and ''seeded'' with the derived value of the parent expression. For the top expression, Z differentiated with respect to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current ''seed'' value to the derivative expression.<ref name=ssdbm21>{{cite book|author= Maximilian E. Schüle, Harald Lang, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]], Stephan Günnemann|title=33rd International Conference on Scientific and Statistical Database Management |chapter=In-Database Machine Learning with SQL on GPUs |date=2021|pages=25–36 |doi = 10.1145/3468791.3468840|isbn=9781450384131 |s2cid=235386969 |language=English}}</ref><ref name=dpd>{{cite journal|author= Maximilian E. 
Schüle, Harald Lang, Maximilian Springer, [[Alfons Kemper]], [[Thomas Neumann]], Stephan Günnemann|title=Recursive SQL and GPU-support for in-database machine learning|journal=Distributed and Parallel Databases|date=2022|volume=40 |issue=2–3 |pages=205–259 |doi = 10.1007/s10619-022-07417-7|s2cid=250412395 |language=English|doi-access=free}}</ref> <syntaxhighlight lang="cpp"> void derive(Expression Z, float seed) { if isVariable(Z) partialDerivativeOf(Z) += seed; else if (Z = A + B) derive(A, seed); derive(B, seed); else if (Z = A - B) derive(A, seed); derive(B, -seed); else if (Z = A * B) derive(A, valueOf(B) * seed); derive(B, valueOf(A) * seed); } </syntaxhighlight> ===== C++ ===== <syntaxhighlight lang="cpp"> #include <iostream> struct Expression { float value; virtual void evaluate() = 0; virtual void derive(float seed) = 0; }; struct Variable: public Expression { float partial; Variable(float value) { this->value = value; partial = 0.0f; } void evaluate() {} void derive(float seed) { partial += seed; } }; struct Plus: public Expression { Expression *a, *b; Plus(Expression *a, Expression *b): a(a), b(b) {} void evaluate() { a->evaluate(); b->evaluate(); value = a->value + b->value; } void derive(float seed) { a->derive(seed); b->derive(seed); } }; struct Multiply: public Expression { Expression *a, *b; Multiply(Expression *a, Expression *b): a(a), b(b) {} void evaluate() { a->evaluate(); b->evaluate(); value = a->value * b->value; } void derive(float seed) { a->derive(b->value * seed); b->derive(a->value * seed); } }; int main () { // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3) Variable x(2), y(3); Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2); z.evaluate(); std::cout << "z = " << z.value << std::endl; // Output: z = 19 z.derive(1); std::cout << "∂z/∂x = " << x.partial << ", " << "∂z/∂y = " << y.partial << std::endl; // Output: ∂z/∂x = 7, ∂z/∂y = 8 return 0; } </syntaxhighlight> === Beyond forward and reverse accumulation === Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of {{math|''f'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} with a minimum number of arithmetic operations is known as the ''optimal Jacobian accumulation'' (OJA) problem, which is [[NP-complete]].<ref>{{Cite journal|first=Uwe|last=Naumann|journal=Mathematical Programming|volume=112|issue=2|pages=427–441|date=April 2008|doi=10.1007/s10107-006-0042-z|title=Optimal Jacobian accumulation is NP-complete|citeseerx=10.1.1.320.5665|s2cid=30219572}}</ref> Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
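As a concrete illustration of the sweep counts discussed above, the following sketch assembles the full Jacobian of a map {{math|''f'' : '''R'''<sup>2</sup> → '''R'''<sup>3</sup>}} column by column using ''n'' = 2 forward sweeps; a reverse-mode implementation such as the one above would instead obtain it row by row in ''m'' = 3 sweeps. The map ''f'' is chosen arbitrarily for this example, and the <code>Dual</code> type simply repeats the value/derivative pairing used in forward accumulation; none of the names refer to an established library.

<syntaxhighlight lang="cpp">
#include <array>
#include <cmath>
#include <iostream>

// Value/derivative pair for one forward sweep.
struct Dual {
    double value;
    double derivative; // derivative with respect to the input seeded with 1
};

Dual operator+(Dual a, Dual b) { return {a.value + b.value, a.derivative + b.derivative}; }
Dual operator*(Dual a, Dual b) { return {a.value * b.value, a.derivative * b.value + a.value * b.derivative}; }
Dual sin(Dual a) { return {std::sin(a.value), std::cos(a.value) * a.derivative}; }

// An arbitrary example map f : R^2 -> R^3.
std::array<Dual, 3> f(Dual x1, Dual x2) {
    return {{ x1 * x2, sin(x1) + x2, x1 * x1 + x2 * x2 }};
}

int main() {
    const double x1 = 2.0, x2 = 3.0;
    double jacobian[3][2]; // m = 3 rows, n = 2 columns

    // One forward sweep per input: seeding input j with 1 and the rest with 0
    // produces the j-th column of the Jacobian.
    for (int j = 0; j < 2; ++j) {
        Dual in1{x1, j == 0 ? 1.0 : 0.0};
        Dual in2{x2, j == 1 ? 1.0 : 0.0};
        std::array<Dual, 3> out = f(in1, in2);
        for (int i = 0; i < 3; ++i)
            jacobian[i][j] = out[i].derivative;
    }

    for (int i = 0; i < 3; ++i)
        std::cout << jacobian[i][0] << " " << jacobian[i][1] << std::endl;
    return 0;
}
</syntaxhighlight>

With many inputs and few outputs the situation reverses: seeding the output and sweeping backwards, as in the reverse-mode implementation above, yields one row of the Jacobian per sweep.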