{{Short description|Calculus property}}
In [[calculus]], the [[derivative]] of any [[linear combination]] of [[function (mathematics)|function]]s equals the same linear combination of the derivatives of the functions;<ref>{{citation|title=Calculus: Single Variable, Volume 1|first1=Brian E.|last1=Blank|first2=Steven George|last2=Krantz|publisher=Springer|year=2006|isbn=9781931914598|page=177|url=https://books.google.com/books?id=hMY8lbX87Y8C&pg=PA177}}.</ref> this property is known as '''linearity of differentiation''', the '''rule of linearity''',<ref>{{citation|title=Calculus, Volume 1|first=Gilbert|last=Strang|publisher=SIAM|year=1991|isbn=9780961408824|pages=71–72|url=https://books.google.com/books?id=OisInC1zvEMC&pg=PA71}}.</ref> or the [[Superposition principle|superposition rule]] for differentiation.<ref>{{citation|title=Calculus Using Mathematica|first=K. D.|last=Stroyan|publisher=Academic Press|year=2014|isbn=9781483267975|page=89|url=https://books.google.com/books?id=C8DiBQAAQBAJ&pg=PA89}}.</ref> It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the [[sum rule in differentiation|sum rule]] (the derivative of the sum of two functions is the sum of the derivatives) and the [[constant factor rule in differentiation|constant factor rule]] (the derivative of a constant multiple of a function is the same constant multiple of the derivative).<ref>{{citation|title=Practical Analysis in One Variable|series=[[Undergraduate Texts in Mathematics]]|first=Donald|last=Estep|publisher=Springer|year=2002|isbn=9780387954844|pages=259–260|url=https://books.google.com/books?id=trC-jTRffesC&pg=PA259|contribution=20.1 Linear Combinations of Functions}}.</ref><ref>{{citation|title=Understanding Real Analysis|first=Paul|last=Zorn|publisher=CRC Press|year=2010|isbn=9781439894323|page=184|url=https://books.google.com/books?id=1WLNBQAAQBAJ&pg=PA184}}.</ref> Thus it can be said that differentiation is [[linear map|linear]], or the [[differential operator]] is a [[linear map|linear]] operator.<ref>{{citation|title=Finite-Dimensional Linear Algebra|series=Discrete Mathematics and Its Applications|first=Mark S.|last=Gockenbach|publisher=CRC Press|year=2011|isbn=9781439815649|page=103|url=https://books.google.com/books?id=xP0RFUHWQI0C&pg=PA103}}.</ref>

==Statement and derivation==
Let {{math|''f''}} and {{math|''g''}} be functions, with {{math|''α''}} and {{math|''β''}} constants. Now consider
:<math>\frac{\mbox{d}}{\mbox{d} x} ( \alpha \cdot f(x) + \beta \cdot g(x) ).</math>
By the [[sum rule in differentiation]], this is
:<math>\frac{\mbox{d}}{\mbox{d} x} ( \alpha \cdot f(x) ) + \frac{\mbox{d}}{\mbox{d} x} (\beta \cdot g(x)),</math>
and by the [[constant factor rule in differentiation]], this reduces to
:<math>\alpha \cdot f'(x) + \beta \cdot g'(x).</math>
Therefore,
:<math>\frac{\mbox{d}}{\mbox{d} x}(\alpha \cdot f(x) + \beta \cdot g(x)) = \alpha \cdot f'(x) + \beta \cdot g'(x).</math>
Omitting the [[Bracket (mathematics)#Functions|bracket]]s, this is often written as:
:<math>(\alpha \cdot f + \beta \cdot g)' = \alpha \cdot f'+ \beta \cdot g'.</math>
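For example, with <math>f(x) = x^2</math>, <math>g(x) = \sin x</math>, <math>\alpha = 3</math>, and <math>\beta = 2</math>, linearity gives
:<math>\frac{\mbox{d}}{\mbox{d} x}\left( 3x^2 + 2\sin x \right) = 3 \cdot \frac{\mbox{d}}{\mbox{d} x}(x^2) + 2 \cdot \frac{\mbox{d}}{\mbox{d} x}(\sin x) = 6x + 2\cos x.</math>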
==Detailed proofs/derivations from definition==
We can prove the entire linearity principle at once, or we can prove the individual steps (the constant factor rule and the sum rule) separately. Here, both approaches will be shown.

Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to <math>1</math>. The difference rule is obtained by setting the first constant coefficient to <math>1</math> and the second constant coefficient to <math>-1</math>. The constant factor rule is obtained by setting either the second constant coefficient or the second function to <math>0</math>. (From a technical standpoint, the [[Domain of a function|domain]] of the second function must also be considered; one way to avoid issues is to set the second function equal to the first function and the second constant coefficient equal to <math>0</math>. One could also define both the second constant coefficient and the second function to be <math>0</math>, where the domain of the second function is a superset of the domain of the first function, among other possibilities.)

Conversely, if we first prove the constant factor rule and the sum rule, then linearity and the difference rule follow. To prove linearity, write each of the two functions as another function multiplied by a constant coefficient; then, as shown in the derivation of the previous section, we can first apply the sum rule while differentiating and then apply the constant factor rule, which yields the conclusion for linearity. To prove the difference rule, redefine the second function as another function multiplied by the constant coefficient <math>-1</math>; simplifying then gives the difference rule for differentiation.

In the proofs/derivations below,<ref>{{cite web |title=Differentiation Rules |url=https://courseware.cemc.uwaterloo.ca/11/assignments/47/6 |website=CEMC's Open Courseware |access-date=3 May 2022}}</ref><ref>{{cite web |last1=Dawkins |first1=Paul |title=Proof Of Various Derivative Properties |url=https://tutorial.math.lamar.edu/Classes/CalcI/DerivativeProofs.aspx |website=Paul's Online Notes |access-date=3 May 2022}}</ref> the coefficients <math>a, b</math> are used; they correspond to the coefficients <math>\alpha, \beta</math> above.
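Written out with these coefficients, the special cases described above are:
:<math>a = b = 1:\qquad (f + g)' = f' + g',</math>
:<math>a = 1,\ b = -1:\qquad (f - g)' = f' - g',</math>
:<math>g = f,\ b = 0:\qquad (a \cdot f)' = a \cdot f'.</math>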
===Linearity (directly)===
Let <math>a, b \in \mathbb{R}</math>. Let <math>f, g</math> be functions. Let <math>j</math> be a function, where <math>j</math> is defined only where <math>f</math> and <math>g</math> are both defined. (In other words, the domain of <math>j</math> is the intersection of the domains of <math>f</math> and <math>g</math>.) Let <math>x</math> be in the domain of <math>j</math>. Let <math>j(x) = af(x) + bg(x)</math>.

We want to prove that <math>j^{\prime}(x) = af^{\prime}(x) + bg^{\prime}(x)</math>.

By definition, we can see that
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \frac{j(x + h) - j(x)}{h} \\ &= \lim_{h \rightarrow 0} \frac{\left( af(x + h) + bg(x + h) \right) - \left( af(x) + bg(x) \right)}{h} \\ &= \lim_{h \rightarrow 0} \left( a\frac{f(x + h) - f(x)}{h} + b\frac{g(x + h) - g(x)}{h} \right). \end{align}</math>
In order to use the limit law for the sum of limits, we need to know that <math display="inline">\lim_{h \to 0} a\frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \to 0} b\frac{g(x + h) - g(x)}{h}</math> both individually exist. For these smaller limits, we need to know that <math display="inline">\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \to 0} \frac{g(x + h) - g(x)}{h}</math> both individually exist in order to use the coefficient law for limits.

By definition, <math display="inline">f^{\prime}(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">g^{\prime}(x) = \lim_{h \to 0} \frac{g(x + h) - g(x)}{h}</math>. So, if we know that <math>f^{\prime}(x)</math> and <math>g^{\prime}(x)</math> both exist, we know that <math display="inline">\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \to 0} \frac{g(x + h) - g(x)}{h}</math> both individually exist. This allows us to use the coefficient law for limits to write
<math display="block"> \lim_{h \to 0} a\frac{f(x + h) - f(x)}{h} = a\lim_{h \to 0}\frac{f(x + h) - f(x)}{h} </math>
and
<math display="block"> \lim_{h \to 0} b\frac{g(x + h) - g(x)}{h} = b\lim_{h \to 0}\frac{g(x + h) - g(x)}{h}. </math>
With this, we can go back and apply the limit law for the sum of limits, since we now know that <math display="inline">\lim_{h \rightarrow 0} a\frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \rightarrow 0} b\frac{g(x + h) - g(x)}{h}</math> both individually exist. From here, we can directly return to the derivative we were working on:
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \left( a\frac{f(x + h) - f(x)}{h} + b\frac{g(x + h) - g(x)}{h} \right) \\ &= \lim_{h \rightarrow 0} \left( a\frac{f(x + h) - f(x)}{h}\right) + \lim_{h \rightarrow 0} \left(b\frac{g(x + h) - g(x)}{h} \right) \\ &= a\lim_{h \rightarrow 0} \left( \frac{f(x + h) - f(x)}{h}\right) + b\lim_{h \rightarrow 0} \left(\frac{g(x + h) - g(x)}{h} \right) \\ &= af^{\prime}(x) + bg^{\prime}(x). \end{align}</math>
Finally, we have shown what we claimed in the beginning: <math>j^{\prime}(x) = af^{\prime}(x) + bg^{\prime}(x)</math>.
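As a concrete illustration of this argument, take <math>j(x) = 2x^2 + 3x</math>, that is, <math>a = 2</math>, <math>f(x) = x^2</math>, <math>b = 3</math>, and <math>g(x) = x</math>. The difference quotient then splits exactly as in the proof:
<math display="block">\frac{j(x + h) - j(x)}{h} = 2 \cdot \frac{(x + h)^2 - x^2}{h} + 3 \cdot \frac{(x + h) - x}{h} = 2(2x + h) + 3,</math>
which tends to <math>2 \cdot 2x + 3 \cdot 1 = 4x + 3 = af^{\prime}(x) + bg^{\prime}(x)</math> as <math>h \rightarrow 0</math>.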
===Sum===
Let <math>f, g</math> be functions. Let <math>j</math> be a function, where <math>j</math> is defined only where <math>f</math> and <math>g</math> are both defined. (In other words, the domain of <math>j</math> is the intersection of the domains of <math>f</math> and <math>g</math>.) Let <math>x</math> be in the domain of <math>j</math>. Let <math>j(x) = f(x) + g(x)</math>.

We want to prove that <math>j^{\prime}(x) = f^{\prime}(x) + g^{\prime}(x)</math>.

By definition, we can see that
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \frac{j(x + h) - j(x)}{h} \\ &= \lim_{h \rightarrow 0} \frac{\left( f(x + h) + g(x + h) \right) - \left( f(x) + g(x) \right)}{h} \\ &= \lim_{h \rightarrow 0} \left( \frac{f(x + h) - f(x)}{h} + \frac{g(x + h) - g(x)}{h} \right). \end{align}</math>
In order to use the law for the sum of limits here, we need to show that the individual limits <math display="inline">\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h}</math> both exist. By definition, <math display="inline">f^{\prime}(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">g^{\prime}(x) = \lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h}</math>, so these limits exist whenever the derivatives <math>f^{\prime}(x)</math> and <math>g^{\prime}(x)</math> exist.

So, assuming that the derivatives exist, we can continue the above derivation:
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \left( \frac{f(x + h) - f(x)}{h} + \frac{g(x + h) - g(x)}{h} \right) \\ &= \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} + \lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h} \\ &= f^{\prime}(x) + g^{\prime}(x). \end{align}</math>
Thus, we have shown what we wanted to show: <math>j^{\prime}(x) = f^{\prime}(x) + g^{\prime}(x)</math>.

===Difference===
Let <math>f, g</math> be functions. Let <math>j</math> be a function, where <math>j</math> is defined only where <math>f</math> and <math>g</math> are both defined. (In other words, the domain of <math>j</math> is the intersection of the domains of <math>f</math> and <math>g</math>.) Let <math>x</math> be in the domain of <math>j</math>. Let <math>j(x) = f(x) - g(x)</math>.

We want to prove that <math>j^{\prime}(x) = f^{\prime}(x) - g^{\prime}(x)</math>.

By definition, we can see that
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \frac{j(x + h) - j(x)}{h} \\ &= \lim_{h \rightarrow 0} \frac{\left( f(x + h) - g(x + h) \right) - \left( f(x) - g(x) \right)}{h} \\ &= \lim_{h \rightarrow 0} \left( \frac{f(x + h) - f(x)}{h} - \frac{g(x + h) - g(x)}{h} \right). \end{align}</math>
In order to use the law for the difference of limits here, we need to show that the individual limits <math display="inline">\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">\lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h}</math> both exist. By definition, <math display="inline">f^{\prime}(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> and <math display="inline">g^{\prime}(x) = \lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h}</math>, so these limits exist whenever the derivatives <math>f^{\prime}(x)</math> and <math>g^{\prime}(x)</math> exist.

So, assuming that the derivatives exist, we can continue the above derivation:
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \left( \frac{f(x + h) - f(x)}{h} - \frac{g(x + h) - g(x)}{h} \right) \\ &= \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} - \lim_{h \rightarrow 0} \frac{g(x + h) - g(x)}{h} \\ &= f^{\prime}(x) - g^{\prime}(x). \end{align}</math>
Thus, we have shown what we wanted to show: <math>j^{\prime}(x) = f^{\prime}(x) - g^{\prime}(x)</math>.
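For example, applying the sum and difference rules to <math>f(x) = x^3</math> and <math>g(x) = x^2</math>, whose derivatives are <math>f^{\prime}(x) = 3x^2</math> and <math>g^{\prime}(x) = 2x</math>, gives
:<math>(x^3 + x^2)' = 3x^2 + 2x \qquad\text{and}\qquad (x^3 - x^2)' = 3x^2 - 2x.</math>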
===Constant coefficient===
Let <math>f</math> be a function. Let <math>a \in \mathbb{R}</math>; <math>a</math> will be the constant coefficient. Let <math>j</math> be a function, where <math>j</math> is defined only where <math>f</math> is defined. (In other words, the domain of <math>j</math> is equal to the domain of <math>f</math>.) Let <math>x</math> be in the domain of <math>j</math>. Let <math>j(x) = af(x)</math>.

We want to prove that <math>j^{\prime}(x) = af^{\prime}(x)</math>.

By definition, we can see that
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} \frac{j(x + h) - j(x)}{h} \\ &= \lim_{h \rightarrow 0} \frac{af(x + h) - af(x)}{h} \\ &= \lim_{h \rightarrow 0} a\frac{f(x + h) - f(x)}{h}. \end{align}</math>
Now, in order to use the limit law for constant coefficients to show that
<math display="block"> \lim_{h \rightarrow 0} a\frac{f(x + h) - f(x)}{h} = a\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}, </math>
we need to show that <math display="inline">\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> exists.

However, <math display="inline">f^{\prime}(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> by the definition of the derivative. So, if <math>f^{\prime}(x)</math> exists, then <math display="inline">\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}</math> exists.

Thus, if we assume that <math>f^{\prime}(x)</math> exists, we can use the limit law and continue our proof:
<math display="block">\begin{align} j^{\prime}(x) &= \lim_{h \rightarrow 0} a\frac{f(x + h) - f(x)}{h} \\ &= a\lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} \\ &= af^{\prime}(x). \end{align}</math>
Thus, we have proven that when <math>j(x) = af(x)</math>, we have <math>j^{\prime}(x) = af^{\prime}(x)</math>.

==See also==
* {{annotated link|Differentiation of integrals}}
* {{annotated link|Differentiation of trigonometric functions}}
* {{annotated link|Differentiation rules}}
* {{annotated link|Distribution (mathematics)}}
* {{annotated link|General Leibniz rule}}
* {{annotated link|Integration by parts}}
* {{annotated link|Inverse functions and differentiation}}
* {{annotated link|Product rule}}
* {{annotated link|Quotient rule}}
* {{annotated link|Table of derivatives}}
* {{annotated link|Vector calculus identities}}

==References==
{{reflist}}

{{Calculus topics}}

[[Category:Articles containing proofs]]
[[Category:Differential calculus]]
[[Category:Differentiation rules]]
[[Category:Theorems in mathematical analysis]]
[[Category:Theorems in calculus]]