Linear approximation
==Definition==
Given a twice continuously differentiable function <math>f</math> of one [[real number|real]] variable, [[Taylor's theorem]] for the case <math>n = 1</math> states that
<math display="block"> f(x) = f(a) + f'(a)(x - a) + R_2, </math>
where <math>R_2</math> is the remainder term. The linear approximation is obtained by dropping the remainder:
<math display="block"> f(x) \approx f(a) + f'(a)(x - a).</math>
This is a good approximation when <math>x</math> is close enough to {{nowrap|<math>a</math>,}} since a curve, when closely observed, begins to resemble a straight line. The expression on the right-hand side is the equation of the [[tangent line]] to the graph of <math>f</math> at <math>(a,f(a))</math>; for this reason, this process is also called the '''tangent line approximation'''. The approximation improves further when the [[second derivative]] of <math>f</math> at <math>a</math>, <math>f''(a)</math>, is sufficiently small (close to zero), i.e., at or near an [[inflection point]]. If <math>f</math> is [[concave down]] in the interval between <math>x</math> and <math>a</math>, the approximation is an overestimate (since the derivative is decreasing in that interval); if <math>f</math> is [[concave up]], it is an underestimate.<ref>{{cite web |title=12.1 Estimating a Function Value Using the Linear Approximation |url=http://math.mit.edu/classes/18.013A/HTML/chapter12/section01.html |access-date=3 June 2012 |archive-date=3 March 2013 |archive-url=https://web.archive.org/web/20130303014028/http://math.mit.edu/classes/18.013A/HTML/chapter12/section01.html |url-status=dead }}</ref>

Linear approximations for [[vector (geometric)|vector]] functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the [[Jacobian matrix and determinant|Jacobian]] matrix.
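The one-variable formula above can be sketched numerically; a minimal illustration, with the choice of <math>f(x) = \sqrt{x}</math> and <math>a = 4</math> assumed here purely for the example:

```python
import math

def linear_approx(f, df, a, x):
    """Tangent line approximation of f at a, evaluated at x:
    f(x) ~ f(a) + f'(a) * (x - a)."""
    return f(a) + df(a) * (x - a)

# Illustrative choice: f(x) = sqrt(x) near a = 4, so f(4) = 2 and f'(4) = 0.25.
f = math.sqrt
df = lambda t: 1.0 / (2.0 * math.sqrt(t))  # derivative of sqrt

approx = linear_approx(f, df, a=4.0, x=4.1)   # 2 + 0.25 * 0.1 = 2.025
exact = math.sqrt(4.1)                        # about 2.02485
```

Since <math>\sqrt{x}</math> is concave down, the tangent line overestimates the true value (2.025 versus about 2.0248), consistent with the concavity rule stated above.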
For example, given a differentiable function <math>f(x, y)</math> with real values, one can approximate <math>f(x, y)</math> for <math>(x, y)</math> close to <math>(a, b)</math> by the formula
<math display="block">f\left(x,y\right)\approx f\left(a,b\right) + \frac{\partial f}{\partial x} \left(a,b\right)\left(x-a\right) + \frac{\partial f}{\partial y} \left(a,b\right)\left(y-b\right).</math>
The right-hand side is the equation of the plane tangent to the graph of <math>z=f(x, y)</math> at <math>(a, b)</math>.

In the more general case of [[Banach space]]s, one has
<math display="block"> f(x) \approx f(a) + Df(a)(x - a),</math>
where <math>Df(a)</math> is the [[Fréchet derivative]] of <math>f</math> at <math>a</math>.
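The tangent plane formula can likewise be sketched; the function <math>f(x, y) = x e^y</math> and base point <math>(a, b) = (1, 0)</math> are assumptions chosen only for this illustration:

```python
import math

def linear_approx_2d(f, fx, fy, a, b, x, y):
    """Tangent plane approximation of f at (a, b), evaluated at (x, y):
    f(x, y) ~ f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b)."""
    return f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)

# Illustrative choice: f(x, y) = x * exp(y) near (a, b) = (1, 0).
f  = lambda x, y: x * math.exp(y)
fx = lambda x, y: math.exp(y)        # partial derivative with respect to x
fy = lambda x, y: x * math.exp(y)    # partial derivative with respect to y

approx = linear_approx_2d(f, fx, fy, 1.0, 0.0, 1.1, -0.1)  # 1 + 0.1 - 0.1 = 1.0
exact = f(1.1, -0.1)                                       # about 0.9953
```

Here both partial derivatives equal 1 at the base point, so the two first-order corrections cancel and the approximation is exactly 1, within about 0.005 of the true value.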