== Principles ==

=== Limits and infinitesimals ===
{{Main|Limit of a function|Infinitesimal}}
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by [[infinitesimal]]s. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive [[real number]]. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols <math>dx</math> and <math>dy</math> were taken to be infinitesimal, and the derivative <math>dy/dx</math> was their ratio.<ref name="Bell-SEP" />

The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the [[epsilon, delta]] approach to [[Limit of a function|limits]]. Limits describe the behavior of a [[function (mathematics)|function]] at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the [[real number|real number system]] (as a [[metric space]] with the [[least-upper-bound property]]). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century.
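The idea that a function's small-scale behavior is found by following a sequence of shrinking inputs can be sketched numerically. This is an illustrative example (not from the article): the function, the sequence 1, 1/2, 1/3, ..., and the helper name are all choices made here.

```python
import math

# Probe sin(x)/x along the shrinking sequence 1, 1/2, 1/3, ..., 1/n.
# The sampled values settle toward 1, the limit of sin(x)/x as x -> 0.
def values_along_sequence(f, n):
    """Evaluate f at 1, 1/2, 1/3, ..., 1/n and return the list of values."""
    return [f(1 / k) for k in range(1, n + 1)]

vals = values_along_sequence(lambda x: math.sin(x) / x, 1000)
# The later terms of vals are much closer to 1 than the earlier ones.
```

No single sample proves the limit exists; the epsilon-delta definition mentioned above is what makes "the values settle toward 1" precise.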
However, the infinitesimal concept was revived in the 20th century with the introduction of [[non-standard analysis]] and [[smooth infinitesimal analysis]], which provided solid foundations for the manipulation of infinitesimals.<ref name="Bell-SEP"/>

=== Differential calculus ===
{{Main|Differential calculus}}
[[File:Tangent line to a curve.svg|thumb|upright=1.35|Tangent line at {{math|(''x''<sub>0</sub>, ''f''(''x''<sub>0</sub>))}}. The derivative {{math|''f′''(''x'')}} of a curve at a point is the slope (rise over run) of the line tangent to that curve at that point.]]
Differential calculus is the study of the definition, properties, and applications of the [[derivative]] of a function. The process of finding the derivative is called ''differentiation''. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the ''derivative function'' or just the ''derivative'' of the original function. In formal terms, the derivative is a [[linear operator]] which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function.
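The function-in, function-out character of the derivative can be sketched with a numerical approximation. This is an illustrative sketch, not a definition: the central-difference formula and step size `h` are choices of this example.

```python
# A numerical stand-in for "the derivative takes a function as input and
# produces another function as output": given f, return a new function
# whose value at x approximates f'(x) by a central difference.
def derivative(f, h=1e-6):
    """Return a function approximating the derivative of f."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

squaring = lambda x: x ** 2
doubling = derivative(squaring)  # behaves like x -> 2x

# doubling(2), doubling(3), doubling(4) are close to 4, 6, 8,
# matching the squaring/doubling relationship described above.
```

The true derivative operator acts exactly, via the limit described below; this sketch only approximates it for one step size `h`.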
The function produced by differentiating the squaring function turns out to be the doubling function.<ref name="TMU">{{Cite book |last1=Frautschi |first1=Steven C. |title=The Mechanical Universe: Mechanics and Heat |title-link=The Mechanical Universe |last2=Olenick |first2=Richard P. |last3=Apostol |first3=Tom M. |last4=Goodstein |first4=David L. |date=2007 |publisher=Cambridge University Press |isbn=978-0-521-71590-4 |edition=Advanced |location=Cambridge [Cambridgeshire] |oclc=227002144 |author-link=Steven Frautschi |author-link3=Tom M. Apostol |author-link4=David L. Goodstein}}</ref>{{Rp|32}} In more explicit terms, the "doubling function" may be denoted by {{math|''g''(''x'') {{=}} 2''x''}} and the "squaring function" by {{math|''f''(''x'') {{=}} ''x''<sup>2</sup>}}. The "derivative" now takes the function {{math|''f''(''x'')}}, defined by the expression "{{math|''x''<sup>2</sup>}}", as an input; that is, it takes all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, which turns out to be the function {{math|''g''(''x'') {{=}} 2''x''}}. In [[Lagrange's notation]], the symbol for a derivative is an [[apostrophe]]-like mark called a [[prime (symbol)|prime]]. Thus, the derivative of a function called {{math|''f''}} is denoted by {{math|''f′''}}, pronounced "f prime" or "f dash". For instance, if {{math|''f''(''x'') {{=}} ''x''<sup>2</sup>}} is the squaring function, then {{math|''f′''(''x'') {{=}} 2''x''}} is its derivative (the doubling function {{math|''g''}} from above). If the input of the function represents time, then the derivative represents change with respect to time.
For example, if {{math|''f''}} is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of {{math|''f''}} is how the position is changing in time, that is, it is the [[velocity]] of the ball.<ref name="TMU"/>{{Rp|18–20}} If a function is [[linear function|linear]] (that is, if the [[Graph of a function|graph]] of the function is a straight line), then the function can be written as {{math|''y'' {{=}} ''mx'' + ''b''}}, where {{math|''x''}} is the independent variable, {{math|''y''}} is the dependent variable, {{math|''b''}} is the ''y''-intercept, and:
:<math>m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.</math>
This gives an exact value for the slope of a straight line.<ref name=":4">{{Cite book |last1=Salas |first1=Saturnino L. |title=Calculus; one and several variables |last2=Hille |first2=Einar |date=1971 |publisher=Xerox College Pub. |location=Waltham, MA |oclc=135567}}</ref>{{Rp|page=6}} If the graph of the function is not a straight line, however, then the change in {{math|''y''}} divided by the change in {{math|''x''}} varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let {{math|''f''}} be a function, and fix a point {{math|''a''}} in the domain of {{math|''f''}}. {{math|(''a'', ''f''(''a''))}} is a point on the graph of the function. If {{math|''h''}} is a number close to zero, then {{math|''a'' + ''h''}} is a number close to {{math|''a''}}. Therefore, {{math|(''a'' + ''h'', ''f''(''a'' + ''h''))}} is close to {{math|(''a'', ''f''(''a''))}}. The slope between these two points is
:<math>m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.</math>
This expression is called a ''[[difference quotient]]''.
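The difference quotient above can be evaluated directly for the squaring function at {{math|''a'' {{=}} 3}}; this sketch (with illustrative names) shows the quotient approaching a single value as {{math|''h''}} shrinks.

```python
# Difference quotient m = (f(a+h) - f(a)) / h: the slope of the secant
# line through (a, f(a)) and (a+h, f(a+h)).
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
slopes = [difference_quotient(f, 3, h) for h in (1, 0.1, 0.01, 0.001)]
# For f(x) = x^2 the quotient simplifies algebraically to 6 + h, so the
# slopes 7.0, 6.1, 6.01, 6.001 approach 6 as h shrinks toward zero.
```

Setting `h = 0` directly would divide by zero, which is exactly why the limit process described next is needed.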
A line through two points on a curve is called a ''secant line'', so {{math|''m''}} is the slope of the secant line between {{math|(''a'', ''f''(''a''))}} and {{math|(''a'' + ''h'', ''f''(''a'' + ''h''))}}. The secant line is only an approximation to the behavior of the function at the point {{math|''a''}} because it does not account for what happens between {{math|''a''}} and {{math|''a'' + ''h''}}. It is not possible to discover the behavior at {{math|''a''}} by setting {{math|''h''}} to zero because this would require [[dividing by zero]], which is undefined. The derivative is defined by taking the [[limit (mathematics)|limit]] as {{math|''h''}} tends to zero, meaning that it considers the behavior of {{math|''f''}} for all small values of {{math|''h''}} and extracts a consistent value for the case when {{math|''h''}} equals zero:
:<math>\lim_{h \to 0}{f(a+h) - f(a)\over{h}}.</math>
Geometrically, the derivative is the slope of the [[tangent line]] to the graph of {{math|''f''}} at {{math|''a''}}. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function {{math|''f''}}.<ref name=":4" />{{Rp|pages=61–63}} Here is a particular example, the derivative of the squaring function at the input 3. Let {{math|''f''(''x'') {{=}} ''x''<sup>2</sup>}} be the squaring function.
[[File:Sec2tan.gif|thumb|upright=1.35|The derivative {{math|''f′''(''x'')}} of a curve at a point is the slope of the line tangent to that curve at that point. This slope is determined by considering the limiting value of the slopes of the secant lines. Here the function involved (drawn in red) is {{math|''f''(''x'') {{=}} ''x''<sup>3</sup> − ''x''}}. The tangent line (in green) which passes through the point {{nowrap|(−3/2, −15/8)}} has a slope of 23/4.
The vertical and horizontal scales in this image are different.]]
:<math>\begin{align}f'(3) &=\lim_{h \to 0}{(3+h)^2 - 3^2\over{h}} \\ &=\lim_{h \to 0}{9 + 6h + h^2 - 9\over{h}} \\ &=\lim_{h \to 0}{6h + h^2\over{h}} \\ &=\lim_{h \to 0} (6 + h) \\ &= 6 \end{align} </math>
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the ''derivative function'' of the squaring function or just the ''derivative'' of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.<ref name=":4" />{{Rp|page=63}}

=== Leibniz notation ===
{{Main|Leibniz's notation}}
A common notation, introduced by Leibniz, for the derivative in the example above is
:<math> \begin{align} y&=x^2 \\ \frac{dy}{dx}&=2x. \end{align} </math>
In an approach based on limits, the symbol {{math|{{sfrac|''dy''|''dx''}}}} is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above.<ref name=":4" />{{Rp|page=74}} Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, {{math|''dy''}} being the infinitesimally small change in {{math|''y''}} caused by an infinitesimally small change {{math|''dx''}} applied to {{math|''x''}}. We can also think of {{math|{{sfrac|''d''|''dx''}}}} as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
:<math> \frac{d}{dx}(x^2)=2x.
</math>
In this usage, the {{math|''dx''}} in the denominator is read as "with respect to {{math|''x''}}".<ref name=":4" />{{Rp|page=79}} Another example of correct notation could be:
:<math>\begin{align} g(t) &= t^2 + 2t + 4 \\ {d \over dt}g(t) &= 2t + 2 \end{align} </math>
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like {{math|''dx''}} and {{math|''dy''}} as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the [[total derivative]].

=== Integral calculus ===
{{Main|Integral}}
{{multiple image| total_width = 300px | direction = vertical | image1 = Integral as region under curve.svg | caption1 = Integration can be thought of as measuring the area under a curve, defined by {{math|''f''(''x'')}}, between two points (here {{math|''a''}} and {{math|''b''}}). | image2 = Riemann integral regular.gif | caption2 = A sequence of midpoint Riemann sums over a regular partition of an interval: the total area of the rectangles converges to the integral of the function. }}
''Integral calculus'' is the study of the definitions, properties, and applications of two related concepts, the ''indefinite integral'' and the ''definite integral''.
The process of finding the value of an integral is called ''integration''.<ref name=":5">{{Cite book |last1=Herman |first1=Edwin |url=https://openstax.org/details/books/calculus-volume-1 |title=Calculus |volume=1 |last2=Strang |first2=Gilbert |date=2017 |publisher=OpenStax |isbn=978-1-938168-02-4 |location=Houston, Texas |oclc=1022848630 |display-authors=etal |author-link2=Gilbert Strang |access-date=26 July 2022 |archive-date=23 September 2022 |archive-url=https://web.archive.org/web/20220923230919/https://openstax.org/details/books/calculus-volume-1 |url-status=live }}</ref>{{Rp|page=508}} The indefinite integral, also known as the ''[[antiderivative]]'', is the inverse operation to the derivative.<ref name=":4" />{{Rp|pages=163–165}} {{math|''F''}} is an indefinite integral of {{math|''f''}} when {{math|''f''}} is a derivative of {{math|''F''}}. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the [[x-axis]]. The technical definition of the definite integral involves the [[limit (mathematics)|limit]] of a sum of areas of rectangles, called a [[Riemann sum]].<ref name=":2">{{Cite book |last1=Hughes-Hallett |first1=Deborah |title=Calculus: Single and Multivariable |last2=McCallum |first2=William G. |last3=Gleason |first3=Andrew M. |last4=Connally |first4=Eric |date=2013 |publisher=Wiley |isbn=978-0-470-88861-2 |edition=6th |location=Hoboken, NJ |oclc=794034942 |display-authors=3 |author-link=Deborah Hughes Hallett |author-link2=William G. McCallum|author-link3=Andrew M. 
Gleason}}</ref>{{Rp|page=282}} A motivating example is the distance traveled in a given time.<ref name=":4" />{{Rp|page=153}} If the speed is constant, only multiplication is needed:
:<math>\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}</math>
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a [[Riemann sum]]) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.

When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.<ref name=":5"/>{{rp|535}} This connection between the area under a curve and the distance traveled can be extended to ''any'' irregularly shaped region exhibiting a fluctuating velocity over a given period. If {{math|''f''(''x'')}} represents speed as it varies over time, the distance traveled between the times represented by {{math|''a''}} and {{math|''b''}} is the area of the region between {{math|''f''(''x'')}} and the {{math|''x''}}-axis, between {{math|''x'' {{=}} ''a''}} and {{math|''x'' {{=}} ''b''}}.
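The interval-by-interval distance estimate described above can be sketched directly. This is an illustrative example: the speed function {{math|3''t''<sup>2</sup>}}, the use of left-endpoint speeds, and the helper names are all choices made here, not taken from the article.

```python
# Approximate distance traveled by splitting [t0, t1] into n short
# intervals, multiplying each interval's elapsed time dt by one speed
# sampled in that interval (the left endpoint), and summing.
def distance_riemann(speed, t0, t1, n):
    """Riemann-sum estimate of distance traveled over [t0, t1]."""
    dt = (t1 - t0) / n
    return sum(speed(t0 + i * dt) * dt for i in range(n))

speed = lambda t: 3 * t ** 2       # an illustrative varying speed
approx = distance_riemann(speed, 0.0, 3.0, 100000)
# approx is close to 27, the exact distance (the area under 3t^2 on [0, 3]).
```

With a constant speed the same sum collapses to a single Speed × Time product, matching the rectangle picture above.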
To approximate that area, an intuitive method would be to divide up the distance between {{math|''a''}} and {{math|''b''}} into several equal segments, the length of each segment represented by the symbol {{math|Δ''x''}}. For each small segment, we can choose one value of the function {{math|''f''(''x'')}}. Call that value {{math|''h''}}. Then the area of the rectangle with base {{math|Δ''x''}} and height {{math|''h''}} gives the distance (time {{math|Δ''x''}} multiplied by speed {{math|''h''}}) traveled in that segment. Associated with each segment is the average value of the function above it, {{math|''f''(''x'') {{=}} ''h''}}. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for {{math|Δ''x''}} will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as {{math|Δ''x''}} approaches zero.<ref name=":5"/>{{rp|512–522}}

The symbol of integration is <math>\int </math>, an [[long s|elongated ''S'']] chosen to suggest summation.<ref name=":5" />{{Rp|page=529}} The definite integral is written as:
:<math>\int_a^b f(x)\, dx</math>
and is read "the integral from ''a'' to ''b'' of ''f''-of-''x'' with respect to ''x''."
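The claim that a smaller {{math|Δ''x''}} usually gives a better approximation can be checked numerically. This sketch uses an illustrative function and interval; for smooth functions the left-endpoint rectangle sum's error shrinks roughly in proportion to {{math|Δ''x''}}.

```python
# Approximate the area under f over [a, b] with n rectangles of equal
# width dx, each using the function value at the segment's left edge.
def rectangle_area(f, a, b, n):
    """Left-endpoint rectangle-sum approximation of the area under f."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

f = lambda x: x ** 2
# Exact area under x^2 over [0, 1] is 1/3; compare errors as dx halves.
errors = [abs(rectangle_area(f, 0.0, 1.0, n) - 1 / 3) for n in (10, 20, 40)]
# Each halving of dx roughly halves the error, so the sums converge to
# the exact area in the limit as dx approaches zero.
```

No finite number of rectangles gives the exact area here; the limit as {{math|Δ''x''}} approaches zero is what defines the definite integral.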
The Leibniz notation {{math|''dx''}} is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width {{math|Δ''x''}} becomes the infinitesimally small {{math|''dx''}}.<ref name="TMU"/>{{Rp|44}} The indefinite integral, or antiderivative, is written:
:<math>\int f(x)\, dx.</math>
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.<ref name=":2" />{{Rp|page=326}} Since the derivative of the function {{math|''y'' {{=}} ''x''<sup>2</sup> + ''C''}}, where {{math|''C''}} is any constant, is {{math|''y′'' {{=}} 2''x''}}, the antiderivative of the latter is given by:
:<math>\int 2x\, dx = x^2 + C.</math>
The unspecified constant {{math|''C''}} present in the indefinite integral or antiderivative is known as the [[constant of integration]].<ref>{{cite book|first1=William |last1=Moebs |first2=Samuel J. |last2=Ling |first3=Jeff |last3=Sanny |display-authors=etal |title=University Physics, Volume 1 |publisher=OpenStax |year=2022 |isbn=978-1-947172-20-3 |oclc=961352944}}</ref>{{rp|135}}

=== Fundamental theorem ===
{{Main|Fundamental theorem of calculus}}
The [[fundamental theorem of calculus]] states that differentiation and integration are inverse operations.<ref name=":2" />{{Rp|page=290}} More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
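The point that every member of the family {{math|''x''<sup>2</sup> + ''C''}} has the same derivative {{math|2''x''}} can be sketched numerically; the central-difference formula and the sample values of {{math|''C''}} here are illustrative choices.

```python
# The constant of integration C vanishes under differentiation: the
# central-difference derivative of x^2 + C at x = 3 is close to 6 = 2*3
# for every C, so each member of the family x^2 + C is an antiderivative
# of 2x.
def numerical_derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

derivatives = [
    numerical_derivative(lambda x, C=C: x ** 2 + C, 3.0)
    for C in (0.0, 1.0, -5.0)
]
# Every entry is approximately 6, independent of C.
```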
The fundamental theorem of calculus states: If a function {{math|''f''}} is [[continuous function|continuous]] on the interval {{math|[''a'', ''b'']}} and if {{math|''F''}} is a function whose derivative is {{math|''f''}} on the interval {{math|(''a'', ''b'')}}, then :<math>\int_{a}^{b} f(x)\,dx = F(b) - F(a).</math> Furthermore, for every {{math|''x''}} in the interval {{math|(''a'', ''b'')}}, :<math>\frac{d}{dx}\int_a^x f(t)\, dt = f(x).</math> This realization, made by both [[Isaac Newton|Newton]] and [[Gottfried Leibniz|Leibniz]], was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of [[Isaac Barrow]], is difficult to determine because of the priority dispute between them.<ref>See, for example: * {{cite book|last=Mahoney |first=Michael S. |year=1990 |chapter=Barrow's mathematics: Between ancients and moderns |title=Before Newton |editor-first=M. |editor-last=Feingold |pages=179–249 |publisher=Cambridge University Press |isbn=978-0-521-06385-2}} * {{Cite journal |first=M. |last=Feingold |date=June 1993 |title=Newton, Leibniz, and Barrow Too: An Attempt at a Reinterpretation |journal=[[Isis (journal)|Isis]] |language=en |volume=84 |issue=2 |pages=310–338 |doi=10.1086/356464 |bibcode=1993Isis...84..310F |s2cid=144019197 |issn=0021-1753}} * {{cite book|first=Siegmund |last=Probst |chapter=Leibniz as Reader and Second Inventor: The Cases of Barrow and Mengoli |title=G.W. Leibniz, Interrelations Between Mathematics and Philosophy|editor-first1=Norma B. 
|editor-last1=Goethe |editor-first2=Philip |editor-last2=Beeley |editor-first3=David |editor-last3=Rabouin |publisher=Springer |isbn=978-9-401-79663-7 |pages=111–134 |year=2015 |series=Archimedes: New Studies in the History and Philosophy of Science and Technology |volume=41}}</ref>) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for [[antiderivative]]s. It is also a prototype solution of a [[differential equation]]. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.<ref>{{Cite book |last1=Herman |first1=Edwin |url=https://openstax.org/details/books/calculus-volume-2 |title=Calculus. Volume 2 |last2=Strang |first2=Gilbert |date=2017 |publisher=OpenStax |isbn=978-1-5066-9807-6 |location=Houston |oclc=1127050110 |display-authors=etal |access-date=26 July 2022 |archive-date=26 July 2022 |archive-url=https://web.archive.org/web/20220726140351/https://openstax.org/details/books/calculus-volume-2 |url-status=live }}</ref>{{Rp|pages=351–352}}
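The first part of the theorem can be checked numerically in a small sketch. The choices here (the function {{math|''f''(''x'') {{=}} 2''x''}}, its antiderivative {{math|''F''(''x'') {{=}} ''x''<sup>2</sup>}}, the interval, and the midpoint rule) are illustrative, not prescribed by the article.

```python
# Check the fundamental theorem numerically: a fine Riemann sum for the
# definite integral of f over [a, b] should match F(b) - F(a), where F
# is an antiderivative of f.
def riemann_sum(f, a, b, n=100000):
    """Midpoint-rule Riemann sum for the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda x: 2 * x      # integrand
F = lambda x: x ** 2     # an antiderivative of f

lhs = riemann_sum(f, 1.0, 4.0)   # limit-process side (approximated)
rhs = F(4.0) - F(1.0)            # antiderivative side: 16 - 1 = 15
# lhs and rhs agree closely, as the theorem predicts.
```

This is exactly the practical payoff noted above: evaluating {{math|''F''(''b'') − ''F''(''a'')}} replaces the limit process entirely.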