Bisection method
== The method ==
The method is applicable for numerically solving the equation <math>f(x)=0</math> for the [[Real number|real]] variable <math>x</math>, where <math>f</math> is a [[continuous function]] defined on an interval <math>[a,b]</math> and where <math>f(a)</math> and <math>f(b)</math> have opposite signs. In this case <math>a</math> and <math>b</math> are said to bracket a root since, by the [[intermediate value theorem]], the continuous function <math>f</math> must have at least one root in the interval <math>(a,b)</math>.

At each step the method divides the interval in two halves by computing the midpoint <math>c = (a+b)/2</math> of the interval and the value of the function <math>f(c)</math> at that point. If <math>c</math> itself is a root then the process has succeeded and stops. Otherwise, there are now only two possibilities: either <math>f(a)</math> and <math>f(c)</math> have opposite signs and bracket a root, or <math>f(c)</math> and <math>f(b)</math> have opposite signs and bracket a root.<ref>If the function has the same sign at the endpoints of an interval, the endpoints may or may not bracket roots of the function.</ref> The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of <math>f</math> is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.

Explicitly, if <math>f(c) = 0</math> then <math>c</math> may be taken as the solution and the process stops. Otherwise, if <math>f(a)</math> and <math>f(c)</math> have opposite signs, then the method sets <math>c</math> as the new value for <math>b</math>, and if <math>f(b)</math> and <math>f(c)</math> have opposite signs then the method sets <math>c</math> as the new <math>a</math>.
In both cases, the new <math>f(a)</math> and <math>f(b)</math> have opposite signs, so the method is applicable to this smaller interval.<ref>{{Harvnb|Burden|Faires|2014|p=28}} for section</ref>

=== Stopping condition ===
The input for the method is a continuous function <math>f</math>, an interval <math>[a,b]</math>, and the function values <math>f(a)</math> and <math>f(b)</math>. The function values are of opposite sign, so there is at least one zero crossing within the interval. Each iteration performs these steps:
# Calculate <math>c</math>, the midpoint of the interval:
#:<math>c = \begin{cases} \tfrac{a+b}{2}, & \text{if } a\times b \leq 0 \\ a+\tfrac{b-a}{2}, & \text{if } a\times b > 0 \end{cases}</math>
# Calculate the function value at the midpoint, <math>f(c)</math>.
# If convergence is satisfactory (see below), return <math>c</math> and stop iterating.
# Examine the sign of <math>f(c)</math> and replace either <math>(a, f(a))</math> or <math>(b, f(b))</math> with <math>(c, f(c))</math> so that there is a zero crossing within the new interval.

(The second case of the midpoint formula avoids the rounding and overflow problems that <math>(a+b)/2</math> can suffer when <math>a</math> and <math>b</math> have the same sign and large magnitude.)

In order to determine when the iteration should stop, it is necessary to consider what is meant by the 'tolerance' <math>\epsilon</math>. Burden & Faires<ref>{{Harvnb|Burden|Faires|2014|p=50}}</ref> state:
<blockquote>"we can select a tolerance <math>\epsilon > 0</math> and generate c<sub>1</sub>, ..., c<sub>N</sub> until one of the following conditions is met:
{|
|-
! scope=col style="width: 250px;" |
! scope=col style="width: 100px;" |
! scope=col style="width: 125px;" |
|- style="text-align: right;"
| <math>|c_N-c_{N-1}|<\epsilon,</math> || || (2.1)
|- style="text-align: right;"
| <math>\left|\frac{c_N-c_{N-1}}{c_N}\right|<\epsilon,</math> || <math>c_N\ne 0,</math> or || (2.2)
|- style="text-align: right;"
| <math>|f(c_N)|<\epsilon.</math> || || (2.3)
|}
Unfortunately, difficulties can arise using any of these stopping criteria ...
Without additional knowledge about <math>f</math> or <math>c</math>, inequality (2.2) is the best stopping criterion to apply because it comes closest to testing relative error." (Note: <math>c</math> is used here in place of Burden and Faires's <math>p</math>, as it is the more common notation.)</blockquote>

The objective is to find an approximation to the root within the tolerance. It can be seen that (2.3), <math>|f(c_N)|<\epsilon</math>, does not give such an approximation unless the slope of the function at <math>c_N</math> is in the neighborhood of <math>\pm 1</math>. Suppose, for the purpose of illustration, that the tolerance is <math>\epsilon = 5\times10^{-7}</math>. Then, for a function such as <math>f(x)=10^{-m}(x - 1)</math>,
:<math>|f(c)| = 10^{-m}|c - 1| < 5\times10^{-7},</math>
so
:<math>|c - 1| < 5\times10^{m-7}.</math>
This means that any number <math>c</math> in <math>[1-5\times10^{m-7},\, 1+ 5\times 10^{m-7}]</math> would be accepted as a 'good' approximation to the root. If <math>m = 10</math>, the approximation to the root 1 could lie anywhere in <math>[1-5000,\, 1+ 5000] = [-4999, 5001]</math>, a very poor result.

As (2.3) does not give acceptable results, (2.1) and (2.2) need to be evaluated. The following Python script compares the behavior of those two stopping conditions.
<pre>
import numpy as np

def bisect(f, a, b, tolerance):
    fa = f(a)
    fb = f(b)
    i = 0
    stop_a = []  # set to [c, i] when the absolute test b - a <= tolerance first holds
    stop_r = []  # set to [c, i] when the relative test b - a <= |c| * tolerance first holds
    while True:
        i += 1
        c = a + (b - a) / 2
        fc = f(c)
        if c < 10:  # small-root example: print with full precision
            if not stop_a:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                print('{:3d} {:18.16f} {:18.16f} {:18.16e} | ----- {:5.2e}'
                      .format(i, a, b, c, b - a))
        else:  # large-root example
            if not stop_r:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} {:5.2e}'
                      .format(i, a, b, c, b - a, (b - a) / c))
            else:
                print('{:3d} {:18.7f} {:18.7f} {:18.7e} | {:5.2e} ----- '
                      .format(i, a, b, c, b - a))
        if fc == 0:
            return [c, i]
        if b - a <= abs(c) * tolerance and not stop_r:
            stop_r = [c, i]
        if b - a <= tolerance and not stop_a:
            stop_a = [c, i]
        if np.sign(fa) == np.sign(fc):
            a = c
            fa = fc
        else:
            b = c
            fb = fc
        if stop_r and stop_a:
            return [stop_a, stop_r]
</pre>

The first function to be tested is one with a small root, i.e. <math>f(x) = x - 0.00000000123456789</math>:

<pre>
print('  i          a                  b                  c                     b - a    (b - a)/c')
f = lambda x: x - 0.00000000123456789
res = bisect(f, 0, 1, 5e-7)
print('In {:2d} steps the absolute error case gives {:20.18F}'.format(res[0][1], res[0][0]))
print('In {:2d} steps the relative error case gives {:20.18F}'.format(res[1][1], res[1][0]))
print('  as the approximation to 0.00000000123456789')
</pre>

<pre>
  i          a                  b                  c                     b - a    (b - a)/c
  1 0.0000000000000000 1.0000000000000000 5.0000000000000000e-01 | 1.00e+00 2.00e+00
  2 0.0000000000000000 0.5000000000000000 2.5000000000000000e-01 | 5.00e-01 2.00e+00
  3 0.0000000000000000 0.2500000000000000 1.2500000000000000e-01 | 2.50e-01 2.00e+00
  4 0.0000000000000000 0.1250000000000000 6.2500000000000000e-02 | 1.25e-01 2.00e+00
  5 0.0000000000000000 0.0625000000000000 3.1250000000000000e-02 | 6.25e-02 2.00e+00
  6 0.0000000000000000 0.0312500000000000 1.5625000000000000e-02 | 3.12e-02 2.00e+00
  7 0.0000000000000000 0.0156250000000000 7.8125000000000000e-03 | 1.56e-02 2.00e+00
  8 0.0000000000000000 0.0078125000000000 3.9062500000000000e-03 | 7.81e-03 2.00e+00
  9 0.0000000000000000 0.0039062500000000 1.9531250000000000e-03 | 3.91e-03 2.00e+00
 10 0.0000000000000000 0.0019531250000000 9.7656250000000000e-04 | 1.95e-03 2.00e+00
 11 0.0000000000000000 0.0009765625000000 4.8828125000000000e-04 | 9.77e-04 2.00e+00
 12 0.0000000000000000 0.0004882812500000 2.4414062500000000e-04 | 4.88e-04 2.00e+00
 13 0.0000000000000000 0.0002441406250000 1.2207031250000000e-04 | 2.44e-04 2.00e+00
 14 0.0000000000000000 0.0001220703125000 6.1035156250000000e-05 | 1.22e-04 2.00e+00
 15 0.0000000000000000 0.0000610351562500 3.0517578125000000e-05 | 6.10e-05 2.00e+00
 16 0.0000000000000000 0.0000305175781250 1.5258789062500000e-05 | 3.05e-05 2.00e+00
 17 0.0000000000000000 0.0000152587890625 7.6293945312500000e-06 | 1.53e-05 2.00e+00
 18 0.0000000000000000 0.0000076293945312 3.8146972656250000e-06 | 7.63e-06 2.00e+00
 19 0.0000000000000000 0.0000038146972656 1.9073486328125000e-06 | 3.81e-06 2.00e+00
 20 0.0000000000000000 0.0000019073486328 9.5367431640625000e-07 | 1.91e-06 2.00e+00
 21 0.0000000000000000 0.0000009536743164 4.7683715820312500e-07 | 9.54e-07 2.00e+00
 22 0.0000000000000000 0.0000004768371582 2.3841857910156250e-07 | 4.77e-07 2.00e+00
 23 0.0000000000000000 0.0000002384185791 1.1920928955078125e-07 | ----- 2.38e-07
 24 0.0000000000000000 0.0000001192092896 5.9604644775390625e-08 | ----- 1.19e-07
 25 0.0000000000000000 0.0000000596046448 2.9802322387695312e-08 | ----- 5.96e-08
 26 0.0000000000000000 0.0000000298023224 1.4901161193847656e-08 | ----- 2.98e-08
 27 0.0000000000000000 0.0000000149011612 7.4505805969238281e-09 | ----- 1.49e-08
 28 0.0000000000000000 0.0000000074505806 3.7252902984619141e-09 | ----- 7.45e-09
 29 0.0000000000000000 0.0000000037252903 1.8626451492309570e-09 | ----- 3.73e-09
 30 0.0000000000000000 0.0000000018626451 9.3132257461547852e-10 | ----- 1.86e-09
 31 0.0000000009313226 0.0000000018626451 1.3969838619232178e-09 | ----- 9.31e-10
 32 0.0000000009313226 0.0000000013969839 1.1641532182693481e-09 | ----- 4.66e-10
 33 0.0000000011641532 0.0000000013969839 1.2805685400962830e-09 | ----- 2.33e-10
 34 0.0000000011641532 0.0000000012805685 1.2223608791828156e-09 | ----- 1.16e-10
 35 0.0000000012223609 0.0000000012805685 1.2514647096395493e-09 | ----- 5.82e-11
 36 0.0000000012223609 0.0000000012514647 1.2369127944111824e-09 | ----- 2.91e-11
 37 0.0000000012223609 0.0000000012369128 1.2296368367969990e-09 | ----- 1.46e-11
 38 0.0000000012296368 0.0000000012369128 1.2332748156040907e-09 | ----- 7.28e-12
 39 0.0000000012332748 0.0000000012369128 1.2350938050076365e-09 | ----- 3.64e-12
 40 0.0000000012332748 0.0000000012350938 1.2341843103058636e-09 | ----- 1.82e-12
 41 0.0000000012341843 0.0000000012350938 1.2346390576567501e-09 | ----- 9.09e-13
 42 0.0000000012341843 0.0000000012346391 1.2344116839813069e-09 | ----- 4.55e-13
 43 0.0000000012344117 0.0000000012346391 1.2345253708190285e-09 | ----- 2.27e-13
 44 0.0000000012345254 0.0000000012346391 1.2345822142378893e-09 | ----- 1.14e-13
 45 0.0000000012345254 0.0000000012345822 1.2345537925284589e-09 | ----- 5.68e-14
 46 0.0000000012345538 0.0000000012345822 1.2345680033831741e-09 | ----- 2.84e-14
 47 0.0000000012345538 0.0000000012345680 1.2345608979558165e-09 | ----- 1.42e-14
 48 0.0000000012345609 0.0000000012345680 1.2345644506694953e-09 | ----- 7.11e-15
 49 0.0000000012345645 0.0000000012345680 1.2345662270263347e-09 | ----- 3.55e-15
 50 0.0000000012345662 0.0000000012345680 1.2345671152047544e-09 | ----- 1.78e-15
 51 0.0000000012345671 0.0000000012345680 1.2345675592939642e-09 | ----- 8.88e-16
 52 0.0000000012345676 0.0000000012345680 1.2345677813385691e-09 | ----- 4.44e-16
In 22 steps the absolute error case gives 0.000000238418579102
In 52 steps the relative error case gives 0.000000001234567781
  as the approximation to 0.00000000123456789
</pre>

The reason that the absolute difference method gives such a poor result is that it measures '''decimal places''' of accuracy, but those decimal places may
contain only zeros and so carry no useful information. Here, the six zeros after the decimal point in 0.000000238418579102 match the first six in 0.00000000123456789, so the absolute difference is less than <math>\epsilon = 5\times10^{-7}</math> even though the two numbers share no significant digits. On the other hand, the relative difference method measures '''significant digits''' and gives a much better approximation to the position of the root.

The next example has a large root, <math>f(x) = x - 1234567.89012456789</math>:

<pre>
print('  i        a                  b                  c            b - a    (b - a)/c')
f = lambda x: x - 1234567.89012456789
res = bisect(f, 1234550, 1234581, 5e-7)
print('In %2d steps the absolute error case gives %20.18F' % (res[0][1], res[0][0]))
print('In %2d steps the relative error case gives %20.18F' % (res[1][1], res[1][0]))
print('  as the approximation to 1234567.89012456789')
</pre>

<pre>
  i        a                  b                  c            b - a    (b - a)/c
  1    1234550.0000000    1234581.0000000      1.2345655e+06 | 3.10e+01 2.51e-05
  2    1234565.5000000    1234581.0000000      1.2345732e+06 | 1.55e+01 1.26e-05
  3    1234565.5000000    1234573.2500000      1.2345694e+06 | 7.75e+00 6.28e-06
  4    1234565.5000000    1234569.3750000      1.2345674e+06 | 3.88e+00 3.14e-06
  5    1234567.4375000    1234569.3750000      1.2345684e+06 | 1.94e+00 1.57e-06
  6    1234567.4375000    1234568.4062500      1.2345679e+06 | 9.69e-01 7.85e-07
  7    1234567.4375000    1234567.9218750      1.2345677e+06 | 4.84e-01 3.92e-07
  8    1234567.6796875    1234567.9218750      1.2345678e+06 | 2.42e-01 -----
  9    1234567.8007812    1234567.9218750      1.2345679e+06 | 1.21e-01 -----
 10    1234567.8613281    1234567.9218750      1.2345679e+06 | 6.05e-02 -----
 11    1234567.8613281    1234567.8916016      1.2345679e+06 | 3.03e-02 -----
 12    1234567.8764648    1234567.8916016      1.2345679e+06 | 1.51e-02 -----
 13    1234567.8840332    1234567.8916016      1.2345679e+06 | 7.57e-03 -----
 14    1234567.8878174    1234567.8916016      1.2345679e+06 | 3.78e-03 -----
 15    1234567.8897095    1234567.8916016      1.2345679e+06 | 1.89e-03 -----
 16    1234567.8897095    1234567.8906555      1.2345679e+06 | 9.46e-04 -----
 17    1234567.8897095    1234567.8901825      1.2345679e+06 | 4.73e-04 -----
 18    1234567.8899460    1234567.8901825      1.2345679e+06 | 2.37e-04 -----
 19    1234567.8900642    1234567.8901825      1.2345679e+06 | 1.18e-04 -----
 20    1234567.8901234    1234567.8901825      1.2345679e+06 | 5.91e-05 -----
 21    1234567.8901234    1234567.8901529      1.2345679e+06 | 2.96e-05 -----
 22    1234567.8901234    1234567.8901381      1.2345679e+06 | 1.48e-05 -----
 23    1234567.8901234    1234567.8901308      1.2345679e+06 | 7.39e-06 -----
 24    1234567.8901234    1234567.8901271      1.2345679e+06 | 3.70e-06 -----
 25    1234567.8901234    1234567.8901252      1.2345679e+06 | 1.85e-06 -----
 26    1234567.8901243    1234567.8901252      1.2345679e+06 | 9.24e-07 -----
 27    1234567.8901243    1234567.8901248      1.2345679e+06 | 4.62e-07 -----
In 27 steps the absolute error case gives 1234567.890124522149562836
In  7 steps the relative error case gives 1234567.679687500000000000
  as the approximation to 1234567.89012456789
</pre>

In this case, the absolute difference criterion tries to obtain 6 '''decimal places''' of agreement even though there are already 7 digits before the decimal point. The relative difference criterion gives 7 significant digits, all of them before the decimal point. These two examples show that the relative difference method produces much more satisfactory results than the absolute difference method.

A common idea used in algorithms for the bisection method is to predetermine the number of steps required to achieve a desired accuracy. This is done by noting that, after <math>n</math> bisections, the maximum difference between the root and the approximation satisfies
:<math>|c_n-c|\le\frac{|b-a|}{2^n} < \epsilon.</math>
This formula has been used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root within a certain number of '''decimal places'''. The number ''n'' of iterations needed to achieve such a required tolerance <math>\epsilon</math> is bounded by
:<math>n \le \left\lceil\log_2\left(\frac{b-a}{\epsilon}\right)\right\rceil.</math>
The problem is that this iteration count is derived from the absolute difference criterion, which, as shown above, should not be relied upon.
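The bound itself is easy to check numerically. A minimal sketch (the helper name <code>max_bisections</code> is ours, not from the sources cited above):

```python
import math

def max_bisections(a, b, eps):
    """Upper bound on the number of halvings needed before the
    interval width (b - a) / 2**n drops below eps."""
    return math.ceil(math.log2((b - a) / eps))

# For the interval [0, 1] and the tolerance 5e-7 used above:
print(max_bisections(0.0, 1.0, 5e-7))  # 21, since 1 / 2**21 ~ 4.77e-7 < 5e-7
```

This is consistent with the first run above, where the interval width first fell below <math>5\times10^{-7}</math> after 21 halvings, yet the relative-error criterion went on to require 52 iterations, so the predicted count applies only to the absolute test.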
An alternative approach has been suggested by MIT: http://web.mit.edu/10.001/Web/Tips/Converge.htm
<blockquote>'''Convergence Tests, RTOL and ATOL'''

"Tolerances are usually specified as either a relative tolerance RTOL or an absolute tolerance ATOL, or both. The user typically desires that

:| True value − Computed value | < RTOL*|True Value| + ATOL &nbsp;&nbsp;&nbsp;&nbsp; (Eq. 1)

where RTOL controls the number of significant figures in the computed value (a float or a double), and a small ATOL is just a "safety net" for the case where True Value is close to zero. (What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?) You should write your programs to take both RTOL and ATOL as inputs."</blockquote>
If the 'True Value' is large, the RTOL term controls the error, so the combined test helps in that case. If the 'True Value' is small, the error is controlled by ATOL, which reintroduces the weaknesses of the absolute difference method. The page asks "What would happen if ATOL = 0 and True Value = 0? Would the convergence test ever be satisfied?" but makes no attempt to answer the question. The answer to this question will follow.
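A combined test in the spirit of Eq. 1 can be sketched as follows. This is an illustration, not code from the MIT page: the function name <code>converged</code> and the use of the bracketing interval's width as the error estimate are our assumptions.

```python
def converged(a, b, c, rtol, atol):
    # Combined stopping test: accept the midpoint c once the bracketing
    # interval [a, b] is narrower than rtol * |c| + atol.
    return (b - a) <= rtol * abs(c) + atol

# Large root (values from the second run above): the rtol term dominates,
# so the test behaves like the relative criterion.
print(converged(1234567.4375, 1234567.921875, 1234567.6796875, 5e-7, 1e-12))  # True

# Root near zero with atol = 0: |c| shrinks along with the interval, so
# rtol * |c| stays far smaller than the width and the test never fires.
print(converged(0.0, 1e-12, 5e-13, 5e-7, 0.0))  # False
```

The second call hints at why a nonzero ATOL is needed as a "safety net" near a zero root, though at the cost of falling back to an absolute measure there.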