## Root Finding, Again

### March 21, 2014

We give our solution in Python, starting with the secant method. This method is not very reliable in practice: since it keeps no bracket on the root, its iterates can wander outside the original bounds, and some steps actually retreat from the root instead of advancing toward it:

    def secant(fun, x1, x2, args=(), xtol=1e-12, maxiter=1000):
        """the fastest (superlinear), but will often fail"""
        a, b = float(x1), float(x2)
        fa, fb = fun(a, *args), fun(b, *args)
        for i in xrange(maxiter):
            if fa == fb:
                raise ZeroDivisionError("fa == fb in secant")
            dx = -(a - b) / (1 - fb / fa)
            x = a + dx
            y = fun(x, *args)
            if abs(dx) < xtol or y == 0.0:
                return x
            a, fa, b, fb = b, fb, x, y
        raise ValueError("too many iterations")

This is essentially the same as the regula falsi method, except that the last two points are always kept for the next iteration, instead of trying to keep a bracket on the root.
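For comparison, regula falsi differs only in which two points it keeps. The post does not show its code; a minimal sketch, assuming the same calling convention as the secant method above, might look like this:

```python
def regula_falsi(fun, x1, x2, args=(), xtol=1e-12, maxiter=1000):
    """linear convergence, but always keeps a bracket on the root"""
    a, b = float(x1), float(x2)
    fa, fb = fun(a, *args), fun(b, *args)
    if fa * fb > 0.0:
        raise ValueError("root not bracketed")
    x_old = a
    for i in range(maxiter):
        x = a - fa * (a - b) / (fa - fb)    # same secant point as above
        y = fun(x, *args)
        if abs(x - x_old) < xtol or y == 0.0:
            return x
        x_old = x
        if fa * y < 0.0:        # root lies in [a, x]: x replaces b
            b, fb = x, y
        else:                   # root lies in [x, b]: x replaces a
            a, fa = x, y
    raise ValueError("too many iterations")
```

Because one endpoint typically gets stuck, the bracket shrinks only linearly, which is why regula falsi is so much slower than the secant method in the timings below.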

Now the Dekker method. The points x1 and x2 must bracket a root. If disp=True, the iterates are printed, and one can see whether each step is a bisection or a secant step.

    import sys

    EPS = sys.float_info.epsilon    # machine epsilon

    class RootNotBracketedException(ValueError):
        pass

    def dekker(fun, x1, x2, args=(), xtol=1e-8, disp=False):
        """see wiki Brent
        c is the last iterate of b
        a, b bracket the solution; b is the point with the smaller abs(fb)
        if f(s) has the same sign as fa, then the old b becomes the new contrapoint
        the solution is within 6*EPS*abs(b) + 2*xtol
        """
        a, b = float(x1), float(x2)
        fa, fb = fun(a, *args), fun(b, *args)
        if fa * fb > 0.0:
            raise RootNotBracketedException(a, b, fa, fb)
        c, fc = a, fa
        feval = 2
        while 1:
            if abs(fa) < abs(fb):   # keep the better iterate in b
                a, fa, b, fb, c, fc = b, fb, a, fa, b, fb
            xm = (a - b) / 2        # bisection step
            tol1 = 2 * EPS * abs(b) + xtol
            if abs(xm) < tol1 or fb == 0.0:
                return b
            p = (b - c) * fb        # secant step is p / q
            q = fb - fc
            if p > 0:
                q = -q
            p = abs(p)
            d = p / q if p < xm * q else xm     # secant if inside bracket, else bisect
            c, fc = b, fb                       # save the previous iterate
            b += d if abs(d) > tol1 else __sign(tol1, xm)
            fb = fun(b, *args)
            feval += 1
            if disp:
                msg = "b" if abs(d) == abs(xm) else "s"
                fmt = "{:3d} a={:7.4g} b={:12.9g} c={:7.4g} fb={:10.3g} xm={:8.2g} {:s}"
                print fmt.format(feval, a, b, c, fb, xm, msg)
            if fa * fb > 0:     # new iterate has the same sign as fa: old b replaces a
                a, fa = c, fc

The __sign function returns x if y > 0, −x if y < 0, and 0 otherwise:

    def __sign(x, y):
        if y > 0.0:
            return x
        elif y < 0.0:
            return -x
        else:
            return 0.0

As a test, we compare the secant, regula falsi and dekker methods by finding the root of cos(x) - x on the interval [0, 1]; in each row the first number is the computed root and the second is the time taken in seconds:

Secant       0.739085133215 0.000127772968474
Regula Falsi 0.739085133215 0.00897438331583
Dekker       0.739085133215 0.00184064914826
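The original timing driver is not shown; a sketch along these lines reproduces the table's format. The stand-in bisection solver is included only so the sketch runs on its own; in the post the solvers being timed were secant, regula falsi and dekker:

```python
import math
import timeit

def bisection(fun, x1, x2, xtol=1e-12):
    """stand-in solver so this sketch is self-contained"""
    a, b = float(x1), float(x2)
    fa, fb = fun(a), fun(b)
    while abs(a - b) > xtol:
        m = (a + b) / 2.0
        fm = fun(m)
        if fa * fm <= 0.0:      # root stays in [a, m]
            b, fb = m, fm
        else:                   # root stays in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

def report(name, solver, fun, x1, x2, repeat=100):
    """one output row: method name, computed root, mean time per call"""
    root = solver(fun, x1, x2)
    t = timeit.timeit(lambda: solver(fun, x1, x2), number=repeat) / repeat
    return "%-12s %.12f %s" % (name, root, t)

line = report("Bisection", bisection, lambda x: math.cos(x) - x, 0.0, 1.0)
```

The exact repeat count and timer are assumptions; any of time.clock, time.time or timeit would produce a comparable table.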

The secant method is very good when it works. Here are the intermediate steps of the Dekker algorithm:

3 a=      0 b= 0.685073357 c=      1 fb=    0.0893 xm=    -0.5 s
4 a=      1 b= 0.736298998 c= 0.6851 fb=   0.00466 xm=    0.16 s
5 a=      1 b= 0.739119362 c= 0.7363 fb= -5.73e-05 xm=    0.13 s
6 a= 0.7363 b= 0.739085112 c= 0.7391 fb=  3.53e-08 xm= -0.0014 s
7 a= 0.7391 b= 0.739085133 c= 0.7391 fb=  2.67e-13 xm= 1.7e-05 s
8 a= 0.7391 b= 0.739085133 c= 0.7391 fb=         0 xm= 1.7e-05 s

By comparison, here are the results when the three methods are run on a “dirty function” that is constant at -1e-10 on (0, 0.99) and then rises steeply to 1 at x = 1:

Secant       fa == fb in secant
Regula Falsi too many iterations
Dekker       0.990000100000 0.000124694101764

The secant method fails because it soon encounters two points with the same function value, and regula falsi creeps toward the root so slowly that it exceeds its iteration limit, but the Dekker method finds the root without difficulty.
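The dirty function is easy to reproduce approximately. Its exact shape is not shown in the post, so the linear rise below is a guess; what matters is the long flat stretch:

```python
def dirty(x):
    """flat at -1e-10 on (0, 0.99), then rising to 1 at x = 1;
    the linear rise is an assumed shape, not the post's exact function"""
    if x < 0.99:
        return -1e-10
    return -1e-10 + (x - 0.99) * (1.0 + 1e-10) / 0.01

# two samples from the flat part have identical values, which is exactly
# the fa == fb condition that makes the secant method give up
flat_pair = (dirty(0.2), dirty(0.7))
```

Any method that divides by a difference of function values hits zero here, while a bracketing method simply keeps shrinking the interval until it straddles the rise.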

You can run the program at http://programmingpraxis.codepad.org/Xe17Hcuf.
