Look And Say, Revisited
March 28, 2011
Our solution uses two mathematical techniques: Horner’s rule and bisection. First, here are the coefficients of Conway’s degree-71 polynomial, listed from the constant term up to the x^71 term; Conway’s constant is a root of this polynomial:
(define cs '(-6 3 -6 12 -4 7 -7 1 0 5 -2 -4 -12 2 7
12 -7 -10 -4 3 9 -7 0 -8 14 -3 9 2 -3 -10 -2 -6 1
10 -3 1 7 -7 7 -12 -5 8 6 10 -8 -8 -7 -3 9 1 6 6 -2
-3 -10 -2 3 5 2 -1 -1 -1 -1 -1 1 2 2 -1 -2 -1 0 1))
Horner's rule allows us to easily calculate the value of the polynomial at any given point:
(define (horner x cs)
  ; evaluate the polynomial with coefficients cs (constant term first) at x
  (let loop ((z 0) (cs (reverse cs)))
    (if (null? cs) z
        (loop (+ (* x z) (car cs)) (cdr cs)))))
Then we can use bisection to find a root. Here, lo must evaluate to a negative value of the polynomial and hi to a positive value, regardless of whether lo < hi or hi < lo:
(define (bisect lo hi eps cs)
  ; repeatedly halve the interval, keeping a negative value at lo and a
  ; positive value at hi, until the interval is narrower than eps
  (let loop ((lo lo) (hi hi))
    (if (< (abs (- hi lo)) eps)
        (exact->inexact (/ (+ lo hi) 2))
        (let* ((mid (/ (+ lo hi) 2))
               (fmid (horner mid cs)))
          (if (negative? fmid)
              (loop mid hi)
              (loop lo mid))))))
Conway’s polynomial evaluates to -6 at x=0, -6 at x=1, and 1256202183622743302568 at x=2, so we know there is a root between 1 and 2. That makes sense; the length of each term of the look-and-say sequence grows by about 30% from one term to the next. Thus, we calculate Conway’s constant as follows:
> (bisect 1 2 1e-15 cs)
1.3035772690342964
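As a quick check of the bracketing claim, we can also evaluate the polynomial at the endpoints with the horner procedure; the values below are the ones quoted above, and they confirm the sign change that bisect relies on:

> (horner 0 cs)
-6
> (horner 1 cs)
-6
> (horner 2 cs)
1256202183622743302568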
You can see Conway’s constant to a greater degree of accuracy at A014715, and you can run the program at http://programmingpraxis.codepad.org/3ia1g1r8.
My Haskell solution (see http://bonsaicode.wordpress.com/2011/03/28/programming-praxis-look-and-say-revisited/ for a version with comments):
conway :: Num a => a -> a
conway x = sum $ zipWith (*) (iterate (* x) 1)
    [ -6, 3,-6, 12,-4, 7,-7, 1, 0, 5,-2, -4,-12, 2, 7,12,-7,-10
    , -4, 3, 9, -7, 0,-8,14,-3, 9, 2,-3,-10, -2,-6, 1,10,-3, 1
    , 7, -7, 7,-12,-5, 8, 6,10,-8,-8,-7, -3, 9, 1, 6, 6,-2, -3
    ,-10, -2, 3, 5, 2,-1,-1,-1,-1,-1, 1, 2, 2,-1,-2,-1, 0, 1]

root :: (Fractional a, Ord a) => (a -> a) -> a -> a -> a -> a
root f lo hi e | abs (f mid) < e = mid
               | f mid > 0       = root f lo mid e
               | otherwise       = root f mid hi e
               where mid = (lo + hi) / 2

main :: IO ()
main = print $ root conway 1 2 1e-7 == 1.303577269034296
Cheating with Sage:
I’ll come up with a real solution soon.
Your polynomial appears to have some typos, and doesn’t match what is shown at MathWorld. For example, see the x^61 term.
Fixed. Thanks.
As it turns out, I should have used
find_root(f, 1, 2) in Sage instead. As for my own code, I went for Newton’s Method. It requires knowing
the derivative of your function (simple enough to calculate for polynomials),
so I wrote a second Horner’s Rule to build it. The nice thing about Newton’s
Method is that if the function is reasonably well-behaved in the neighborhood
of interest, it converges quadratically to the root.
from __future__ import division

eps = 1e-12

def newtons_method(f, fp, x0):
    x, i = x0, 0
    while abs(f(x)) > eps and i < 1e3:
        i += 1
        x -= f(x) / fp(x)
    return x

def horner(cs):
    return lambda x: sum(cs[n] * pow(x, n) for n in xrange(len(cs)))

def horner_diff(cs):
    return lambda x: sum(cs[n] * n * pow(x, n - 1) for n in xrange(1, len(cs)))

if __name__ == "__main__":
    cs = [-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10,
          -4, 3, 9, -7, 0, -8, 14, -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1,
          7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1, 6, 6, -2, -3,
          -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1]
    print newtons_method(horner(cs), horner_diff(cs), 1.3)

Come to think of it, I didn’t actually use Horner’s Rule to construct my polynomials, just the naive definition. Apologies on misnaming my functions!
A version that actually uses Horner’s Rule:
from __future__ import division

eps = 1e-12

def newtons_method(f, fp, x0):
    x, i = x0, 0
    while abs(f(x)) > eps and i < 1e3:
        i += 1
        x -= f(x) / fp(x)
    return x

def horner(cs):
    return lambda x: reduce(lambda a, b: a * x + b, reversed(cs), 0)

if __name__ == "__main__":
    cs = [-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10,
          -4, 3, 9, -7, 0, -8, 14, -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1,
          7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1, 6, 6, -2, -3,
          -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1]
    f = horner(cs)
    fp = horner([i * cs[i] for i in xrange(1, len(cs))])
    print newtons_method(f, fp, 1.3)

My c++ solution :)
#include <iostream>
#include <cmath>

double fnCalcFunc(const int anCoef[], int nSize, double x)
{
    double fResult = 0;
    for (int j = 0; j < nSize; ++j)
        fResult += anCoef[j] * pow(x, j);
    return fResult;
}

int main(int argc, char **argv)
{
    const int anCoef[] = {-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7,
                          12, -7, -10, -4, 3, 9, -7, 0, -8, 14, -3, 9, 2, -3, -10,
                          -2, -6, 1, 10, -3, 1, 7, -7, 7, -12, -5, 8, 6, 10, -8,
                          -8, -7, -3, 9, 1, 6, 6, -2, -3, -10, -2, 3, 5, 2, -1,
                          -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1};
    const int nCoef = sizeof(anCoef) / sizeof(int);
    int anDeri[nCoef - 1];
    const double fEpsilon = 0.000001;
    double x = 5, fValueAtX;

    for (int i = 1; i < nCoef; ++i)
        anDeri[i - 1] = anCoef[i] * i;

    while ((fValueAtX = fnCalcFunc(anCoef, nCoef, x)) > fEpsilon)
        x = x - fValueAtX / fnCalcFunc(anDeri, nCoef - 1, x);

    std::cout << x << std::endl;
}

Factor solution. Interesting that if epsilon is set to 1e-8, Newton’s method doesn’t converge to that precision. Instead it alternately cycles between 1.303577269034296 and 1.303577269034297 until the iteration count breaks the cycle. If the precision is set to 1e-7, it converges rapidly (5 iterations) to 1.303577269034296.
CONSTANT: epsilon 1e-7

CONSTANT: conway-poly {
    1 0 -1 -2 -1 2 2 1 -1 -1 -1 -1 -1 2 5 3 -2 -10 -3 -2 6 6 1 9
    -3 -7 -8 -8 10 6 8 -5 -12 7 -7 7 1 -3 10 1 -6 -2 -10 -3 2 9 -3 14
    -8 0 -7 9 3 -4 -10 -7 12 7 2 -12 -4 -2 5 0 1 -7 7 -4 12 -6 3 -6
}

: deriv ( poly -- poly' )
    1 over length 1 - [a,b] <reversed> [ * ] 2map ;

: eval-poly ( x poly -- poly(x) )
    0 [ swap pick * + ] reduce nip ;

: call-f(x) ( x f -- x ) call( x -- x ) ; inline

:: newton ( f f' x0 -- x )
    1 x0
    [ dup f call-f(x) abs epsilon < pick 1000 > or ]
    [ dup [ f call-f(x) ] [ f' call-f(x) ] bi / - [ 1 + ] dip ]
    until nip ;

: conway-constant ( -- c )
    conway-poly dup deriv [ [ eval-poly ] curry ] bi@ 1.3 newton ;

( scratchpad ) conway-constant .
1.303577269034296
( scratchpad )
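For readers who prefer to stay in the Scheme of the original exercise, here is a rough, untested sketch of the same Newton iteration used in the Python, C++ and Factor comments above; diff-cs and newton are names made up for this sketch, and it assumes the cs and horner definitions from the post:

(define diff-cs
  ; derivative coefficients: i * cs_i becomes the coefficient of x^(i-1)
  (let loop ((i 1) (rest (cdr cs)) (ds '()))
    (if (null? rest) (reverse ds)
        (loop (+ i 1) (cdr rest) (cons (* i (car rest)) ds)))))

(define (newton f fp x0 eps limit)
  ; iterate x <- x - f(x)/f'(x) until |f(x)| < eps or limit steps pass
  (let loop ((x x0) (i 0))
    (if (or (< (abs (f x)) eps) (>= i limit)) x
        (loop (- x (/ (f x) (fp x))) (+ i 1)))))

; example call; with a tolerance of 1e-7 this should land near the
; 1.303577269034296 reported above
(newton (lambda (x) (horner x cs))
        (lambda (x) (horner x diff-cs))
        1.3 1e-7 1000)

As the Factor comment notes, pushing the tolerance much past 1e-7 runs into double-precision rounding, so the iteration can oscillate between adjacent floating-point values instead of terminating on the eps test.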