Look And Say, Revisited
March 28, 2011
Our solution uses two mathematical techniques: Horner’s rule and bisection. First, here are the coefficients of Conway’s polynomial, listed from the constant term up to the x^71 term; Conway’s constant is a root of this degree-71 polynomial:
(define cs '(-6 3 -6 12 -4 7 -7 1 0 5 -2 -4 -12 2 7
12 -7 -10 -4 3 9 -7 0 -8 14 -3 9 2 -3 -10 -2 -6 1
10 -3 1 7 -7 7 -12 -5 8 6 10 -8 -8 -7 -3 9 1 6 6 -2
-3 -10 -2 3 5 2 -1 -1 -1 -1 -1 1 2 2 -1 -2 -1 0 1))
Horner's rule lets us evaluate the polynomial at any point, using just one multiplication and one addition per coefficient:
(define (horner x cs)
  (let loop ((z 0) (cs (reverse cs)))
    (if (null? cs) z
      (loop (+ (* x z) (car cs)) (cdr cs)))))
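For example, here is a quick check with a made-up polynomial, evaluating x^2 - 2 at x = 3 (note that the coefficients are given from the constant term up):
> (horner 3 '(-2 0 1))
7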
Then we can use bisection to find a root. The invariant is that the polynomial is negative at lo and positive at hi, regardless of whether lo < hi or hi < lo:
(define (bisect lo hi eps cs)
  (let loop ((lo lo) (hi hi))
    (if (< (abs (- hi lo)) eps)
        (exact->inexact (/ (+ lo hi) 2))
        (let* ((mid (/ (+ lo hi) 2))
               (fmid (horner mid cs)))
          (if (negative? fmid)
              (loop mid hi)
              (loop lo mid))))))
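Continuing the small example above, (bisect 1 2 1e-15 '(-2 0 1)) brackets the positive root of x^2 - 2 (the polynomial is negative at 1 and positive at 2) and returns the square root of two to roughly the requested tolerance.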
Conway’s polynomial evaluates to -6 at x=0, -6 at x=1, and 1256202183622743302568 at x=2. Since the polynomial is negative at x=1 and positive at x=2, there must be a root between 1 and 2. That makes sense: the length of each term of the look-and-say sequence grows by about 30% at each step, and Conway’s constant is exactly that growth rate.
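You can confirm the bracketing values at the REPL, assuming the definitions above are loaded:
> (horner 1 cs)
-6
> (horner 2 cs)
1256202183622743302568
Thus, we calculate Conway’s constant as follows: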
> (bisect 1 2 1e-15 cs)
1.3035772690342964
You can see Conway’s constant to a greater degree of accuracy at A014715, and you can run the program at http://programmingpraxis.codepad.org/3ia1g1r8.
My Haskell solution (see http://bonsaicode.wordpress.com/2011/03/28/programming-praxis-look-and-say-revisited/ for a version with comments):
Cheating with Sage:
I’ll come up with a real solution soon.
Your polynomial appears to have some typos and doesn't match what is shown at MathWorld. For example, see the x^61 term.
Fixed. Thanks.
As it turns out, I should have used
find_root(f, 1, 2)
in Sage instead. As for my own code, I went for Newton's Method. It requires knowing the derivative of your function (simple enough to calculate for polynomials), so I wrote a second Horner's Rule to build it. The nice thing about Newton's Method is that if the function is reasonably well-behaved in the neighborhood of interest, it converges quadratically to the root.
Come to think of it, I didn't actually use Horner's Rule to construct my polynomials, just the naive definition. Apologies for misnaming my functions!
A version that actually uses Horner’s Rule:
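Sketched here in Scheme rather than Sage, the idea might look something like this, reusing the horner above (the names deriv and newton, the iteration cap, and the step-size test are all just illustrative choices):

(define (deriv cs)
  ; coefficients of p', in the same constant-term-first order as cs
  (let loop ((k 1) (cs (cdr cs)) (ds '()))
    (if (null? cs) (reverse ds)
        (loop (+ k 1) (cdr cs) (cons (* k (car cs)) ds)))))

(define (newton x0 eps cs)
  ; Newton's method: x <- x - p(x)/p'(x), with p and p' evaluated by horner;
  ; stop when the step is smaller than eps or after 100 iterations
  (let ((ds (deriv cs)))
    (let loop ((x (exact->inexact x0)) (n 0))
      (let ((x1 (- x (/ (horner x cs) (horner x ds)))))
        (if (or (< (abs (- x1 x)) eps) (>= n 100))
            x1
            (loop x1 (+ n 1)))))))

As with any Newton iteration, convergence is only guaranteed when the starting guess is close enough to a simple root, so a rough bracket from bisection makes a good initial guess.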
My C++ solution :)
Factor solution. Interesting that if epsilon is set to 1e-8, Newton's method doesn't converge to that precision. Instead it alternates between 1.303577269034296 and 1.303577269034297 until the iteration count breaks the cycle. If the precision is set to 1e-7, it converges rapidly (5 iterations) to 1.303577269034296.
( scratchpad ) conway-constant .
1.303577269034296
( scratchpad )