Look And Say, Revisited

March 28, 2011

We calculated the look and say sequence in a previous exercise, and mentioned there that the sequence has some fascinating mathematical properties. One of them is that, if L_n is the number of digits in the n-th element of the sequence, then

\lim_{n \rightarrow \infty} \frac{L_{n+1}}{L_n} = \lambda

where λ is known as Conway’s constant, calculated by the British mathematician John Horton Conway as the unique positive real root of the following polynomial:

x^{71} - x^{69} - 2 x^{68} - x^{67} + 2 x^{66} + 2 x^{65} + x^{64} - x^{63} - x^{62} - x^{61} - x^{60} - x^{59} + 2 x^{58} + 5 x^{57} + 3 x^{56} - 2 x^{55} - 10 x^{54} - 3 x^{53} - 2 x^{52} + 6 x^{51} + 6 x^{50} + x^{49} + 9 x^{48} - 3 x^{47} - 7 x^{46} - 8 x^{45} - 8 x^{44} + 10 x^{43} + 6 x^{42} + 8 x^{41} - 5 x^{40} - 12 x^{39} + 7 x^{38} - 7 x^{37} + 7 x^{36} + x^{35} - 3 x^{34} + 10 x^{33} + x^{32} - 6 x^{31} - 2 x^{30} - 10 x^{29} - 3 x^{28} + 2 x^{27} + 9 x^{26} - 3 x^{25} + 14 x^{24} - 8 x^{23} - 7 x^{21} + 9 x^{20} + 3 x^{19} - 4 x^{18} - 10 x^{17} - 7 x^{16} + 12 x^{15} + 7 x^{14} + 2 x^{13} - 12 x^{12} - 4 x^{11} - 2 x^{10} + 5 x^9 + x^7 - 7 x^6 + 7 x^5 - 4 x^4 + 12 x^3 - 6 x^2 + 3 x - 6
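As a concrete check, here is a short Python sketch (not part of the original exercise; the names `conway_poly` and `bisect` are invented here) that evaluates the polynomial by Horner's rule and isolates its root in (1, 2) by bisection, with the coefficients transcribed constant term first from the formula above:

```python
from functools import reduce

# Coefficients of Conway's polynomial, constant term first,
# transcribed from the formula above.
COEFFS = [-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10,
          -4, 3, 9, -7, 0, -8, 14, -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1,
          7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1, 6, 6, -2, -3,
          -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1]

def conway_poly(x):
    """Evaluate the polynomial at x by Horner's rule (leading term first)."""
    return reduce(lambda acc, c: acc * x + c, reversed(COEFFS), 0.0)

def bisect(f, lo, hi, eps=1e-13):
    """Bisection root finder; assumes f(lo) < 0 < f(hi)."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Since the polynomial is negative at 1 (the coefficients sum to −6) and positive at 2, `bisect(conway_poly, 1, 2)` should return λ ≈ 1.3035772690, the polynomial's only real root in that interval.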

Conway analyzed the look and say sequence in his paper “The Weird and Wonderful Chemistry of Audioactive Decay,” published in Eureka, Volume 46, Pages 5–18, in 1986. On his blog, Nathaniel Johnston gives a derivation of the polynomial.

Your task is to compute the value of λ. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
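The limit itself can be sanity-checked empirically. Here is a quick Python sketch (the helpers `look_and_say` and `length_ratio` are names invented for this illustration) that iterates the sequence and watches the digit-count ratio drift toward λ ≈ 1.30357…:

```python
from itertools import groupby

def look_and_say(s):
    """One look-and-say step, e.g. '1211' -> '111221' (one 1, one 2, two 1s)."""
    return ''.join(str(len(list(g))) + d for d, g in groupby(s))

def length_ratio(n, seed='1'):
    """Apply n look-and-say steps to seed and return L_{n+1} / L_n."""
    s = seed
    for _ in range(n):
        s = look_and_say(s)
    return len(look_and_say(s)) / len(s)
```

After about 40 steps the ratio already agrees with λ to a couple of decimal places, but convergence to full precision is slow, which is why finding the root of the polynomial is the practical way to compute λ.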


9 Responses to “Look And Say, Revisited”

  1. My Haskell solution (see http://bonsaicode.wordpress.com/2011/03/28/programming-praxis-look-and-say-revisited/ for a version with comments):

    conway :: Num a => a -> a
    conway x = sum $ zipWith (*) (iterate (* x) 1)
        [ -6,  3,-6, 12,-4, 7,-7, 1, 0, 5,-2, -4,-12, 2, 7,12,-7,-10
        , -4,  3, 9, -7, 0,-8,14,-3, 9, 2,-3,-10, -2,-6, 1,10,-3,  1
        ,  7, -7, 7,-12,-5, 8, 6,10,-8,-8,-7, -3,  9, 1, 6, 6,-2, -3
        ,-10, -2, 3,  5, 2,-1,-1,-1,-1,-1, 1,  2,  2,-1,-2,-1, 0,  1]
    
    root :: (Fractional a, Ord a) => (a -> a) -> a -> a -> a -> a
    root f lo hi e | abs (f mid) < e = mid
                   | f mid > 0       = root f lo mid e
                   | otherwise       = root f mid hi e
                   where mid = (lo + hi) / 2
    
    main :: IO ()
    main = print $ root conway 1 2 1e-7 == 1.303577269034296
    
  2. […] today’s Programming Praxis exercise, our goal is to calculate Conway’s constant. Let’s get […]

  3. Graham said

    Cheating with Sage:

    sage: cs = [-6,3,-6,12,-4,7,-7,1,0,5,-2,-4,-12,2,7,12,-7,-10,-4,3,9,-7,0,-8,
    ....:       14,-3,9,2,-3,-10,-2,-6,1,10,-3,1,7,-7,7,-12,-5,8,6,10,-8,-8,-7,-3,9,1,
    ....:       6,6,-2,-3,-10,-2,3,5,2,-1,-1,-1,-1,-1,1,2,2,-1,-2,-1,0,1]
    sage: f = 0
    sage: for n in xrange(72):
    ....:     f += x^n * cs[n]
    ....:
    sage: f.roots(ring=RR)
    [(-1.08824411254383, 1), (-1.01115382010126, 1), (1.30357726903430, 1)]
    

    I’ll come up with a real solution soon.

  4. Mike said

    Your polynomial appears to have some typos and doesn’t match what is shown at MathWorld. For example, see the x^61 term.

  5. programmingpraxis said

    Fixed. Thanks.

  6. Graham said

    As it turns out, I should have used find_root(f, 1, 2) in Sage
    instead. As for my own code, I went for Newton’s Method. It requires knowing
    the derivative of your function (simple enough to calculate for polynomials),
    so I wrote a second Horner’s Rule to build it. The nice thing about Newton’s
    Method is that if the function is reasonably well-behaved in the neighborhood
    of interest, it converges quadratically to the root.

    from __future__ import division
    
    eps = 1e-12
    
    def newtons_method(f, fp, x0):
        x, i = x0, 0
        while abs(f(x)) > eps and i < 1e3:
            i += 1
            x -= f(x) / fp(x)
        return x
    
    def horner(cs):
        return lambda x: sum(cs[n] * pow(x, n) for n in xrange(len(cs)))
    
    def horner_diff(cs):
        return lambda x: sum(cs[n] * n * pow(x, n - 1) for n in xrange(1, len(cs)))
    
    if __name__ == "__main__":
        cs = [-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10,
                -4, 3, 9, -7, 0, -8,  14,  -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1,
                7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1,  6, 6, -2, -3,
                -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1]
        print newtons_method(horner(cs), horner_diff(cs), 1.3)
    
  7. Graham said

    Come to think of it, I didn’t actually use Horner’s Rule to construct my
    polynomials, just the naive definition. Apologies on misnaming my functions!
    A version that actually uses Horner’s Rule:

    from __future__ import division
    
    eps = 1e-12
    
    def newtons_method(f, fp, x0):
        x, i = x0, 0
        while abs(f(x)) > eps and i < 1e3:
            i += 1
            x -= f(x) / fp(x)
        return x
    
    def horner(cs):
        return lambda x: reduce(lambda a, b: a * x + b, reversed(cs), 0)
    
    if __name__ == "__main__":
        cs = [-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10,
                -4, 3, 9, -7, 0, -8,  14,  -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1,
                7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1,  6, 6, -2, -3,
                -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1]
        f = horner(cs)
        fp = horner([i * cs[i] for i in xrange(1, len(cs))])
        print newtons_method(f, fp, 1.3)
    
  8. arturasl said

    My c++ solution :)

    #include <iostream>
    #include <cmath>
    
    double fnCalcFunc(const int anCoef[], int nSize, double x){
    	double fResult = 0;
    	for (int j = 0; j < nSize; ++j) fResult += anCoef[j] * pow(x, j);
    	return fResult;
    }
    
    int main(int argc, char **argv) {
    	const int anCoef[] = {-6, 3, -6, 12, -4, 7, -7, 1, 0, 5, -2, -4, -12, 2, 7, 12, -7, -10, -4, 3, 9, -7, 0, -8, 14, -3, 9, 2, -3, -10, -2, -6, 1, 10, -3, 1, 7, -7, 7, -12, -5, 8, 6, 10, -8, -8, -7, -3, 9, 1, 6, 6, -2, -3, -10, -2, 3, 5, 2, -1, -1, -1, -1, -1, 1, 2, 2, -1, -2, -1, 0, 1};
    	const int nCoef = sizeof(anCoef) / sizeof(int);
    	int anDeri[nCoef - 1];
    	const double fEpsilon = 0.000001;
    	double x = 5, fValueAtX;
    
    	for (int i = 1; i < nCoef;  ++i) anDeri[i - 1] = anCoef[i] * i;
    
    	// use fabs so the loop also stops if an iteration overshoots
    	// below the root and f(x) goes negative
    	while (std::fabs(fValueAtX = fnCalcFunc(anCoef, nCoef, x)) > fEpsilon)
    		x = x - fValueAtX / fnCalcFunc(anDeri, nCoef - 1, x);
    
    	std::cout << x << std::endl;
    }
    
  9. David said

    Factor solution. Interesting that if epsilon is set to 1e-8, Newton’s method doesn’t converge to that precision. Instead it alternately cycles between 1.303577269034296 and 1.303577269034297 until the iteration count breaks the cycle. If the precision is set to 1e-7, it converges rapidly (5 iterations) to 1.303577269034296.

    CONSTANT: epsilon 1e-7
    
    CONSTANT: conway-poly
       { 1    0   -1   -2   -1     2     2    1    -1   -1   -1   -1 
        -1    2    5    3   -2   -10    -3   -2     6    6    1    9
        -3   -7   -8   -8   10     6     8   -5   -12    7   -7    7
         1   -3   10    1   -6    -2   -10   -3     2    9   -3    14
        -8    0   -7    9    3    -4   -10   -7    12    7    2   -12
        -4   -2    5    0    1    -7     7   -4    12   -6    3    -6 }
    
    : deriv ( poly -- poly' )
        1  over length 1 -  [a,b] <reversed> [ * ] 2map ;
    
    : eval-poly ( x poly -- poly(x) )
        0 [ swap pick * + ] reduce nip ;
    
    : call-f(x)  ( x f -- x )
        call( x -- x ) ; inline
    
    :: newton ( f f' x0 -- x )
        1 x0
        [ dup f call-f(x) abs epsilon <  pick 1000 >  or ]
            [ dup [ f call-f(x) ] [ f' call-f(x) ] bi / -   [ 1 + ] dip ]
        until nip ;
    
    : conway-constant ( -- c )
       conway-poly dup deriv 
       [ [ eval-poly ] curry ] bi@  1.3  newton ;
    

    ( scratchpad ) conway-constant .
    1.303577269034296
    ( scratchpad )
