## Fibonacci Numbers

### July 30, 2010

One of the first functions taught to programmers who are just learning about recursion is the function that computes the Fibonacci numbers. The naive function takes exponential time, as each recursive call recomputes the values of smaller Fibonacci numbers, so programmers are next taught how to remove the recursion by explicitly storing state, giving a linear-time iterative algorithm. The upshot is that programming students are left with the impression that recursion is bad and iteration is good.
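To make the contrast concrete, here is a sketch of both versions in Python (one of several languages used by commenters below); function names are my own:

```python
def fib_rec(n):
    """Naive recursion: exponential time, since fib_rec(n-1) and
    fib_rec(n-2) redundantly recompute the same smaller values."""
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    """Linear time: carry the last two values as explicit state."""
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```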

It is actually possible to improve the performance with a logarithmic-time algorithm. Consider the matrix identity

    [ 1  1 ]^n   [ F_{n+1}  F_n     ]
    [ 1  0 ]   = [ F_n      F_{n-1} ]

Each time the matrix is multiplied by itself, the number in the lower left-hand corner is the next Fibonacci number; for instance, *F*_{4} = 3 (*F*_{0} = 0 is a special case). Of course, the powering can be done using a binary square-and-multiply algorithm, as in the `ipow` and `expm` functions of the Standard Prelude, giving a logarithmic algorithm for computing the *n*th Fibonacci number.
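As a sketch of the matrix method in Python (since the Standard Prelude's `ipow` and `expm` are not reproduced here, these helpers are my own): square-and-multiply on the 2×2 matrix needs only O(log n) matrix multiplications.

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested tuples."""
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def mat_pow(m, n):
    """Binary square-and-multiply: O(log n) multiplications."""
    result = ((1, 0), (0, 1))  # 2x2 identity
    while n > 0:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def fib_log(n):
    if n == 0:
        return 0  # special case, as noted above
    # lower left-hand corner of [[1,1],[1,0]]^n is F(n)
    return mat_pow(((1, 1), (1, 0)), n)[1][0]
```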

Your task is to write the three fibonacci functions — exponential, linear, and logarithmic — described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.



My Haskell solution (see http://bonsaicode.wordpress.com/2010/07/30/programming-praxis-fibonacci-numbers/ for a version with comments):

I cheated a bit to get the logic of my matrix_mul and matrix_pow right, looking at Remco’s very clean answer and some other sources (I haven’t had coffee yet today…)

Here’s an inelegant Python implementation; apologies for the ugly matrix_mul definition.

…and also apologies about overindenting the second line of matrix_mul. It didn’t look right in the comment box, but I should have left it alone.

I really appreciate your excellent blog. Please keep up the good work!

Here is a logarithmic version — but using base 3 instead of base 2.

Oops, left off the link to codepad.org.

Alternative (and, admittedly, slightly hacky, as evidenced by the warnings you get if you don’t specify -fno-warn-missing-methods and -fno-warn-orphans) Haskell solution for the logarithmic version, based on the fact that the default power operator is already logarithmic:

An alternate scheme implementation with minor bit diddling.

And since the m[0,0] = m[1,1]+m[1,0]…

That will teach me to proofread.

Err!!! delete that backslash. Praxis needs a way for responders to edit their mistakes!

please also give a solution in the C language pleaseeeeeeeee.

After writing this exercise, I came across Dijkstra’s paper describing a set of recurrence equations that calculate the *n*th Fibonacci number. His method is orthogonal to the matrix method described above, but more convenient for computation. Note that Dijkstra starts the sequence differently; where we say *F*_{0} = 0, Dijkstra says *F*_{0} = 1. Here’s the code:

```
(define (fib n)
  (define (square x) (* x x))
  (cond ((zero? n) 0)
        ((or (= n 1) (= n 2)) 1)
        ((even? n)
          (let* ((n2 (quotient n 2)) (n2-1 (- n2 1)))
            (* (fib n2) (+ (* 2 (fib n2-1)) (fib n2)))))
        (else
          (let* ((n2-1 (quotient n 2)) (n2 (+ n2-1 1)))
            (+ (square (fib n2-1)) (square (fib n2)))))))
```
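For readers more comfortable with Python, the same doubling recurrences translate directly (a sketch for checking against the Scheme above, not Dijkstra's own code):

```python
def fib(n):
    """Dijkstra's doubling recurrences:
    F(2k)   = F(k) * (2*F(k-1) + F(k))
    F(2k+1) = F(k)^2 + F(k+1)^2"""
    if n == 0:
        return 0
    if n in (1, 2):
        return 1
    k = n // 2
    if n % 2 == 0:
        return fib(k) * (2 * fib(k - 1) + fib(k))
    return fib(k) ** 2 + fib(k + 1) ** 2
```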

I’ve seen difference equations (e.g., fibonacci) expressed as matrices before, but this was the first time I’ve seen the concepts of logarithmic-time exponentiation, matrices, and difference equations all in one problem. This was a genuinely entertaining exercise; thanks for posting.

Since the core of this problem was essentially finding an O(log n) algorithm for matrix exponentiation, I also wrote an O(1) solution based upon diagonalization. It looks like it works, but for very large values it may suffer from accuracy issues due to the use of inexact numbers.
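For reference, a closed-form sketch in Python along the lines Gil describes (diagonalizing the matrix yields Binet's formula via the dominant eigenvalue φ; floating-point rounding makes it unreliable for large n, roughly n > 70 with double precision):

```python
import math

def fib_closed(n):
    """Binet's formula: fib(n) = round(phi**n / sqrt(5)).
    Exact only while phi**n / sqrt(5) fits within float precision."""
    phi = (1 + math.sqrt(5)) / 2  # dominant eigenvalue of [[1,1],[1,0]]
    return round(phi ** n / math.sqrt(5))
```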

[ I fixed the formatting. In the future, look at the code formatting how-to for instructions. Thanks for your kind words, and for an interesting variation on the problem. — PP ]

```java
// Instance state assumed by the snippet: nextTerm holds the most
// recently printed term and counter its index in the sequence.
int nextTerm = 0;
int counter = 2;

public void fibonacci(int firstTerm, int secTerm)
{
    if (nextTerm == 75025) return;  // 75025 = F(25), the stopping point
    nextTerm = firstTerm + secTerm;
    System.out.println(counter + ". " + nextTerm);
    counter++;
    fibonacci(secTerm, nextTerm);
}
```

Goes to 25th term, based on firstTerm = 0 and secTerm = 1.

Gil Martinez, I think your O(1) approximation is about as honest as the “O(log n)” algorithm we see here. It’s actually impossible to achieve O(log n) time for calculating arbitrarily large Fibonacci numbers: to represent fib(n) requires O(log(fib(n))) bits. But as shown at this site, fib(n) = round(φ^n / √5), which is Θ(φ^n), so we need Θ(log(φ^n)) = Θ(n) bits, which we can’t possibly produce in O(log n) time. What I don’t know is whether the naive algorithm is actually O(n) or O(n log n).
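The Θ(n)-bit claim is easy to check empirically: the bit length of fib(n) grows linearly in n, at a slope of about log₂(φ) ≈ 0.694 bits per index (a quick Python sketch, helper name my own):

```python
def fib_bits(n):
    """Compute fib(n) iteratively and return its size in bits."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a.bit_length()

# bit lengths grow linearly, roughly 0.694 * n for large n
```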

I missed something more important: The PDPFib function is defined using (expt eig1 n). I don’t see how that function can possibly be O(1), even assuming that addition, subtraction, multiplication and division are.