## Integer Logarithms

### May 7, 2010

The integer logarithm of a number *n* to a base *b* is the largest exponent *k* such that *b*<sup>*k*</sup> does not exceed *n* — equivalently, the number of times *b* can be multiplied by itself without exceeding *n*.
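For example, the integer logarithm of 1234 to base 10 is 3, since 10^3 = 1000 ≤ 1234 but 10^4 = 10000 > 1234. A minimal Python sketch of the definition (the function name is mine):

```python
def ilog_naive(b, n):
    """Largest k such that b**k <= n; by convention -1 when n is 0."""
    if n == 0:
        return -1
    k = 0
    power = b
    while power <= n:  # multiply b into the power until it would exceed n
        power *= b
        k += 1
    return k
```

For instance, `ilog_naive(10, 1234)` is 3 and `ilog_naive(2, 1024)` is 10.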

The implementation of the `ilog` function in the Standard Prelude recently changed. The old version looked like this:

```scheme
(define (ilog b n)
  (if (zero? n) -1
    (+ (ilog b (quotient n b)) 1)))
```

That takes time *O*(*n*) as it repeatedly divides by *b*. A better version takes time *O*(log *n*) as it brackets the logarithm between two possible values then uses bisection to calculate the solution.
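A Python sketch of that bracket-and-bisect idea (not the Standard Prelude code; the name is mine, and it assumes *n* ≥ 1):

```python
def ilog_bisect(b, n):
    """Largest k with b**k <= n (n >= 1), via bracketing then bisection."""
    # Bracket: repeatedly square b**hi, doubling the exponent, until it reaches n.
    lo, b_lo, hi, b_hi = 0, 1, 1, b
    while b_hi < n:
        lo, b_lo = hi, b_hi
        hi, b_hi = hi * 2, b_hi * b_hi
    # Bisect: carry b_lo = b**lo so each midpoint power needs only a small expt.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        b_mid = b_lo * b ** (mid - lo)
        if n < b_mid:
            hi, b_hi = mid, b_mid
        elif b_mid < n:
            lo, b_lo = mid, b_mid
        else:
            return mid
    return hi if b_hi == n else lo
```

For example, `ilog_bisect(2, 1000)` is 9 and `ilog_bisect(2, 1024)` is 10.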

Your task is to write a program that calculates the integer logarithm of a number to a given base in time *O*(log *n*). When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.



My Haskell solution (see http://bonsaicode.wordpress.com/2010/05/07/programming-praxis-integer-logarithms/ for a version with comments):

The link in the last email seems not to work: http://standard-prelude/ though the alternate one did …

Python version.

Initially, I bracketed the answer by squaring the base (base, base^2, base^4, …) until it was greater than ‘n’, then did a binary search between the last two powers. I tried to optimize the routine by reusing previously calculated powers of the base. In the end I came up with the routine below, which is about twice as fast as the binary search.

Repeatedly square the base until it is larger than ‘n’, saving the calculated bases and exponents on a stack.

Then, while the stack is not empty, pop a base and exponent off the stack. If the base is <= 'n', divide 'n' by the base and add the exponent to the answer. When the stack is empty, return the answer.

```python
def ilog(n, base=10):
    # Bracket: square the base, saving each (power, exponent) pair on a stack.
    k = 1
    s = []
    while base <= n:
        s.append((base, k))
        base *= base
        k *= 2
    # Unwind: pop powers largest-first, dividing each one out of n.
    ans = 0
    while s:
        base, k = s.pop()
        if base <= n:
            ans += k
            n //= base
    return ans
```

Clever idea, but I’m not sure about your timings. I found that the cost of handling the base/exponent stack outweighed the algorithmic improvement. It would take a very large *n* to actually see an improvement.

Somehow, I messed up the formatting above.

I went back and checked my timings, and I was correct about the timings for the inputs I had tried (123456789, and 987654321). However, I tried different input and my routine was slower. So I ran a test using lots of randomly selected numbers and bases. For most cases my algorithm was slower.

```c
// http://gist.github.com/409723
#include <math.h>

int
ilog ( int b , int n )
{
  int lo = 0 , b_lo = 1 , hi = 1 , mid , b_mid ;
  long long b_hi = b ;
  while ( b_hi < n ) { hi *= 2 ; b_hi *= b_hi ; }
  while ( hi - lo > 1 )
    {
      mid = (lo + hi) / 2 ;
      b_mid = b_lo * (int) pow( b , mid - lo ) ;
      if ( n < b_mid ) { hi = mid ; b_hi = b_mid ; }
      if ( n > b_mid ) { lo = mid ; b_lo = b_mid ; }
      if ( n == b_mid ) return mid ;
    }
  return b_hi == n ? hi : lo ;
}
```

```racket
#! /bin/sh
#| Hey Emacs, this is -*-scheme-*- code!
exec mzscheme -l errortrace --require "$0" --main -- ${1+"$@"}
|#

;; http://programmingpraxis.com/2010/05/07/integer-logarithms/

#lang racket

(require rackunit
         rackunit/text-ui)

(define (find-zero too-small too-big func)
  (define (average a b)
    (quotient (+ a b) 2))
  (if (equal? too-big (add1 too-small))
      too-small
      (let* ([median (average too-small too-big)]
             [fm (func median)])
        (cond
          ((positive? fm)
           (find-zero too-small median func))
          ((zero? fm)
           median)
          (else
           (find-zero median too-big func))))))

;; Quickly find two numbers: one that is below the ilog of big-number,
;; and one that is above it.  We'll use these as the initial bounds of
;; our binary search.
(define (bracket-log base big-number)
  (let loop ([candidate 1]
             [probe base])
    (if (< big-number probe)
        (values (quotient candidate 2) candidate)
        (loop (* 2 candidate)
              (* probe probe)))))
```

```racket
;; For testing the above
(define (bl-pair base big-number)
  (call-with-values (lambda () (bracket-log base (expt base big-number)))
                    list))

(define-test-suite bracket-log-tests
  (check-equal? (bl-pair 10 200) '(128 256))
  (check-equal? (bl-pair 2 99) '(64 128))
  (check-equal? (bl-pair 33 17) '(16 32)))
```

```racket
(define (ilog b n)
  ;; Bracket the answer, then bisect; find-zero gets the sign of b^mid - n.
  (let-values ([(lo hi) (bracket-log b n)])
    (find-zero lo hi (lambda (mid) (- (expt b mid) n)))))

(define-test-suite ilog-tests
  (check-equal? (ilog 10 1000) 3)
  (check-equal? (ilog 2 1024) 10))

(define-test-suite random-tests
  (for ([trial (in-range 100)])
    (let* ([base (+ 2 (random 10))]
           [big-number (+ 1 (random 1000000))]
           [expected (inexact->exact
                      (truncate (/ (log big-number)
                                   (log base))))]
           [actual (ilog base big-number)])
      (check-equal? actual expected))))

(define-test-suite all-tests
  ilog-tests random-tests)

(define (main . args)
  (exit (run-tests all-tests 'verbose)))

(provide ilog main)
```

I failed to see why the original algorithm is of complexity O(n). It will execute (quotient n b) at most log(n) times, right? Even if we assume that (quotient n b) is not O(1), shouldn’t we say the algorithm is pseudo O(log(n))?

On the other hand, the loop in the solution that bisects the bracketed range will run log(n) times in the worst case, no?

Did I miss something?

Thanks,

You are correct that the original algorithm is O(log n). However, the revised algorithm is O(log log n) to bracket the range (it starts from b^hi = b and squares b^hi at each step) and O(log log n) to perform the bisection, for a total time of O(log log n). Here is a timing test showing the difference:

```scheme
> (define (ilog b n)
    (if (zero? n) -1
      (+ (ilog b (quotient n b)) 1)))
> (time (ilog 2 #e1e1000))
(time (ilog 2 ...))
    no collections
    25 ms elapsed cpu time
    49 ms elapsed real time
    756088 bytes allocated
3321
```

```scheme
> (define (ilog b n)
    (let loop1 ((lo 0) (b^lo 1) (hi 1) (b^hi b))
      (if (< b^hi n) (loop1 hi b^hi (* hi 2) (* b^hi b^hi))
        (let loop2 ((lo lo) (b^lo b^lo) (hi hi) (b^hi b^hi))
          (if (<= (- hi lo) 1) (if (= b^hi n) hi lo)
            (let* ((mid (quotient (+ lo hi) 2))
                   (b^mid (* b^lo (expt b (- mid lo)))))
              (cond ((< n b^mid) (loop2 lo b^lo mid b^mid))
                    ((< b^mid n) (loop2 mid b^mid hi b^hi))
                    (else mid))))))))
> (time (ilog 2 #e1e1000))
(time (ilog 2 ...))
    no collections
    0 ms elapsed cpu time
    0 ms elapsed real time
    6032 bytes allocated
3321
```

Thanks! You’re right. I missed the fact that the loop loop1 in the solution multiplies b^hi by itself every time.

Hi,

I’m new here… My F# solution. Nothing clever here, just a straightforward implementation of the algorithm.

```fsharp
let ilog_improved b n =
    let rec bracket (lo:float) (hi:float) =
        if (b ** hi < n) then bracket hi (hi * 2.0) else (lo, hi)
    let rec bisect (lo, hi) =
        if (hi - lo < 1.0) then
            if (b ** hi = n) then hi else lo
        else
            let mid = ((lo + hi) / 2.0)
            if (b ** mid) > n then
                bisect (lo, mid)
            else
                bisect (mid, hi)
    bisect (bracket 0.0 1.0)
```

If there is a max int size (e.g. C/C++ etc.) there’s no need for binary search. Just check whether the number has more than wordsize/2 bits, then wordsize/4, then wordsize/8, etc., for a base-2 log, adding to a counter the number of bits you check. (E.g. if a number is at least 2^16 (65536), add 16 to the log, divide the number by 65536, then check against 2^8 (256), etc.) Other logs can be scaled using the identity log_b(x) = log2(x) / log2(b). (Though if all floating point is disallowed, you need to know what bases to use in advance to have suitable ratios ready.) E.g. (in gforth, 64-bit integers):
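A rough Python rendering of the same chunked comparisons, assuming 64-bit values and *n* ≥ 1 (the function names and the 1233/4096 ≈ log10(2) scaling constant are from this sketch, not fixed by the text):

```python
def ulog2(n):
    """Floor of log2(n) for 1 <= n < 2**64, by testing ever-smaller chunks."""
    log = 0
    for shift in (32, 16, 8, 4, 2, 1):
        if n >= 1 << shift:  # the value needs more than `shift` bits
            n >>= shift      # divide out 2**shift ...
            log += shift     # ... and credit those bits to the logarithm
    return log

def ulog10(n):
    """Floor of log10(n), scaling the base-2 result by 1233/4096 ~ log10(2)."""
    guess = ((ulog2(n) + 1) * 1233) >> 12
    return guess - (10 ** guess > n)  # correct the estimate downward if needed
```

For example, `ulog2(65536)` is 16 and `ulog10(999)` is 2.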

```forth
create ilog4bit
-1 c,                                    \ log 0
 0 c,                                    \ log 1
 1 c, 1 c,                               \ log 2, log 3
 2 c, 2 c, 2 c, 2 c,                     \ log 4 ... log 7
 3 c, 3 c, 3 c, 3 c, 3 c, 3 c, 3 c, 3 c, \ log 8 ... log 15

: log-adjust ( n1 n2 n3 -- n1/2^n3 n2+n3 )
  swap over + >r rshift r> ;

: ulog ( n -- n )
  0
  over $FFFFFFFF u> negate 5 lshift log-adjust
  over $FFFF > negate 4 lshift log-adjust
  over $FF > negate 3 lshift log-adjust
  over $F > negate 2 lshift log-adjust
  swap ilog4bit + c@ + ;
```

```forth
\ -- base 10 logs

create pow10
1 ,
10 ,
100 ,
1000 ,
10000 ,
100000 ,
1000000 ,
10000000 ,
100000000 ,
1000000000 ,
10000000000 ,
100000000000 ,
1000000000000 ,
10000000000000 ,
100000000000000 ,
1000000000000000 ,
10000000000000000 ,
100000000000000000 ,
1000000000000000000 ,

: 10** ( n -- n ) \ only try up to 18!
  dup 18 u> 24 * throw
  cells pow10 + @ ;

: log10 ( n -- n )
  dup ulog 1+ 1233 * 12 rshift \ precise enough; can also use "8651 28738 */", good to 9 decimals
  dup 10** rot > + ;
```
