## Three Powering Algorithms

### March 3, 2015

The linear-time function just multiplies by the base as many times as the exponent requires:

```
(define (pow-linear b e)
  (if (zero? e) 1
      (* b (pow-linear b (- e 1)))))
```

The function is written recursively so that tracing it will show that it is linear:

```
> (trace pow-linear)
(pow-linear)
> (pow-linear 2 16)
|(pow-linear 2 16)
| (pow-linear 2 15)
| |(pow-linear 2 14)
| | (pow-linear 2 13)
| | |(pow-linear 2 12)
| | | (pow-linear 2 11)
| | | |(pow-linear 2 10)
| | | | (pow-linear 2 9)
| | | | |(pow-linear 2 8)
| | | | | (pow-linear 2 7)
| | | |[10](pow-linear 2 6)
| | | |[11](pow-linear 2 5)
| | | |[12](pow-linear 2 4)
| | | |[13](pow-linear 2 3)
| | | |[14](pow-linear 2 2)
| | | |[15](pow-linear 2 1)
| | | |[16](pow-linear 2 0)
| | | |[16]1
| | | |[15]2
| | | |[14]4
| | | |[13]8
| | | |[12]16
| | | |[11]32
| | | |[10]64
| | | | | 128
| | | | |256
| | | | 512
| | | |1024
| | | 2048
| | |4096
| | 8192
| |16384
| 32768
|65536
65536
```
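For contrast, the same computation can be carried out with an accumulator so that the recursive call is in tail position; a trace of this variant would stay at constant depth. This is a sketch of my own, not part of the original exercise:

```
; tail-recursive sketch (not from the original exercise):
; multiply acc by b, e times, carrying the product forward
(define (pow-linear-iter b e)
  (let loop ((e e) (acc 1))
    (if (zero? e)
        acc
        (loop (- e 1) (* acc b)))))
```

It still performs the same e multiplications, so it is still linear in time, but it runs in constant stack space.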

The logarithmic-time function squares the base at each step, multiplying the base into the final product at those steps where the exponent is odd (that is, where the corresponding bit of the exponent is set):

```
(define (pow-logarithmic b e)
  (cond ((zero? e) 1)
        ((even? e) (pow-logarithmic (* b b) (/ e 2)))
        (else (* b (pow-logarithmic (* b b) (/ (- e 1) 2))))))
```


```
> (trace pow-logarithmic)
(pow-logarithmic)
> (pow-logarithmic 2 16)
|(pow-logarithmic 2 16)
|(pow-logarithmic 4 8)
|(pow-logarithmic 16 4)
|(pow-logarithmic 256 2)
|(pow-logarithmic 65536 1)
| (pow-logarithmic 4294967296 0)
| 1
|65536
65536
```
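The same square-and-multiply idea can be written with an explicit accumulator that collects a factor of the base whenever the exponent is odd. Again, this is a sketch of my own, not part of the original exercise:

```
; tail-recursive square-and-multiply sketch (my own variant):
; acc collects the factors of b for the set bits of e
(define (pow-logarithmic-iter b e)
  (let loop ((b b) (e e) (acc 1))
    (cond ((zero? e) acc)
          ((even? e) (loop (* b b) (/ e 2) acc))
          (else (loop (* b b) (/ (- e 1) 2) (* acc b))))))
```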

The constant-time function uses logarithms, and is subject to floating-point error:

```
(define (pow-constant b e)
  (exp (* (log b) e)))
```


```
> (trace pow-constant)
(pow-constant)
> (pow-constant 2 16)
|(pow-constant 2 16)
|65535.99999999998
65535.99999999998
```

You can run the program at http://ideone.com/UUEp0c.
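When both arguments are integers, one way to paper over the floating-point error (my suggestion, not part of the original exercise) is to round the result back to an exact integer; this is only trustworthy while the true result is small enough for a double to distinguish adjacent integers, roughly below 2^53:

```
; rounding sketch (not from the original exercise): round the
; inexact result to the nearest exact integer; reliable only
; while the true answer is below about 2^53
(define (pow-constant-int b e)
  (inexact->exact (round (exp (* (log b) e)))))
```

With that, `(pow-constant-int 2 16)` yields 65536 exactly.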


I asked a question on Stack Overflow about this O(1) solution, and the author of this post answered it. I thought it would be a good idea to also post the answer here for anyone who may have the same question.

Question:

I am trying to understand how the power operation is implemented with time complexity O(1) in the following post.

https://programmingpraxis.com/2015/03/03/three-powering-algorithms/2/

Can anyone explain why this algorithm works? And why it is O(1)?

Answer:

First, it works because that’s the defining property of logarithms. To multiply two numbers, take their logarithms, add the logarithms together, then take the anti-logarithm of the sum; in programming terms, x × y = exp(log(x) + log(y)). The powering operation takes the logarithm of the base, multiplies that logarithm by the exponent, then takes the anti-logarithm of the product; in programming terms, x^y = exp(log(x) × y).
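As a worked instance of that identity for the example above, assuming IEEE double floats, the intermediate values look something like this:

```
> (log 2)
0.6931471805599453
> (* (log 2) 16)
11.090354888959125
> (exp (* (log 2) 16))
65535.99999999998
```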

Second, it is O(1) only if you cheat. There is only a constant number of operations involving the exponent, so the operation is constant-time only with respect to the exponent. But arithmetic takes time; in fact, arithmetic on a number of magnitude n takes time proportional to log(n), its number of bits, but we ignore that. As a practical matter, computers only provide logarithms up to some limit, which is usually 3.4 × 10^38 for single-precision floats and 1.8 × 10^308 for double-precision floats. Within that range, the functions used internally are close enough to constant-time that we say taking logarithms and exponentials is O(1). If you use an arbitrary-precision library such as GMP, with floats that are very much larger, it is no longer correct to say that the arithmetic is done in constant time. So I guess it depends on how precise you want to be.
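That limit is easy to demonstrate: with doubles, exp overflows once its argument exceeds about 709.8, since e^709.8 ≈ 1.8 × 10^308. So in a typical Scheme with IEEE doubles, something like this happens:

```
; (* (log 2) 2000) is about 1386.3, far beyond the ~709.8 cutoff
; at which e^x exceeds the largest double, so exp overflows
> (pow-constant 2 2000)
+inf.0
```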

P.S.

Thanks very much for the post. It is very helpful.