Three Powering Algorithms

March 3, 2015

In mathematics, the powering operation multiplies a number by itself a given number of times. For instance, pow(2,3) computes 2 × 2 × 2 = 8.

Your task is to write three functions that implement the powering operation, with time complexities O(n), O(log n) and O(1) in the exponent. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
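
The O(log n) bound comes from repeated squaring: since x^(2k) = (x^k)^2, the exponent can be halved at every step. As a rough illustration of that idea, here is a minimal sketch in Haskell (not the suggested solution; it assumes a non-negative integer exponent):

    -- Powering by repeated squaring: the exponent halves at each
    -- even step, so only O(log n) multiplications are performed.
    -- Assumes n >= 0.
    power :: Integer -> Integer -> Integer
    power _ 0 = 1
    power x n
        | even n    = let h = power x (n `div` 2) in h * h
        | otherwise = x * power x (n - 1)

    main :: IO ()
    main = print (power 2 10)   -- prints 1024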

2 Responses to “Three Powering Algorithms”

  1. Graham said

    In Haskell:

    module Main where
    import Data.Word
    
    -- naive
    linear :: (Num a) => a -> Word -> a
    linear x n = product $ replicate (fromIntegral n) x
    
    -- tail recursive square & multiply
    logarithmic :: (Num a) => a -> Word -> a
    logarithmic x n = go x 1 n
        where
            go a b 0 = b
            go a b k = go (a * a) (if odd k then a * b else b) (k `div` 2)
    
    -- cheating, with errors; uses Haskell like a slide rule
    constant :: (Floating a) => a -> Word -> a
    constant x n = exp $ (log x) * (fromIntegral n)
    
    main :: IO ()
    main = mapM_ print $ map (\f -> map (f 2) [0..10]) [linear, logarithmic, constant]
    
  2. Den said

    I asked a question on Stack Overflow about this O(1) solution, and the author of this post answered it. I thought it would be a good idea to also post the answer here for anyone who may have the same question.

    Question:
    I am trying to understand the implementation of the powering operation with time complexity O(1) in the following post.

    https://programmingpraxis.com/2015/03/03/three-powering-algorithms/2/

    Can anyone explain why this algorithm works? And why it is O(1)?

    Answer:
    First, it works because of the basic properties of logarithms. To multiply two numbers, take their logarithms, add the logarithms together, then take the anti-logarithm of the sum; in programming terms: x × y = exp(log(x) + log(y)). The powering operation takes the logarithm of the base, multiplies the logarithm by the exponent, then takes the anti-logarithm of the product; in programming terms: x^y = exp(log(x) × y).
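
    For example, the identity can be checked numerically in Haskell (the result is approximate, since log and exp round to double precision):

    -- x^y = exp (log x * y); with x = 2 and y = 10 this prints
    -- a value very close to 1024 (= 2^10), up to rounding
    main :: IO ()
    main = print (exp (log 2 * 10) :: Double)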

    Second, it is O(1) only if you cheat. There is only a single (constant) operation involving the exponent, so the operation is constant-time only with respect to the exponent. But arithmetic itself takes time: arithmetic on a number n takes time proportional to log n, the number of bits in n, though we usually ignore that. As a practical matter, computers only provide logarithms up to some limit, usually 3.4 × 10^38 for single-precision floats and 1.8 × 10^308 for double-precision floats. Within that range, the functions used internally are close enough to constant-time that we say taking logarithms and exponentials is O(1). If you use an arbitrary-precision library such as GMP, with floats that are very much larger, it is no longer correct to say that the arithmetic is done in constant time. So I guess it depends on how precise you want to be.
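
    To see the cheating concretely, compare the exact integer power with the floating-point shortcut for a larger exponent (a sketch in Haskell; the exact digits printed depend on the floating-point library):

    -- exact integer powering vs. the floating-point shortcut
    main :: IO ()
    main = do
        print (2 ^ 64 :: Integer)           -- 18446744073709551616, exact
        print (exp (log 2 * 64) :: Double)  -- roughly 1.8446744073709552e19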

    P.S.
    Thanks very much for the post. It is very helpful.
