Brent’s Root Finder

March 28, 2014

[ Today’s exercise was written by guest author Paul Hofstra, who holds a PhD in Theoretical Nuclear Physics from the Free University in Amsterdam and is now retired from a major oil company. Guest authors are always welcome; contact me at the link in the menu bar if you are interested in writing for Programming Praxis. ]

We discussed various algorithms for finding the roots of a function in two previous exercises. In the first of those exercises, we looked at the bisection method, which is “slow and steady” as it always finds a solution, and the secant method, which is sometimes very fast but sometimes very slow. In the second, we looked at an algorithm of Theodorus Dekker that combines the bisection and secant methods in a way that retains the speed of the secant method where possible and the steadiness of the bisection method where necessary. Later, Richard Brent (the same Brent who improved Pollard’s rho algorithm for finding factors) observed that there are still some functions for which Dekker’s algorithm performs poorly, and he developed the method that is most commonly used today.

Brent augmented the linear interpolation of the secant method with an inverse quadratic interpolation that tends to approach the root more quickly than the secant method, and he added conditions that force a bisection step at least every three steps, so his method can never take more than about three times as many iterations as the bisection method, and is usually much faster. Like Dekker, Brent calculates a new point using both the secant and bisection methods, but instead of simply accepting the secant point when it is better, he calculates a third candidate point by quadratic interpolation and takes that instead if it is better than the secant point. You can read Brent’s description of his algorithm at http://maths-people.anu.edu.au/~brent/pd/rpb005.pdf.
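For reference, here is a minimal Python sketch along the lines of the commonly published formulation of Brent's method; the function name, tolerance, and iteration cap are our own choices, and production code should prefer a library routine such as scipy.optimize.brentq:

    def brent(f, a, b, tol=1e-12, max_iter=100):
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("root not bracketed")
        if abs(fa) < abs(fb):
            a, b, fa, fb = b, a, fb, fa    # keep the better point in b
        c, fc = a, fa                      # c is the previous value of b
        d = c                              # d is the value of c one step earlier
        mflag = True                       # True if the last step was a bisection
        for _ in range(max_iter):
            if fb == 0 or abs(b - a) < tol:
                return b
            if fa != fc and fb != fc:      # three distinct values: inverse quadratic
                s = (a * fb * fc / ((fa - fb) * (fa - fc))
                     + b * fa * fc / ((fb - fa) * (fb - fc))
                     + c * fa * fb / ((fc - fa) * (fc - fb)))
            else:                          # otherwise fall back to the secant step
                s = b - fb * (b - a) / (fb - fa)
            # force a bisection step if the interpolated point is unacceptable
            if (not ((3 * a + b) / 4 < s < b or b < s < (3 * a + b) / 4)
                    or (mflag and abs(s - b) >= abs(b - c) / 2)
                    or (not mflag and abs(s - b) >= abs(c - d) / 2)
                    or (mflag and abs(b - c) < tol)
                    or (not mflag and abs(c - d) < tol)):
                s, mflag = (a + b) / 2, True
            else:
                mflag = False
            fs = f(s)
            d, c, fc = c, b, fb
            if fa * fs < 0:
                b, fb = s, fs
            else:
                a, fa = s, fs
            if abs(fa) < abs(fb):
                a, b, fa, fb = b, a, fb, fa
        return b

    # brent(lambda x: x**3 - 2*x - 5, 2, 3)  =>  2.0945514815...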

Your task is to implement Brent’s root finder. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Factoring RSA Moduli

March 24, 2014

We studied RSA encryption in a previous exercise. To recap: select two prime numbers p and q and make their product n = p q the modulus of the crypto system. Choose an encryption key e and a decryption key d such that d e ≡ 1 (mod φ(p q)), where the totient φ(p q) = (p − 1) × (q − 1). Then encrypt a message m as c = m^e (mod n) and decrypt the ciphertext c as m = c^d (mod n).

In our implementation we chose p and q randomly and kept them secret, even from the user who knows the decryption key. Of course, it is hard for someone who does not know d to factor n and thus compute d; that is the basis of the encryption. But it is fairly easy to recover p and q if you know n, e and d, as Boneh showed (page 205, left column, Fact 1):

  1. [Initialize] Set k ← d e − 1.
  2. [Try a random g] Choose g at random from {2, …, n − 1} and set t ← k.
  3. [Next t] If t is divisible by 2, set t ← t / 2 and x ← g^t (mod n). Otherwise go to Step 2.
  4. [Finished?] If x > 1 and y = gcd(x − 1, n) > 1, then set p ← y and q ← n / y, output p and q, and terminate the algorithm. Otherwise go to Step 3.
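A direct transcription of those four steps into Python might look like this (a sketch; the function name is ours):

    import math, random

    def factor_rsa(n, e, d):
        k = d * e - 1                      # Step 1: k is a multiple of phi(n)
        while True:
            g = random.randrange(2, n)     # Step 2: try a random g
            t = k
            while t % 2 == 0:              # Step 3: halve t while it stays even
                t //= 2
                x = pow(g, t, n)
                if x > 1:                  # Step 4: check for a nontrivial factor
                    y = math.gcd(x - 1, n)
                    if y > 1:
                        return y, n // y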

For instance, given n = 25777, e = 3 and d = 16971, we calculate p = 149 and q = 173 when g = 5, noting that 149 × 173 = 25777; on average, no more than two choices of g are needed to complete the algorithm. Knowledge of p and q, though not necessary, is useful, because a more efficient decryption is possible using p, q and the Chinese remainder theorem, which does arithmetic with numbers the size of p and q instead of the size of their product.
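For example, here is a sketch of CRT decryption (the helper name is ours; pow(q, -1, p) for the modular inverse requires Python 3.8 or later):

    def decrypt_crt(c, d, p, q):
        dp, dq = d % (p - 1), d % (q - 1)  # exponents reduced mod p-1 and q-1
        mp = pow(c, dp, p)                 # m mod p
        mq = pow(c, dq, q)                 # m mod q
        qinv = pow(q, -1, p)               # q^(-1) mod p
        h = (qinv * (mp - mq)) % p         # Garner's recombination
        return mq + h * q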

Your task is to write a program that factors RSA moduli. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Root Finding, Again

March 21, 2014

[ Today’s exercise was written by guest author Paul Hofstra, who holds a PhD in Theoretical Nuclear Physics from the Free University in Amsterdam and is now retired from a major oil company. Guest authors are always welcome; contact me at the link in the menu bar if you are interested in writing for Programming Praxis. ]

We studied root finding by the bisection and regula falsi methods in a previous exercise. Both methods have linear convergence, meaning that the error is reduced by a constant factor at each iteration; for example, for bisection the error is halved at each iteration. Of the two methods, regula falsi is sometimes fast and sometimes slow, so the more consistent bisection method is generally preferred. In today’s exercise we look at two other methods, the secant method and Dekker’s method, both of which have super-linear convergence.

The secant method is similar to regula falsi, but whereas the regula falsi method holds one endpoint constant, the secant method can move either endpoint, using the last two iterates to calculate the new point. That means the secant method can wander off outside the original interval in its attempt to find a root.
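Here is a minimal sketch of the secant method in Python (the function name, tolerance, and iteration cap are ours):

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == 0 or abs(x1 - x0) < tol:
                return x1
            if f1 == f0:
                raise ZeroDivisionError("secant line is horizontal")
            x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)  # move to the secant crossing
            f0, f1 = f1, f(x1)
        raise ArithmeticError("did not converge")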

Theodorus Dekker’s 1969 method combines the root-bracketing property of the bisection method with the super-linear convergence of the secant method. Dekker uses three points: a and b, which bracket the root (so f(a) and f(b) have opposite signs), and c, which tracks the previous value of b. Every iteration does the following:

  1. If f(b) is not the smallest of the three function values in absolute value, swap a and b and make c equal to a.
  2. Calculate the bisection step x = (a − b) / 2.
  3. If f(b) is zero or |x| is less than the desired tolerance, return b.
  4. Calculate the secant step s, making sure not to divide by zero.
  5. Set c to the current value of b. If s is between 0 and x, set b = b + s; otherwise set b = b + x.
  6. If f at the new b has the opposite sign from f at the old b (now saved in c), set a to c.

Step 5 contains the main logic. If the secant point lies between b and the midpoint (a + b) / 2, take the secant step; otherwise take the bisection step. Dekker’s method is simple, fast, and powerful, and works on any continuous function.
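Here is a sketch of those six steps in Python (the function name, tolerance, and iteration cap are ours):

    def dekker(f, a, b, tol=1e-12, max_iter=1000):
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("root not bracketed")
        c, fc = a, fa
        for _ in range(max_iter):
            if abs(fa) < abs(fb):           # step 1: keep the best point in b
                a, b, fa, fb = b, a, fb, fa
                c, fc = a, fa
            x = (a - b) / 2                 # step 2: bisection step
            if fb == 0 or abs(x) < tol:     # step 3: convergence test
                return b
            if fb != fc:                    # step 4: secant step, avoiding division by zero
                s = -fb * (b - c) / (fb - fc)
            else:
                s = x
            c, fc = b, fb                   # step 5: secant step only if it stays in the bracket
            b = b + s if min(0, x) < s < max(0, x) else b + x
            fb = f(b)
            if fb * fc < 0:                 # step 6: re-establish the bracket
                a, fa = c, fc
        return b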

Your task is to write programs that implement the secant and Dekker root finders. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Lucky Numbers

March 18, 2014

We calculated Josephus numbers in a previous exercise. In today’s exercise, we study Lucky Numbers, which are those positive integers that survive the Sieve of Josephus. It works like this:

Start with the numbers 1 through n, where n is the desired limit of the sieve; we’ll illustrate with n = 20: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20. At the first step, remove every second number from the sieve, starting from the 0’th: this leaves 1, 3, 5, 7, 9, 11, 13, 15, 17, 19. At the second step, the second number in the list is 3, so remove every third number from the sieve, starting from the 0’th: this leaves 1, 3, 7, 9, 13, 15, 19. At the third step, the third number in the list is 7, so remove every seventh number from the sieve, starting from the 0’th: this leaves 1, 3, 7, 9, 13, 15. And so on. The lucky numbers are listed at A000959: 1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99, 105, 111, 115, 127, 129, 133, 135, 141, 151, 159, 163, 169, 171, 189, 193, 195, 201, 205, 211, 219, 223, 231, 235, 237, 241, 259, 261, 267, 273, 283, 285, 289, 297, 303, …. Lucky numbers share many properties with prime numbers, including the same asymptotic behavior as the primes: any given number n is lucky with probability 1 / loge n, the same as any given number n is prime with probability 1 / loge n.
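The sieve translates directly into Python; here is a minimal sketch (the function name is ours) that reproduces the n = 20 example above:

    def lucky(n):
        survivors = list(range(1, n, 2))   # first step: remove every second number
        i = 1                              # position of the next sieving number
        while i < len(survivors) and survivors[i] <= len(survivors):
            step = survivors[i]            # 3, then 7, then 9, ...
            del survivors[step - 1::step]  # delete every step'th survivor
            i += 1
        return survivors

    # lucky(20)  =>  [1, 3, 7, 9, 13, 15]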

Your task is to write a program that lists the lucky numbers less than n. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Root Finding

March 14, 2014

Finding the roots of an equation is an important operation in mathematics, and there are many algorithms for finding roots numerically. We will study two of those algorithms in today’s exercise, and perhaps we will look at other algorithms in future exercises.

Let’s begin with the rules of the game. We are given a univariate function f(x) and want to find one or more values of x such that f(x) = 0. The function could be a polynomial, or could use trigonometric or exponential functions, or integrals, or any other mathematical operation.

The bisection algorithm starts with two points lo and hi that bracket the root; thus, one of f(lo) and f(hi) is positive and the other is negative. The algorithm is iterative: at each step, the midpoint mid = (lo + hi) / 2 is found, the function is evaluated there, and mid replaces whichever of lo and hi has a function value of the same sign as f(mid). Iteration stops when f(mid) = 0 or the difference between lo and hi is sufficiently small.
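A minimal sketch in Python, assuming lo < hi and a valid bracket (the function name and tolerance are ours):

    def bisect(f, lo, hi, tol=1e-12):
        flo = f(lo)
        if flo * f(hi) > 0:
            raise ValueError("root not bracketed")
        while hi - lo > tol:
            mid = (lo + hi) / 2
            fmid = f(mid)
            if fmid == 0:
                return mid
            if (fmid > 0) == (flo > 0):    # f(mid) has the same sign as f(lo)
                lo, flo = mid, fmid
            else:
                hi = mid
        return (lo + hi) / 2

    # bisect(lambda x: x * x - 2, 1, 2)  =>  1.414213562...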

The regula falsi method is similar, but instead of splitting the interval at its midpoint, it splits at the point where the line connecting (lo, f(lo)) and (hi, f(hi)) crosses the x-axis. The method dates to the ancient Egyptians and Babylonians.
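The same skeleton works for regula falsi; only the splitting point changes. A sketch (again, names and defaults are ours); note that because one endpoint may never move, the interval width is not a reliable stopping test, so we stop when successive estimates are close:

    def regula_falsi(f, lo, hi, tol=1e-12, max_iter=1000):
        flo, fhi = f(lo), f(hi)
        if flo * fhi > 0:
            raise ValueError("root not bracketed")
        x = lo
        for _ in range(max_iter):
            prev = x
            x = hi - fhi * (hi - lo) / (fhi - flo)   # where the line crosses the axis
            fx = f(x)
            if fx == 0 or abs(x - prev) < tol:
                return x
            if (fx > 0) == (flo > 0):
                lo, flo = x, fx
            else:
                hi, fhi = x, fx
        return x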

Both methods work only when the function f is continuous, with no discontinuities between lo and hi.

Your task is to write functions that find roots by the bisection and regula falsi methods. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Caesar Cipher

March 11, 2014

I was asked about a caesar cipher the other day, and suggested that my questioner search for caesar cipher here at Programming Praxis. To my surprise, I discovered that we have never done a caesar cipher (we did the ROT13 cipher, but not a general-purpose caesar cipher), so I apologized to my friend and wrote today’s exercise.

A caesar cipher, named after Julius Caesar, who either invented the cipher or was an early user of it, is a simple substitution cipher in which each letter is replaced by the letter a fixed distance along the alphabet, wrapping around at the end; children’s magic decoder rings implement a caesar cipher. Non-alphabetic characters are passed unchanged. For instance, the plaintext PROGRAMMINGPRAXIS is rendered as the ciphertext SURJUDPPLQJSUDALV with a shift of 3 positions.
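In Python, enciphering and deciphering are the same operation with opposite shifts; here is a minimal sketch (the function name is ours):

    def caesar(text, shift):
        out = []
        for ch in text:
            if 'A' <= ch <= 'Z':
                out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            elif 'a' <= ch <= 'z':
                out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
            else:
                out.append(ch)             # non-alphabetic characters pass unchanged
        return ''.join(out)

    # caesar("PROGRAMMINGPRAXIS", 3)   =>  "SURJUDPPLQJSUDALV"
    # caesar("SURJUDPPLQJSUDALV", -3)  =>  "PROGRAMMINGPRAXIS"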

Your task is to write functions that encipher and decipher text using a caesar cipher. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


We have seen the n−1 primality prover in one previous exercise and the n+1 primality prover in another previous exercise. In today’s exercise we combine the two tests. There are two versions, one using a factoring bound and one without. The basic idea of both provers is to express n − 1 = f1 * r1 and n + 1 = f2 * r2 with r1 and r2 odd, gcd(f1, r1) = gcd(f2, r2) = 1 and the prime factorizations of f1 and f2 known, then perform various tests once the partial factorizations are sufficient. This is often called the BLS primality prover because the 1975 paper by John Brillhart, Derrick Lehmer and John Selfridge proves all the necessary theorems.

For the unbounded version of the test, factor n−1 and n+1 by any convenient method until max(f1*f1*f2/2, f1*f2*f2/2) > n. Then n is prime if all the factors of f1 and f2 are prime and the following conditions hold:

Condition 1: For each prime p dividing f1 there is an integer a such that a^(n−1) = 1 (mod n) and gcd(a^((n−1)/p)−1, n) = 1. A different a may be used for each prime p.
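Condition 1 translates almost directly into Python. A minimal sketch (the helper names are ours):

    from math import gcd

    def condition1(n, p, a):
        # a^(n-1) = 1 (mod n) and gcd(a^((n-1)/p) - 1, n) = 1
        return (pow(a, n - 1, n) == 1 and
                gcd(pow(a, (n - 1) // p, n) - 1, n) == 1)

    def find_witness(n, p, limit=100):
        # search small a; a different a may be used for each prime p
        for a in range(2, limit):
            if condition1(n, p, a):
                return a
        return None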

Condition 2: For each prime p dividing f2 there is a Lucas sequence U(P, Q) with discriminant D = P^2 − 4Q ≠ 0 and Jacobi symbol (D/n) = −1 such that U(n+1) = 0 (mod n) and gcd(U((n+1)/p), n) = 1. The same discriminant D must be used for each prime p, but P and Q may vary. Given P and Q, a new P′ and Q′ with the same D can be calculated as P′ = P + 2 and Q′ = P + Q + 1.
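The Lucas sequence values U(k) mod n can be computed in O(log k) steps by binary doubling. Here is a sketch, assuming n is odd (so 2 has an inverse mod n); the function name is ours, and the doubling and increment formulas are the standard ones for Lucas sequences:

    def lucas_u(P, Q, k, n):
        # U(k) of the Lucas sequence with parameters P, Q, reduced mod odd n
        if k == 0:
            return 0
        D = P * P - 4 * Q               # the discriminant
        inv2 = (n + 1) // 2             # inverse of 2 modulo odd n
        u, v, qk = 1 % n, P % n, Q % n  # U(1), V(1), Q^1
        for bit in bin(k)[3:]:          # remaining bits of k, most significant first
            # doubling: U(2m) = U(m)V(m), V(2m) = V(m)^2 - 2Q^m
            u, v, qk = (u * v) % n, (v * v - 2 * qk) % n, (qk * qk) % n
            if bit == '1':
                # increment: U(m+1) = (P U(m) + V(m))/2, V(m+1) = (D U(m) + P V(m))/2
                u, v = ((P * u + v) * inv2) % n, ((D * u + P * v) * inv2) % n
                qk = (qk * Q) % n
        return u

Condition 2 for a prime p then reads: lucas_u(P, Q, n + 1, n) == 0 and gcd(lucas_u(P, Q, (n + 1) // p, n), n) == 1.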

The bounded version is similar. Factor n−1 and n+1 by trial division until max(b*f1+1, b*f2−1) * (b*b*f1*f2/2 + 1) > n, where b is the smallest number that is known to be a factor of neither r1 nor r2; if you’re clever, you can perform the two trial divisions simultaneously, looking for remainders of either 0 or 2 when dividing n+1 by the current prime. Then n is prime if Condition 1 and Condition 2 and the following additional conditions hold:

Condition 3: There is an integer a such that a^(n−1) = 1 (mod n) and gcd(a^((n−1)/r1) − 1, n) = 1.

Condition 4: There is a Lucas sequence U(P, Q) with the same discriminant D as in Condition 2 above such that U(n+1) = 0 (mod n) and gcd(U((n+1)/r2), n) = 1.

In practice, it is common to write a single program that includes both versions. First perform trial division up to some convenient limit, using the bounded formula to determine if you have enough factors. If that is unsuccessful, use some other method (Pollard rho or elliptic curves work well, because they are good at finding small factors of large numbers) to find more factors, recursively prove any such factors prime, and continue until you have enough.

Your task is to implement the BLS primality prover. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Learn A New Language

March 4, 2014

In today’s exercise you are challenged to write a program in a language you’ve never used before. We’ve done this before, and it’s fun; the idea is to get you out of your comfort zone, so you’re thinking about programming, not just blindly following habit.

Your task is to write a program in a language you’ve never used before. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
