## N+1 Primality Prover

### February 7, 2014

In the previous exercise we used a Lucas sequence to determine if a number is probably prime. In today’s exercise we use a Lucas sequence to prove the primality of a number N, given the factors of N + 1. Today’s algorithm is similar to a previous exercise in which we proved the primality of a number N using the factors of N − 1.

Choose P and Q such that D = P² − 4Q is not a square modulo N. Let N + 1 = F · R with F > R, where R is odd and the prime factorization of F is known. If there exists a Lucas sequence of discriminant D with U(N+1) ≡ 0 (mod N) and gcd(U((N+1)/q), N) = 1 for each prime q dividing F, then N is prime; if no such sequence exists for a given P and Q, a new P′ and Q′ with the same D can be computed as P′ = P + 2 and Q′ = P + Q + 1 (the same D must be used for all the factors q).
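The test above can be sketched in Python. Here `lucas_u` is a self-contained helper that computes U(k) mod n by the standard doubling identities (n must be odd), and the retry bound of 20 is an arbitrary choice, not part of the theorem; the sketch also demands a single (P,Q) that works for every q, which is sufficient though stricter than necessary:

```python
from math import gcd

def lucas_u(P, Q, k, n):
    """U(k) of the Lucas sequence with parameters P, Q, modulo odd n."""
    D = (P * P - 4 * Q) % n
    half = lambda x: ((x + n) >> 1) % n if x & 1 else (x >> 1) % n
    U, V = 0, 2 % n                     # accumulators, starting at index 0
    u, v, q = 1 % n, P % n, Q % n       # U, V and Q**e at e = 1, 2, 4, ...
    while k:
        if k & 1:                       # fold this power of two into the index
            U, V = half((U * v + u * V) % n), half((V * v + D * U * u) % n)
        k >>= 1
        if k:                           # doubling identities
            u, v, q = (u * v) % n, (v * v - 2 * q) % n, (q * q) % n
    return U

def n_plus_1_prove(n, P, Q, qs):
    """Try to prove odd n prime, given the distinct primes qs dividing a
    fully factored divisor F of n+1; D = P*P - 4*Q must be a non-square
    mod n, and the same D is kept through every retry."""
    for _ in range(20):                 # arbitrary retry bound
        if lucas_u(P, Q, n + 1, n) == 0 and \
           all(gcd(lucas_u(P, Q, (n + 1) // q, n), n) == 1 for q in qs):
            return True
        P, Q = P + 2, P + Q + 1         # new parameters, same discriminant D
    return False
```

For example, `n_plus_1_prove(13, 1, -1, [2, 7])` proves 13 prime after two parameter changes, since 14 = 2 · 7 is fully factored and D = 5 is a non-square mod 13.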

Your task is to write a program that proves the primality of a number. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Lucas Pseudoprimes

### February 4, 2014

Since we now have a function to compute Lucas sequences, we can take another look at testing primality using Lucas pseudoprimes.

The standard version of the Lucas pseudoprime test chooses a suitable P and Q, computes the Lucas sequence element U(n+1)(P,Q), and declares n either prime or a Lucas pseudoprime if U(n+1)(P,Q) ≡ 0 (mod n). The strong version of the Lucas pseudoprime test writes n + 1 = d · 2ˢ with d odd, and declares n either prime or a Lucas pseudoprime if U(d)(P,Q) ≡ 0 (mod n) or if V(d·2ʳ)(P,Q) ≡ 0 (mod n) for any r with 0 ≤ r < s. John Selfridge suggested a suitable method for choosing P and Q: choose D as the first number in the sequence 5, −7, 9, −11, 13, −15, 17, … for which the Jacobi symbol (D / n) = −1, then set P = 1 and Q = (1 − D) / 4.
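Selfridge's parameter search is easy to sketch in Python. The `jacobi` helper is a textbook Jacobi-symbol implementation, and the perfect-square check is an added safeguard (without it the loop would never terminate for square n, since a square never has Jacobi symbol −1):

```python
from math import isqrt

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:              # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def selfridge(n):
    """First D in 5, -7, 9, -11, ... with (D/n) = -1; returns (P, Q),
    or None if n is a perfect square (no such D exists)."""
    if isqrt(n) ** 2 == n:
        return None
    D = 5
    while jacobi(D, n) != -1:
        D = -D - 2 if D > 0 else -D + 2
    return 1, (1 - D) // 4
```

With n = 11, for example, the search passes D = 5, −7, 9 and −11 before settling on D = 13, giving (P, Q) = (1, −3).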

Robert Baillie and Sam Wagstaff developed a pseudoprimality test based on Lucas pseudoprimes. Their procedure has four steps: 1) if n is divisible by any prime less than 1000, then n is composite; 2) if n is not a strong probable prime to base 2, then n is composite; 3) choose parameters P and Q by Selfridge's method, declaring n composite if it is a perfect square (in which case no D with (D / n) = −1 exists); 4) if n is not a Lucas probable prime with parameters P and Q, using either the standard or strong version of the Lucas test, then n is composite. Otherwise, n is almost certainly prime. The procedure can fail only by asserting that a composite number is prime; it will never report that a prime number is composite.

Your task is to write functions to determine if n is a probable prime using the standard and strong Lucas pseudoprime test. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Reservoir Sampling

### January 31, 2014

If you want to choose a sample of k out of n items, and the n items are stored in an array, choosing the sample is easy: just generate k random numbers from 0 to n−1, without duplicates, and look up the associated array items. But that’s harder to do if the items are stored in a list, because indexing into a list is O(n) instead of O(1), and it’s impossible if the list is too large to fit in memory. The Standard Prelude includes a function that works when k = 1, but in the general case we must use a streaming algorithm called reservoir sampling.

The algorithm creates an array of length k, the reservoir, and initializes it with the first k elements of the input list. Then the remaining n − k elements of the list are each considered, one by one, in order. When the ith element of the input list is reached (counting from zero), a random integer j with 0 ≤ j ≤ i is generated; if j < k, the jth element of the reservoir array is replaced by the ith element of the list. When all elements of the list have been considered, the items that remain in the reservoir array are returned as the k randomly-selected elements of the input list.
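A minimal Python sketch of the algorithm (known in the literature as Algorithm R), using zero-based indexing:

```python
import random

def reservoir_sample(items, k):
    """Choose k items uniformly from an iterable of unknown length."""
    reservoir = []
    for i, item in enumerate(items):
        if i < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = random.randrange(i + 1)   # random j with 0 <= j <= i
            if j < k:
                reservoir[j] = item       # replace a random slot
    return reservoir
```

Because it never indexes into the input, this works on any iterable: a generator, a file, a stream too large for memory.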

Your task is to write a program that performs reservoir sampling. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Hart’s One-Line Factoring Algorithm

### January 28, 2014

William Hart wrote, in one line of PARI, a function that factors integers. His method, which he gives in the form of a program, is a variant of Fermat’s method:

ONELINEFACTOR(n, iter)
    for i ← 1 to iter do
        s ← ⌈√(n · i)⌉
        m ← s² (mod n)
        if ISSQUARE(m) then
            t ← √m
            return GCD(n, s − t)
        endif
    endfor

Hart’s paper, which you should read, gives several optimizations. The algorithm works best when the two factors are near each other, so you should first perform trial division to n^(1/3), thus ensuring that there is a factor between n^(1/3) and n^(1/2), unless n is prime. The algorithm works well for semiprimes up to 10^16 or 10^18, and Hart claims it is faster than SQUFOF in that range.
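Here is a straightforward Python rendering of the pseudocode, as a sketch without Hart's optimizations or the preliminary trial division:

```python
from math import gcd, isqrt

def hart(n, iters=100000):
    """Hart's one-line factoring method; returns a factor of n or None."""
    for i in range(1, iters + 1):
        s = isqrt(n * i)
        if s * s < n * i:
            s += 1                     # ceiling of sqrt(n * i)
        m = (s * s) % n
        t = isqrt(m)
        if t * t == m:                 # m is a perfect square
            return gcd(n, s - t)
    return None
```

For example, with n = 10403 = 101 · 103 the very first iteration succeeds: s = 102, m = 1, t = 1, and gcd(10403, 101) = 101.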

Your task is to write a function that factors integers by Hart’s method. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Factoring Factorials

### January 24, 2014

Today’s exercise sounds, from the title, like another exercise involving prime numbers and integer factorization, and it is, but it’s really a puzzle from the realm of recreational mathematics: Given a positive integer n, compute the prime factorization, including multiplicities, of n! = 1 · 2 · … · n. You should be able to handle very large n, which means that you should not compute the factorial before computing the factors, as the intermediate result will be extremely large.

Your task is to write the function described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Lucas Sequences, Revisited

### January 21, 2014

Lucas sequences are important both to the study of prime numbers and in cryptography, and we’ve used Lucas sequences in several previous exercises, most recently just a few months ago; unfortunately, some of the code given in some of those exercises was incorrect, and looking back, I’m not entirely sure about some of the code that I think is correct. In today’s exercise we will describe Lucas sequences, write correct code, and demonstrate its correctness by thorough testing.

Given integers P and Q satisfying D = P² − 4Q > 0, the roots of x² − P x + Q = 0, by the quadratic formula from high school, are a = (P + √D) ÷ 2 and b = (P − √D) ÷ 2, so a + b = P, a × b = (P² − D) ÷ 4 = Q and a − b = √D. Now define U(n)(P,Q) = (aⁿ − bⁿ) ÷ (a − b) and V(n)(P,Q) = aⁿ + bⁿ for integer n ≥ 0. The sequences U(P,Q) and V(P,Q) for n from 1 to infinity are called Lucas sequences. Many of these sequences are well known; for instance, the sequence U(1,−1) forms the Fibonacci numbers, the sequence V(1,−1) forms the Lucas numbers (not to be confused with Lucas sequences), and the sequence U(3,2) forms the Mersenne numbers 2ⁿ − 1. Lucas sequences are named for the French mathematician Édouard Lucas, who was instrumental in their development in the second half of the nineteenth century.

A Lucas sequence can be calculated in a manner similar to the Fibonacci sequence: X(n) = P · X(n−1) − Q · X(n−2), where X stands for either U or V. For the U sequence, U(0) = 0 and U(1) = 1, and for the V sequence, V(0) = 2 and V(1) = P.
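In Python the recurrence translates directly; this sketch generates the first `count` terms of both sequences:

```python
def lucas_sequences(P, Q, count):
    """First `count` terms of U(P,Q) and V(P,Q) by the recurrence
    X(n) = P * X(n-1) - Q * X(n-2)."""
    us, vs = [0, 1], [2, P]
    while len(us) < count:
        us.append(P * us[-1] - Q * us[-2])
        vs.append(P * vs[-1] - Q * vs[-2])
    return us[:count], vs[:count]
```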

Any element of a Lucas sequence can be computed in logarithmic time by the application of the following identities: U(2n) = U(n) V(n), U(2n+1) = U(n+1) V(n) − Qⁿ, V(2n) = V(n)² − 2 Qⁿ, and V(2n+1) = V(n+1) V(n) − P Qⁿ. The identities are combined in a Lucas chain that contains pairs of adjacent elements of the sequences. For instance, starting from the pair (0,1), it is possible to reach the pair (100,101) in seven steps, thereby computing the 100th elements of the sequences: (0,1) → (1,2) → (3,4) → (6,7) → (12,13) → (25,26) → (50,51) → (100,101). Each step in the chain is determined by the binary representation of the desired index n: 1-bits cause a step from (n,n+1) to (2n+1,2n+2) and 0-bits cause a step from (n,n+1) to (2n,2n+1). Note that computing a U sequence requires computation of the corresponding V sequence, but a V sequence can be computed by itself without the corresponding U sequence.

That method works left-to-right across the bits of n, from most-significant bit to least-significant bit, but it is more natural to work right-to-left, because at each step the last bit determines if n is even or odd and moving to the next bit is simply division by 2. Compute members of the U and V sequences at powers of 2, and add them to an accumulating total only when the corresponding bit is 1; the doubling identities were given above, and the addition identities are U(m+n) = (U(m) · V(n) + U(n) · V(m)) / 2 and V(m+n) = (V(m) · V(n) + D · U(m) · U(n)) / 2, where D = P² − 4Q. Division by 2 is problematic when doing modular arithmetic, so if the current value is odd, first add the modulus before dividing by 2; there is no need to multiply by the inverse. The U and V accumulators are initialized to U = 0 and V = 2 if n is even and U = 1 and V = P if n is odd, then iteration starts at the second least significant bit, since the least significant bit was consumed in the initialization.
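A Python sketch of the right-to-left method just described; the modulus m must be odd so that the add-the-modulus halving step works:

```python
def lucas_uv(P, Q, k, m):
    """Return (U(k) mod m, V(k) mod m) for odd modulus m, in O(log k) steps."""
    D = (P * P - 4 * Q) % m
    def half(x):                       # divide by 2 modulo odd m
        return ((x + m) >> 1) % m if x & 1 else (x >> 1) % m
    U, V = 0, 2 % m                    # accumulators, starting at index 0
    u, v, q = 1 % m, P % m, Q % m      # U, V and Q**e at e = 1, 2, 4, ...
    while k:
        if k & 1:                      # fold this power of two into the index
            U, V = half((U * v + u * V) % m), half((V * v + D * U * u) % m)
        k >>= 1
        if k:                          # doubling identities
            u, v, q = (u * v) % m, (v * v - 2 * q) % m, (q * q) % m
    return U, V
```

(This variant starts the accumulators at index 0 and lets the loop consume the least significant bit, which computes the same thing with one less special case.)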

Your task is to write functions that compute Lucas sequences and individual members of Lucas sequences, and write a program that tests the functions thoroughly. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Shuffle Box

### January 17, 2014

In the previous exercise we implemented the minimum standard random number generator of Park and Miller. In today’s exercise we implement a similar random number generator, due to Knuth, and show how both random number generators can be improved.

The Park-Miller random number generator is a linear congruential generator of the form X(n+1) = a · X(n) (mod m). A disadvantage of this type of random number generator is that it must never produce a value of zero, since then all subsequent random numbers would also be zero. Although we didn’t mention it, that caused some problems in the generators of the previous exercise, requiring careful code and proper parameters to prevent.

Knuth’s random number generator is a mixed linear congruential generator of the form X(n+1) = a · X(n) + c (mod m). Because a constant is added after each multiplication, zero becomes a valid random number. Knuth recommends a = 69069, c = 1234567 and m = 2³² for 32-bit arithmetic and a = 6364136223846793005, c = 1442695040888963407 and m = 2⁶⁴ for 64-bit arithmetic, though some people set c = 1.

In addition to regularizing zero, Knuth’s mixed linear congruential random number generator has the advantage of being very fast, at least in languages with fixed-length integer datatypes, because the modulo operation is never performed; the arithmetic just wraps around the fixed length of the integer. Thus, assuming the seed is a 32-bit unsigned integer, a C version of Knuth’s generator is just

#include <stdint.h>
static uint32_t seed = 1;  /* unsigned wraparound gives the mod 2^32 for free */
uint32_t knuth(void) { return (seed = 69069 * seed + 1234567); }

Very simple. Very fast.

One disadvantage of the Knuth random number generator is that the least-significant bit cycles with period 2, the second-least-significant bit cycles with period 4, and so on, because the modulus is a power of two. And both the Park-Miller and Knuth generators suffer from the problem of serial correlation. In many cases this doesn’t matter. But a simple technique called a shuffle box improves the randomness of both the Park-Miller and Knuth generators.

A shuffle box consists of an array of the k most recent seeds, initialized by the first k random numbers following the initial seed. Each time a new random number is requested, an element of the array j is selected by j = ⌊k · seed / m⌋, the jth element of the array is returned as the random number, and the next number in the random sequence is stored in the jth element of the array. The size k is often a power of two, because then j can be computed quickly by a right shift when m is also a power of two, though that isn’t necessary. This process is described by Knuth in Algorithm 3.2.2B.
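A Python sketch of a shuffle box wrapped around a raw generator; `make_knuth32` is the mixed LCG from above, written as a closure, and the box logic follows the description in the paragraph (one of several equivalent variants of Knuth's Algorithm B):

```python
def make_knuth32(seed):
    """Knuth's mixed LCG with m = 2**32, as a closure."""
    def rand():
        nonlocal seed
        seed = (69069 * seed + 1234567) % 2**32
        return seed
    return rand

def make_shuffled(rand, k=64, m=2**32):
    """Shuffle box of size k over `rand`, which returns ints in [0, m)."""
    box = [rand() for _ in range(k)]   # fill with the first k numbers
    y = rand()                         # most recent value, used to pick a slot
    def shuffled():
        nonlocal y
        j = k * y // m                 # slot index in [0, k)
        y, box[j] = box[j], rand()     # output the slot, then refill it
        return y
    return shuffled
```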

Your task is to write a basic version of Knuth’s random number generator, then write versions of both the Park-Miller and Knuth random number generators that use shuffle boxes. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Minimum Standard Random Number Generator

### January 14, 2014

In 1988, Stephen K. Park and Keith W. Miller, reacting to the plethora of unsatisfactory random number generators then available, published a linear congruential random number generator that they claimed should be the “minimum standard” for an acceptable random number generator. Twenty-five years later, the situation is only somewhat better, and embarrassingly bad random number generators are still more common than they ought to be. Today’s exercise implements the original minimum standard random number generator in several different forms.

We begin with the original minimum standard random number generator. Given a random integer x(n), the next random integer in a random sequence is given by computing x(n+1) = a · x(n) (mod m), where a = 7⁵ = 16807 and m = 2³¹ − 1 = 2147483647; as a check on your implementation, if x(0) = 1, then x(10000) = 1043618065. Park and Miller chose m as the largest Mersenne prime less than 2³²; the smallest primitive root of m is 7, and since 5 is also prime, 7⁵ is also a primitive root, hence their choice of a. Because a is a primitive root of m, all values in the range 1 to m − 1 inclusive will be generated before any repeat, so the random number generator has full period. The multiplier a = 16807 has been shown to have good randomness properties. Subsequent to their original paper, Park and Miller recommended 48271 as an improvement, and some people use 69621, but we’ll continue to use 16807.

The easiest way to implement that is obvious: just multiply a by the current value of x and compute the modulus. But that may cause overflow in the intermediate multiplication, rendering the results incorrect. A trick of Linus Schrage allows that multiplication to be done without overflow: Compute q = ⌊m / a⌋ and r = m (mod a) so that m = a q + r. Then a new x can be computed by hi = ⌊x / q⌋, lo = x (mod q), x = a · lo − r · hi, then adding m to x if x ≤ 0. In the case a = 16807 and m = 2147483647, q = 127773 and r = 2836; if you use a = 48271, then q = 44488 and r = 3399.

That works properly, without overflow, but each step in the sequence requires two divisions, two multiplications, a subtraction, a comparison, and possibly an addition, which can be slow. David Carta found a clever way to make that computation without any divisions: The product a · x is at most a 46-bit number. If we set q to the 31 low-order bits and p to the 15 high-order bits, then the product is q + p · 0x80000000, which is equivalent to q + p · 0x7FFFFFFF + p. But since we are working mod 0x7FFFFFFF, we can ignore the middle term and just take q + p. That 32-bit value might be larger than m, in which case we can just subtract m to get it in range.
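All three versions fit in a few lines of Python (where bignums make even the direct version safe, so this is a sketch of the logic rather than a demonstration of the overflow problem); each can be checked against the x(10000) = 1043618065 test value:

```python
M = 2**31 - 1            # 2147483647
A = 16807                # 7**5
Q, R = M // A, M % A     # 127773 and 2836, for Schrage's trick

def pm_direct(x):
    """Direct form; the product would overflow 32-bit arithmetic elsewhere."""
    return A * x % M

def pm_schrage(x):
    """Schrage's method: no intermediate value exceeds m."""
    x = A * (x % Q) - R * (x // Q)
    return x + M if x <= 0 else x

def pm_carta(x):
    """Carta's method: split the 46-bit product at bit 31; 2**31 = 1 (mod M)."""
    t = A * x
    x = (t & 0x7FFFFFFF) + (t >> 31)
    return x - M if x >= M else x

def nth(step, n, x=1):
    """Apply one generator step n times, starting from seed x."""
    for _ in range(n):
        x = step(x)
    return x
```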

Your task is to implement all three versions of the minimum standard random number generator. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## The Divisor Function Sigma

### January 10, 2014

In number theory, the divisor function σₓ(n) is the sum of the xth powers of the divisors of n. For instance, the divisors of 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30 and 60, so σ₀(60) = 12 (the count of the divisors, sometimes called τ(n)), σ₁(60) = 168 (the sum of the divisors, sometimes called σ(n) without the subscript), and σ₂(60) = 5460. The divisor function is fundamental to much of number theory, and there are many identities and formulas that make use of it, as in the MathWorld article. To calculate the divisor function σₓ(n), first calculate the prime factorization $n = p_1^{a_1} \cdot p_2^{a_2} \cdot \ldots \cdot p_k^{a_k}$; then $\sigma_x(n) = 1$ if $n = 1$ (regardless of the value of x), $\sigma_x(n) = \prod_{i=1}^k (a_i + 1)$ if $x = 0$, and $\sigma_x(n) = \prod_{i=1}^k \frac{p_i^{(a_i+1)x}-1}{p_i^x-1}$ if $x > 0$.
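A Python sketch that applies the formulas above directly, factoring n by simple trial division:

```python
def sigma(x, n):
    """The divisor function: sum of the x-th powers of the divisors of n."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            a = 0
            while n % d == 0:          # extract the exact power of d
                n //= d
                a += 1
            if x == 0:
                total *= a + 1
            else:                      # geometric series 1 + d**x + ... + d**(a*x)
                total *= (d**((a + 1) * x) - 1) // (d**x - 1)
        d += 1
    if n > 1:                          # leftover prime factor, exponent 1
        total *= 2 if x == 0 else (n**(2 * x) - 1) // (n**x - 1)
    return total
```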

The summatory divisor function calculates the sum of the values of the divisor function from 1 to n; that is, $\sum_{d=1}^n \sigma_x(d)$. It could be calculated by summing the values of the divisor function at each number from 1 to n, but a better approach considers for each divisor how many times it appears in the summation: divisor 1 appears in each number from 1 to n, divisor 2 appears in half of them, divisor 3 appears in a third of them, and in general divisor d appears in ⌊n / d⌋ of them. Arranged in this way, it is easy to calculate the summatory divisor function in linear time.
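That observation turns into a one-line Python sketch, linear in n:

```python
def sigma_sum(x, n):
    """Summatory divisor function: sum of sigma_x(i) for i from 1 to n.
    Divisor d contributes d**x once for each of its floor(n/d) multiples."""
    return sum(d**x * (n // d) for d in range(1, n + 1))
```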

Your task is to write the divisor and summatory-divisor functions described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


## Counting Zeros

### January 7, 2014

I’m not sure of the original source of today’s exercise; it feels like a Project Euler exercise — it’s mathy, it admits a simple but slow brute-force solution, and it also has a smart, fast solution — but I can’t find it there.

For a given n, count the total number of zeros in the decimal representations of the positive integers less than or equal to n. For instance, there are 11 zeros in the decimal representations of the numbers less than or equal to 100: one each in the numbers 10, 20, 30, 40, 50, 60, 70, 80 and 90, and two in the number 100. Expect n to be as large as 10^10000.
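The brute-force solution is a two-liner in Python and makes a handy check for the fast solution, though it is of course hopeless for very large n:

```python
def count_zeros(n):
    """Total zero digits in the decimal representations of 1 .. n."""
    return sum(str(i).count('0') for i in range(1, n + 1))
```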

Your task is to write a program to count zeros. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
