Lucky Numbers

March 18, 2014

We calculated Josephus numbers in a previous exercise. In today’s exercise, we study Lucky Numbers, which are those positive integers that survive the Sieve of Josephus. It works like this:

Start with the numbers 1 through n, where n is the desired limit of the sieve; we’ll illustrate with n = 20: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20. At the first step, remove every second number from the sieve, starting from the 0’th: this leaves 1, 3, 5, 7, 9, 11, 13, 15, 17, 19. At the second step, the second number in the list is 3, so remove every third number from the sieve, starting from the 0’th: this leaves 1, 3, 7, 9, 13, 15, 19. At the third step, the third number in the list is 7, so remove every seventh number from the sieve, starting from the 0’th: this leaves 1, 3, 7, 9, 13, 15. And so on.

The lucky numbers are listed at A000959: 1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99, 105, 111, 115, 127, 129, 133, 135, 141, 151, 159, 163, 169, 171, 189, 193, 195, 201, 205, 211, 219, 223, 231, 235, 237, 241, 259, 261, 267, 273, 283, 285, 289, 297, 303, …. Lucky numbers share many properties with prime numbers, including the same asymptotic behavior as the primes: any given number n is lucky with probability 1 / ln n, the same as any given number n is prime with probability 1 / ln n.
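
As a hedged sketch (in Python; the function name lucky_numbers is my own choice), the sieve can be coded over a list, deleting every k-th survivor in place:

    def lucky_numbers(n):
        """A sketch of the lucky-number sieve: the lucky numbers less than n."""
        nums = list(range(1, n, 2))        # the first pass leaves only the odd numbers
        i = 1                              # nums[1] = 3 drives the next pass
        while i < len(nums) and nums[i] <= len(nums):
            k = nums[i]
            del nums[k - 1::k]             # delete every k-th survivor, counting from 1
            i += 1
        return nums

    # lucky_numbers(20)  =>  [1, 3, 7, 9, 13, 15]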

Your task is to write a program that lists the lucky numbers less than n. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Root Finding

March 14, 2014

Finding the roots of an equation is an important operation in mathematics, and there are many algorithms for finding roots numerically. We will study two of those algorithms in today’s exercise, and perhaps we will look at other algorithms in future exercises.

Let’s begin with the rules of the game. We are given a univariate function f(x) and want to find one or more values of x such that f(x) = 0. The function could be a polynomial, or could use trigonometric or exponential functions, or integrals, or any other mathematical operation.

The bisection algorithm starts with two points lo and hi that bracket the root; thus, one of f(lo) and f(hi) is positive and the other is negative. The algorithm is iterative: at each step, the midpoint mid = (lo + hi) / 2 is computed, the function is evaluated at mid, and mid replaces whichever of lo or hi has a function value with the same sign as f(mid). Iteration stops when f(mid) = 0 or when the difference between lo and hi is sufficiently small.
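
A hedged sketch in Python of the bisection loop just described (the tolerance eps is my own choice):

    def bisect_root(f, lo, hi, eps=1e-12):
        """Find a root of f in [lo, hi]; f(lo) and f(hi) must have opposite signs."""
        flo = f(lo)
        while hi - lo > eps:
            mid = (lo + hi) / 2
            fmid = f(mid)
            if fmid == 0:
                return mid
            if (fmid > 0) == (flo > 0):    # f(mid) has the same sign as f(lo)
                lo, flo = mid, fmid
            else:
                hi = mid
        return (lo + hi) / 2

    # bisect_root(lambda x: x*x - 2, 0, 2)  =>  approximately 1.4142135623...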

The regula falsi (false position) method is similar, but instead of using the midpoint of the interval it uses the point where the line connecting (lo, f(lo)) and (hi, f(hi)) crosses the x-axis. The method dates to the ancient Egyptians and Babylonians.
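
And a matching sketch of regula falsi; because the method can converge from one side only, this sketch stops on a small |f(x)| or an iteration cap, both my own additions:

    def regula_falsi(f, lo, hi, eps=1e-12, max_iter=1000):
        """Find a root of f in [lo, hi] by repeated linear interpolation."""
        flo, fhi = f(lo), f(hi)
        x = lo
        for _ in range(max_iter):
            x = hi - fhi * (hi - lo) / (fhi - flo)   # where the chord crosses the axis
            fx = f(x)
            if abs(fx) < eps:
                break
            if (fx > 0) == (flo > 0):                # x replaces the end with the same sign
                lo, flo = x, fx
            else:
                hi, fhi = x, fx
        return x

    # regula_falsi(lambda x: x*x*x - x - 2, 1, 2)  =>  approximately 1.5213797...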

Both methods work only when the function f is continuous, with no discontinuities between lo and hi.

Your task is to write functions that find roots by the bisection and regula falsi methods. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Caesar Cipher

March 11, 2014

I was asked about a caesar cipher the other day, and suggested that my questioner search for caesar cipher here at Programming Praxis. To my surprise, I discovered that we have never done a caesar cipher (we did the ROT13 cipher, but not a general-purpose caesar cipher), so I apologized to my friend and wrote today’s exercise.

A caesar cipher, named after Julius Caesar, who either invented the cipher or was an early user of it, is a simple substitution cipher in which letters are substituted at a fixed distance along the alphabet, which cycles; children’s magic decoder rings implement a caesar cipher. Non-alphabetic characters are passed unchanged. For instance, the plaintext PROGRAMMINGPRAXIS is rendered as the ciphertext SURJUDPPLQJSUDALV with a shift of 3 positions.
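
A hedged sketch in Python; deciphering is just enciphering with the shift negated:

    def caesar(text, shift):
        """Shift alphabetic characters by `shift` places, wrapping around the alphabet;
        non-alphabetic characters pass through unchanged."""
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return ''.join(out)

    # caesar("PROGRAMMINGPRAXIS", 3)   =>  "SURJUDPPLQJSUDALV"
    # caesar("SURJUDPPLQJSUDALV", -3)  =>  "PROGRAMMINGPRAXIS"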

Your task is to write functions that encipher and decipher text using a caesar cipher. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

We have seen the n−1 primality prover in one previous exercise and the n+1 primality prover in another previous exercise. In today’s exercise we combine the two tests. There are two versions, one using a factoring bound and one without. The basic idea of both provers is to express n − 1 = f1 * r1 and n + 1 = f2 * r2 with r1 and r2 odd, gcd(f1, r1) = gcd(f2, r2) = 1 and the prime factorizations of f1 and f2 known, then perform various tests if the partial factorizations are sufficient. This is often called the BLS primality prover because the 1975 paper by John Brillhart, Derrick Lehmer and John Selfridge proves all the necessary theorems.

For the unbounded version of the test, factor n−1 and n+1 by any convenient method until max(f1*f1*f2/2, f1*f2*f2/2) > n. Then n is prime if all the factors of f1 and f2 are prime and the following conditions hold:

Condition 1: For each prime p dividing f1 there is an integer a such that a^(n−1) = 1 (mod n) and gcd(a^((n−1)/p)−1, n) = 1. A different a may be used for each prime p.

Condition 2: For each prime p dividing f2 there is a Lucas sequence U(P,Q) with discriminant D = P^2 − 4Q ≠ 0 and Jacobi symbol (D/n) = −1 such that U(n+1) = 0 (mod n) and gcd(U((n+1)/p), n) = 1. The same discriminant D must be used for each prime p, but P and Q may vary. Given P and Q, a new P′ and Q′ with the same D can be calculated as P′ = P + 2 and Q′ = P + Q + 1, since (P + 2)^2 − 4(P + Q + 1) = P^2 − 4Q.
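
To make the two conditions concrete, here is a hedged sketch in Python; the helper names condition1, lucas_u and condition2 are my own, the search bound on a is arbitrary, and condition2 keeps a single P and Q rather than varying them per prime as the text allows.

    from math import gcd

    def condition1(n, f1_primes):
        """For each prime p dividing f1, look for an a with a^(n-1) = 1 (mod n)
        and gcd(a^((n-1)/p) - 1, n) = 1; a different a may serve each p."""
        for p in f1_primes:
            for a in range(2, 1000):                 # small, arbitrary search bound
                if pow(a, n - 1, n) != 1:
                    return False                     # n fails the Fermat test: composite
                if gcd(pow(a, (n - 1) // p, n) - 1, n) == 1:
                    break                            # this a is a witness for p
            else:
                return False                         # no witness found for p
        return True

The Lucas sequence values are computed with the standard doubling identities (pow(2, -1, n) needs Python 3.8 or later):

    def lucas_u(P, Q, k, n):
        """U_k(P,Q) mod n for odd n, by binary doubling:
        U_2m = U_m V_m, V_2m = V_m^2 - 2 Q^m,
        U_(m+1) = (P U_m + V_m)/2, V_(m+1) = (D U_m + P V_m)/2."""
        D = P * P - 4 * Q
        inv2 = pow(2, -1, n)
        U, V, Qm = 1, P % n, Q % n                   # start at m = 1
        for bit in bin(k)[3:]:                       # remaining bits of k
            U, V = (U * V) % n, (V * V - 2 * Qm) % n
            Qm = (Qm * Qm) % n
            if bit == '1':
                U, V = ((P * U + V) * inv2) % n, ((D * U + P * V) * inv2) % n
                Qm = (Qm * Q) % n
        return U

    def condition2(n, f2_primes, P, Q):
        """U(n+1) = 0 (mod n) and gcd(U((n+1)/p), n) = 1 for each prime p dividing f2."""
        if lucas_u(P, Q, n + 1, n) != 0:
            return False
        return all(gcd(lucas_u(P, Q, (n + 1) // p, n), n) == 1 for p in f2_primes)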

The bounded version is similar. Factor n−1 and n+1 by trial division until max(b*f1+1, b*f2−1) * (b*b*f1*f2/2 + 1) > n, where b is the smallest number that is known to be a factor of neither r1 nor r2; if you’re clever, you can perform the two trial divisions simultaneously, looking for remainders of either 0 or 2 when dividing n+1 by the current prime. Then n is prime if Condition 1 and Condition 2 and the following additional conditions hold:

Condition 3: There is an integer a such that a^(n−1) = 1 (mod n) and gcd(a^((n−1)/r1) − 1, n) = 1.

Condition 4: There is a Lucas sequence U(P,Q) with the same discriminant D as in Condition 2 above such that U(n+1) = 0 (mod n) and gcd(U((n+1)/r2), n) = 1.

In practice, it is common to write a single program that includes both versions. First perform trial division up to some convenient limit, using the bounded formula to determine if you have enough factors. If that is unsuccessful, use some other method (Pollard rho or elliptic curves work well, because they are good at finding small factors of large numbers) to find more factors, recursively prove any such factors prime, and continue until you have enough.
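
As a hedged sketch of the simultaneous trial division mentioned above (the function name and return convention are mine; the caller must still test the factoring bound and, if it is not met, find more factors by some other method):

    def trial_factor_both(n, primes):
        """One pass over the small primes, factoring n-1 and n+1 together:
        for odd p, (n+1) % p == 0 means p | n+1 and (n+1) % p == 2 means p | n-1;
        p = 2 divides both, since n is odd.  Returns f1, r1, f2, r2 and the prime lists."""
        r1, r2 = n - 1, n + 1
        f1, f2 = 1, 1
        f1_primes, f2_primes = [], []
        for p in primes:
            rem = (n + 1) % p
            if p == 2 or rem == 2:                   # p divides n - 1
                f1_primes.append(p)
                while r1 % p == 0:
                    r1 //= p
                    f1 *= p
            if rem == 0:                             # p divides n + 1
                f2_primes.append(p)
                while r2 % p == 0:
                    r2 //= p
                    f2 *= p
        return f1, r1, f1_primes, f2, r2, f2_primes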

Your task is to implement the BLS primality prover. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Learn A New Language

March 4, 2014

In today’s exercise you are challenged to write a program in a language you’ve never used before. We’ve done this before, and it’s fun; the idea is to get you out of your comfort zone, so you’re thinking about programming, not just blindly following habit.

Your task is to write a program in a language you’ve never used before. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Profiling

February 28, 2014

I was reading Jon Bentley’s book More Programming Pearls the other day; it’s a superb book, and I re-read it every few years. In the first chapter Bentley uses a profiler to improve a small function to identify prime numbers by trial division; in today’s exercise, we will do the same.

Your task is to use a profiler to improve a program you have written; depending on the facilities provided by your language, you may have to write your own profiler. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Crossing Hands

February 25, 2014

The hands of an analog clock occasionally cross as they revolve around the dial.

Your task is to write a program that determines how many times the hands cross in one twelve-hour period and computes a list of those times. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Anagrams Within Words

February 21, 2014

We have a little programming puzzle today:

Given two words, determine if the first word, or any anagram of it, appears in consecutive characters of the second word. For instance, cat appears as an anagram in the first three letters of actor, but car does not appear as an anagram in actor even though all the letters of car appear in actor.
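
A hedged sketch in Python using a sliding window of letter counts (the function name is mine):

    from collections import Counter

    def anagram_in_word(needle, haystack):
        """True if some anagram of needle appears in consecutive characters of haystack."""
        k = len(needle)
        if k > len(haystack):
            return False
        target = Counter(needle)
        window = Counter(haystack[:k])               # letter counts of the current window
        if window == target:
            return True
        for i in range(k, len(haystack)):
            window[haystack[i]] += 1                 # slide the window one character right
            window[haystack[i - k]] -= 1
            if window[haystack[i - k]] == 0:
                del window[haystack[i - k]]
            if window == target:
                return True
        return False

    # anagram_in_word("cat", "actor")  =>  True   ("act" is an anagram of "cat")
    # anagram_in_word("car", "actor")  =>  False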

Your task is to write a function to determine if an anagram is present in a word. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Two Interview Questions

February 18, 2014

It’s been a while since we had any interview questions. We have two today, since they are so simple:

1) Given a number n, find the smallest 3-digit number such that the product of its digits is equal to n. For example, given n = 100, the solution is 455.

2) Given two arrays, one with n items and one with n+2 items including the same items as the array with n items plus two additional items, find the two additional items. Assume none of the items are duplicates.
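
Hedged sketches of both questions in Python (the function names are mine): the first scans the three-digit numbers in increasing order, so the first hit is the smallest; the second takes a multiset difference:

    from collections import Counter

    def smallest_3digit_with_product(n):
        """Smallest three-digit number whose digits multiply to n, or None if none exists."""
        for m in range(100, 1000):                   # increasing order: first hit is smallest
            a, b, c = m // 100, (m // 10) % 10, m % 10
            if a * b * c == n:
                return m
        return None

    def find_two_extra(small, big):
        """The two items of big that are not in small (no duplicates assumed)."""
        return list((Counter(big) - Counter(small)).elements())

    # smallest_3digit_with_product(100)  =>  455   (4 * 5 * 5 = 100)
    # find_two_extra([1, 3, 5], [5, 2, 1, 4, 3])    =>  [2, 4]  (in some order)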

Your task is to write solutions to the two interview questions given above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

Reservoir Sampling, Improved

February 14, 2014

[ Today's exercise was written by guest author Paul Hofstra, who holds a PhD in Theoretical Nuclear Physics from the Free University in Amsterdam and is now retired from a major oil company. Guest authors are always welcome; contact me at the link in the menu bar if you are interested in writing for Programming Praxis. ]

Reservoir sampling is a method to get a random sample of size n from a stream of a priori unknown (and possibly very large) size. Jeffrey Scott Vitter gives several interesting algorithms for solving this problem; the algorithm that we used in a recent exercise is his Algorithm R, and he also gives Algorithm X and Algorithm Z. Algorithm R begins by placing the first n input items in an array called the reservoir, then for each succeeding input item chooses a random number that determines if the new item should replace one of the items in the reservoir. Algorithm X and Algorithm Z are faster because, instead of generating a random number for each input item, they use the random numbers to determine how many input items to skip; in the algorithms stated below, s is the number of items to skip and t is the number of items already processed. The expected number of skipped items is t / (n − 1) − 1. The probability function for the number of skipped records is f(s) = (n / (t − n)) · (t − n)^(s+1) / (t + 1)^(s+1), and the distribution function for the probability that S ≤ s is F(s) = 1 − (t + 1 − n)^(s+1) / (t + 1)^(s+1), where a^(b) denotes the rising factorial a (a + 1) … (a + b − 1).

For Algorithm X, generate a uniform random number r between 0 and 1 and determine S as the lowest value for which F(S) ≥ r. Then skip S records, swap the next record with a record in the reservoir, and continue until the input is exhausted.
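
A hedged sketch of Algorithm X in Python, built directly from the F(s) above (the function name and the uniform choice of which reservoir slot to replace are my own):

    import random

    def algorithm_x(stream, n):
        """Reservoir sample of size n, skipping records in bulk: after t records,
        skip the smallest s with F(s) >= r, where
        F(s) = 1 - prod_{j=1..s+1} (t+j-n)/(t+j).
        Assumes the stream holds at least n items."""
        it = iter(stream)
        reservoir = [next(it) for _ in range(n)]
        t = n                                        # records processed so far
        while True:
            r = random.random()
            s, quot = 0, (t + 1 - n) / (t + 1)       # quot = 1 - F(s)
            while quot > 1 - r:                      # i.e. while F(s) < r
                s += 1
                quot *= (t + s + 1 - n) / (t + s + 1)
            try:
                for _ in range(s):                   # skip s records
                    next(it)
                item = next(it)                      # this record is selected
            except StopIteration:
                return reservoir
            reservoir[random.randrange(n)] = item    # replace a random reservoir slot
            t += s + 1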

Algorithm Z improves Algorithm X by rejection sampling. Two new functions g and h and a constant c are defined so that h(s) ≤ f(s) ≤ c g(s), where c does not depend on s. The functions are as close to f as possible, and the integral G of g has a simple inverse, making h(s) and g(s) much faster to calculate than f(s). Now calculate X = G^-1(random) and S = ⌊X⌋. The rejection method rejects S if a random number r > f(S) / (c g(X)). To avoid the expensive calculation of f(S), r is first compared to h(S) / (c g(X)), and S is accepted if r is smaller; otherwise, f(S) must be calculated and compared. The calculations of h, g and c are: h(s) = n / (t + 1) · ((t − n + 1) / (t + s − n + 1))^(n+1), g(s) = n / (t + s) · (t / (t + s))^n, G^-1(y) = t ((1 − y)^(-1/n) − 1), and c = (t + 1) / (t − n + 1). Since this is fastest when t is large, Vitter recommends using Algorithm X when t ≤ T · n with T = 22.
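
And a hedged sketch of the Algorithm Z skip computation, straight from h, g, c and G^-1 above; the function name is mine, and a full sampler would call it in place of the skip search in Algorithm X once t exceeds T · n:

    import math, random

    def algorithm_z_skip(n, t):
        """One skip length S by rejection sampling (reservoir size n, t >= n processed)."""
        c = (t + 1) / (t - n + 1)
        while True:
            x = t * ((1 - random.random()) ** (-1 / n) - 1)    # X = G^-1(random)
            s = math.floor(x)
            r = random.random()
            g_x = n / (t + x) * (t / (t + x)) ** n
            h_s = n / (t + 1) * ((t - n + 1) / (t + s - n + 1)) ** (n + 1)
            if r <= h_s / (c * g_x):                           # cheap test: accept S
                return s
            f_s = n / (t + s + 1)                              # exact f(s), the expensive test
            for j in range(1, s + 1):
                f_s *= (t + j - n) / (t + j)
            if r <= f_s / (c * g_x):
                return s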

Your task is to write functions to implement Algorithm X and Algorithm Z. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
