Passover
March 30, 2010
The Jewish holiday of Passover commemorates the biblical event in which, during the Jewish captivity in Egypt, God killed the first-born of every Egyptian household and its cattle, sparing only those homes whose doorposts were marked with the blood of a spring lamb, which were passed over; in the aftermath, the Jewish people were able to escape enslavement in Egypt.
Passover is celebrated on the date of the first full moon after the spring equinox, with some adjustments. British mathematician John Horton Conway gives this algorithm for calculating the date of Passover:
First, calculate the date of Rosh Hashanah, the Jewish new year. In Gregorian year y of the Common Era, Rosh Hashanah falls on September n, where:
g = remainder(y/19) + 1
n + fraction =
( floor(y/100) - floor(y/400) - 2)
+ 765433/492480 × remainder(12g/19)
+ remainder(y/4) / 4
- (313y + 89081) / 98496
The calculation of n is subject to the following postponement rules:
- If the day calculated above is a Sunday, Wednesday, or Friday, Rosh Hashanah falls on the next day (i.e., Monday, Thursday or Saturday, respectively).
- If the day calculated above is a Monday, and if the fraction is greater than or equal to 23269/25920, and if remainder(12g/19) is greater than 11, Rosh Hashanah falls on the next day, a Tuesday.
- If the day calculated above is a Tuesday, and if the fraction is greater than or equal to 1367/2160, and if remainder(12g/19) is greater than 6, Rosh Hashanah falls two days later, on Thursday.
Given the date of Rosh Hashanah, the date of Passover in the same calendar year is calculated as m days after March 21, where m is the day of September on which Rosh Hashanah falls, if Rosh Hashanah is in September, or thirty plus the day of October on which Rosh Hashanah falls, if Rosh Hashanah is in October.
According to Jewish custom, the holiday actually begins at sunset on the day prior to that calculated above.
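Conway's algorithm translates almost directly into code. Here is a sketch in Python (function names are our own, not part of the exercise), using exact rational arithmetic for the fraction; note that in Conway's statement the Tuesday postponement rule compares remainder(12g/19) against 6:

```python
from datetime import date, timedelta
from fractions import Fraction
from math import floor

def rosh_hashanah(y):
    """Gregorian date of Rosh Hashanah in year y, per Conway's algorithm."""
    g = y % 19 + 1
    r = 12 * g % 19
    value = (y // 100 - y // 400 - 2) \
          + Fraction(765433, 492480) * r \
          + Fraction(y % 4, 4) \
          - Fraction(313 * y + 89081, 98496)
    n, frac = floor(value), value - floor(value)
    d = date(y, 9, 1) + timedelta(days=n - 1)   # September n; spills into October if n > 30
    wd = d.weekday()                            # Monday = 0 ... Sunday = 6
    if wd in (6, 2, 4):                         # Sunday, Wednesday, Friday
        d += timedelta(days=1)
    elif wd == 0 and frac >= Fraction(23269, 25920) and r > 11:
        d += timedelta(days=1)                  # Monday postponement rule
    elif wd == 1 and frac >= Fraction(1367, 2160) and r > 6:
        d += timedelta(days=2)                  # Tuesday postponement rule
    return d

def passover(y):
    """Passover falls m days after March 21, where m encodes Rosh Hashanah's date."""
    rh = rosh_hashanah(y)
    m = rh.day if rh.month == 9 else 30 + rh.day
    return date(y, 3, 21) + timedelta(days=m)
```

For 2010, the formula gives n + fraction ≈ 9.32, September 9 is a Thursday so no postponement applies, and Passover lands on March 30, 2010, the date of this exercise.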
Your task is to write functions that calculate the dates of Rosh Hashanah and Passover for any given calendar year. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
The Next Prime
March 26, 2010
In two previous exercises, we had to iterate through the prime numbers. In one case, we generated a large number of primes using the Sieve of Eratosthenes, but without knowing in advance how large the sieve needed to be, and in the other case we iterated through the odd integers, checking the primality of each. Both solutions were less than attractive. In consideration of the old rule that if you do something twice you ought to build it into an abstraction, we will today write a function that, given a positive integer n, returns the smallest prime number greater than n.
Our method is to pre-compute a large number of primes and store them on disk. If n is within the bounds of the pre-computed list, it is easy to find the next prime. But if n is too large, we revert to checking individual candidates for primality. For our example we will pre-compute the primes to a million, but depending on your aspirations and your memory budget, you could adjust that number as desired.
To save memory space, we will store the pre-computed primes in a compressed data structure. Every prime number greater than 5 can be expressed as 30k±1, 30k±7, 30k±11, or 30k±13 for some k. That means we can use eight bits per thirty numbers to store all the primes; the primes less than a million can be compressed to 33,334 bytes, plus a small program to load the compressed primes from disk and to manipulate the compressed data structure.
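To make the byte-per-thirty encoding concrete, here is a sketch in Python (the names and the in-memory table are our own choices; writing the table to disk and reading it back is left as stated in the task):

```python
RESIDUES = (1, 7, 11, 13, 17, 19, 23, 29)   # the possible prime residues mod 30
BIT = {r: i for i, r in enumerate(RESIDUES)}

def build_table(limit=1_000_000):
    """Sieve the primes below limit and pack them one byte per thirty
    numbers: bit i of byte k is set iff 30*k + RESIDUES[i] is prime."""
    flags = bytearray([1]) * limit
    flags[0:2] = b'\x00\x00'
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i*i::i] = bytearray(len(range(i * i, limit, i)))
    packed = bytearray((limit + 29) // 30)    # 33,334 bytes for a million
    for k in range(len(packed)):
        for j, r in enumerate(RESIDUES):
            m = 30 * k + r
            if m < limit and flags[m]:
                packed[k] |= 1 << j
    return packed   # could be saved with open('primes.bin', 'wb').write(packed)

def is_prime(n):
    """Trial division by 2, 3, 5 and the mod-30 wheel candidates."""
    for p in (2, 3, 5):
        if n % p == 0:
            return n == p
    f, gaps, i = 7, (4, 2, 4, 2, 4, 6, 2, 6), 0
    while f * f <= n:
        if n % f == 0:
            return False
        f += gaps[i]
        i = (i + 1) % 8
    return n > 1

def next_prime(n, packed, limit=1_000_000):
    """Smallest prime greater than n, consulting the packed table first."""
    if n < 7:                       # 2, 3 and 5 don't fit the 30k+r pattern
        return next(p for p in (2, 3, 5, 7) if p > n)
    m = n + 1
    while m < limit:
        k, r = divmod(m, 30)
        if r in BIT and packed[k] >> BIT[r] & 1:
            return m
        m += 1
    while not is_prime(m):          # beyond the table: test candidates directly
        m += 1
    return m
```

The primes 2, 3, and 5 don't fit the 30k±r pattern, so they are handled as special cases before the table is consulted.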
Your task is to write a function that builds the compressed data structure described above, a second function that loads it from disk to memory, and a third function that uses the compressed data structure to calculate the next prime. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Texas Hold ‘Em
March 23, 2010
Texas Hold ‘Em is a variant of poker. After the ante, two cards are dealt face down to each player and a round of betting ensues. Then three community cards (the flop), which may be used by any player, are dealt, followed by another round of betting. Then a single community card (the turn) is dealt, followed by a round of betting, followed by the final community card (the river) with its round of betting. In total, each player has two concealed cards plus five community cards, and the player who makes the best five-card hand from the seven cards available to him is the winner, claiming the pot. Poker hands are ranked, from highest to lowest, according to the following rules:
- Straight flush: Five cards in sequence, all of the same suit. The ace can be played either high or low. Between two straight flushes, the highest-ranking card wins; two straight flushes with the same sequence tie.
- Four of a kind: Four cards of the same rank, plus one unmatched card. Highest-ranking quads beat lower-ranking quads; if both are the same, the highest-ranking kicker wins, otherwise there is a tie.
- Full house: Three matching cards of one rank, and two matching cards of another. The hand with the highest-ranking triplet wins; if both are the same, the highest-ranking pair wins, otherwise there is a tie.
- Flush: Five cards of the same suit. Two flushes are compared by the ranks of their cards; the highest-ranking card wins, in the event of a tie the second highest-ranking card wins, and so on until a difference is found. If both flushes have cards of the same rank, they tie.
- Straight: Five cards in sequence. The ace may be used either high or low. Two straights are ranked by their highest cards; if their highest cards are equal, they tie.
- Three of a kind: Three cards of the same rank, plus two unmatched cards. The highest-ranking triplet wins; if they are the same, the kickers are compared to break the tie.
- Two pair: Two cards with matched rank, plus two more cards of matched rank, plus one unmatched card. Two-pair hands are ranked on the highest pair, then the lowest pair, and finally the kicker.
- One pair: Two cards with matched rank, plus three more unmatched cards. The highest-ranking pair wins, with the three kickers used to break ties.
- High card: Five unmatched cards. Hands are ranked on their highest card, followed by their second-highest card, and so on.
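The ranking rules above can be captured by scoring each five-card hand as a comparable tuple: a category number first, then the ranks ordered by multiplicity, so that Python's ordinary tuple comparison breaks ties exactly as described. Here is a sketch (the card representation and function names are our own assumptions):

```python
from collections import Counter
from itertools import combinations

def rank5(hand):
    """Score a five-card hand as a tuple; higher tuples beat lower ones.
    A card is a (rank, suit) pair with rank 2..14, where 14 is the ace."""
    ranks = sorted((r for r, _ in hand), reverse=True)
    if ranks == [14, 5, 4, 3, 2]:       # the ace plays low in A-2-3-4-5
        ranks = [5, 4, 3, 2, 1]
    flush = len({s for _, s in hand}) == 1
    straight = all(a - b == 1 for a, b in zip(ranks, ranks[1:]))
    # group ranks by multiplicity: quads before trips before pairs before kickers
    groups = sorted(Counter(ranks).items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    shape = [count for _, count in groups]
    kickers = tuple(rank for rank, _ in groups)
    if straight and flush:        category = 8
    elif shape == [4, 1]:         category = 7    # four of a kind
    elif shape == [3, 2]:         category = 6    # full house
    elif flush:                   category = 5
    elif straight:                category = 4
    elif shape == [3, 1, 1]:      category = 3    # three of a kind
    elif shape == [2, 2, 1]:      category = 2    # two pair
    elif shape == [2, 1, 1, 1]:   category = 1    # one pair
    else:                         category = 0    # high card
    return (category, kickers)

def best_hand(seven):
    """The best five-card hand among the 21 five-card subsets of seven cards."""
    return max(combinations(seven, 5), key=rank5)
```

With only 21 five-card subsets of seven cards, exhaustive choice via `max` is entirely practical.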
Your task is to write a program that selects the best five-card hand from a list of seven cards. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Extending Pollard’s P-1 Factorization Algorithm
March 19, 2010
We studied John Pollard’s p-1 factorization algorithm in a previous exercise. You may recall that the algorithm finds factors of a number n by calculating the least common multiple of the integers up to some bound B, call it k, then calculating the greatest common divisor of 2^k−1 and n; if the greatest common divisor is between 1 and n, it is a factor of n.
What is happening mathematically is that we are trying to find a factor p|n (that’s “p divides n”, meaning that p is a factor of n, for those not familiar with the mathematical notation) for which we know the factorization of p−1. Consider the number 15770708441 = 135979 × 115979. If we apply Pollard’s p-1 algorithm with a bound of 150, no factors are found, but if we apply it with a bound of 180 the 135979 factor is found, because 135979 − 1 = 2 × 3 × 131 × 173; increasing the bound to include the factor 173 makes the algorithm work. Only the 135979 factor is found, because 115979 − 1 = 2 × 103 × 563, and 563 is out of bounds.
An alternative to increasing the bound is to add a second stage that looks for a p−1 for which all factors are less than the first-stage bound except one factor that lies between the first-stage bound and the second-stage bound. That is, instead of calculating lcm(1..180), we calculate lcm(1..150) × j, where j ranges from 151 to 180. For small numbers like 150 and 180, the difference doesn’t matter, but for larger bounds like B1 = 10^6 and B2 = 10^9, the difference in the computational cost is noticeable.
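A straightforward two-stage version can be sketched in Python as follows (this is the naive form of stage 2, recomputing a modular power per prime; production implementations step between primes by their differences):

```python
from math import gcd

def primes_to(n):
    """All primes up to and including n, by a simple sieve."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b'\x00\x00'
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def pollard_pm1(n, b1, b2):
    """Two-stage Pollard p-1. Stage 1 computes c = 2^lcm(1..b1) mod n;
    stage 2 folds in one extra prime q with b1 < q <= b2."""
    c = 2
    for p in primes_to(b1):
        pk = p
        while pk * p <= b1:          # use the largest power of p not exceeding b1
            pk *= p
        c = pow(c, pk, n)
    g = gcd(c - 1, n)
    if 1 < g < n:
        return g                     # stage 1 succeeded
    d = 1
    for q in primes_to(b2):          # stage 2: accumulate (c^q - 1) mod n
        if q > b1:
            d = d * (pow(c, q, n) - 1) % n
    g = gcd(d, n)
    return g if 1 < g < n else None
```

On the example above, stage 1 alone fails with B1 = 150, but stage 2 picks up the prime 173 between 150 and 180 and reveals the factor 135979.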
Your task is to write a two-stage version of Pollard’s p-1 algorithm. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Traveling Salesman: Nearest Neighbor
March 16, 2010
We saw in the previous exercise that finding an exact solution for the traveling salesman problem is extremely time consuming, taking time O(n!). The alternative is a heuristic that delivers a reasonably good solution quickly. One such heuristic is the “nearest neighbor”: pick a starting point, then at each step pick the nearest unvisited point, add it to the current tour and mark it visited, repeating until there are no unvisited points.
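The heuristic described above can be sketched in a few lines of Python; we assume the cities are given as (x, y) coordinate pairs and index the tour by position in that list:

```python
from math import dist   # Euclidean distance, Python 3.8+

def nearest_neighbor(points, start=0):
    """Build a tour greedily: always travel to the closest unvisited point."""
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda i: dist(points[tour[-1]], points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour, including the hop back to the start."""
    return sum(dist(points[tour[i - 1]], points[tour[i]]) for i in range(len(tour)))
```

The greedy choice takes O(n) work per step, so the whole tour is built in O(n²) time, a dramatic improvement over O(n!), though the result is generally not optimal.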
Your task is to write a program that solves the traveling salesman problem using the nearest neighbor heuristic. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Traveling Salesman: Brute Force
March 12, 2010
The traveling salesman problem is classic: Find the minimum-length tour that visits each city on a map exactly once, returning to the origin. It is an important problem in practice; consider, for instance, that the cities are soldering points on a large circuit board, each of which must be visited by a soldering robot. It is also an important problem in mathematics: the problem is NP-hard, so no algorithm is known that finds optimal tours in polynomial time, and exact solutions are impractical for all but small inputs. Nonetheless, there exist heuristic algorithms that do a good job at minimizing the tour length.
We will examine the traveling salesman problem in an occasional series of exercises, beginning with today’s exercise, which looks at the brute-force solution of examining all possible tours. The algorithm is simple: First, select a tour at random, then determine all possible permutations of the tour. Second, determine the length of each tour. Third, report the tour with minimum length as the solution.
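Here is a sketch of the brute-force search in Python, again assuming (x, y) coordinate pairs. Since rotations of a closed tour all have the same length, we fix city 0 in the first position, which trims the n! candidate tours to (n−1)!:

```python
from itertools import permutations
from math import dist

def brute_force_tsp(points):
    """Examine every tour starting at city 0 and return the shortest,
    together with its length."""
    n = len(points)
    def length(tour):
        # closed tour: tour[-1] pairs with tour[0] when i == 0
        return sum(dist(points[tour[i - 1]], points[tour[i]]) for i in range(n))
    best = min(((0,) + rest for rest in permutations(range(1, n))), key=length)
    return best, length(best)
```

Even with that reduction the running time grows factorially, which is exactly the motivation for the heuristic methods in the rest of the series.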
Your task is to write a program that solves the traveling salesman problem using the brute-force algorithm. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Lexicographic Permutations
March 9, 2010
Edsger Dijkstra, in his book A Discipline of Programming, describes a function that, given a list of elements and a function that returns a total ordering of those elements, produces the next lexicographic permutation of the list; for instance, the permutations of the list (1 2 3 4) in lexicographic order are (1 2 3 4) (2 1 3 4) (1 3 2 4) (3 1 2 4) (2 3 1 4) (3 2 1 4) (1 2 4 3) (2 1 4 3) (1 4 2 3) (4 1 2 3) (2 4 1 3) (4 2 1 3) (1 3 4 2) (3 1 4 2) (1 4 3 2) (4 1 3 2) (3 4 1 2) (4 3 1 2) (2 3 4 1) (3 2 4 1) (2 4 3 1) (4 2 3 1) (3 4 2 1) (4 3 2 1). With the least significant item in the list at the head, the next greater element than (1 2 3 4) is (2 1 3 4), since the most significant tail of the list doesn’t change, and the next greater permutation differs in the least significant elements. To make a greater permutation, some element of the list must be replaced by a larger element to its left; to make the next permutation, the replacement must happen as far to the left as possible, in the least significant position, and the replacing element must be as small as possible.
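The same algorithm is more often stated with the most significant element first; the sketch below uses that common convention, so it emits the permutations in a different order than the example sequence above, but the mechanism is the one just described: find the rightmost element smaller than its successor, swap it with the smallest larger element to its right, and reverse the tail.

```python
def next_permutation(a):
    """The next permutation of a in standard lexicographic order (most
    significant element first), or None if a is already the last one."""
    a = list(a)
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:   # rightmost ascent a[i] < a[i+1]
        i -= 1
    if i < 0:
        return None                      # a is in descending order: no successor
    j = len(a) - 1
    while a[j] <= a[i]:                  # rightmost element greater than a[i]
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])      # tail was descending; make it ascending
    return a

def all_permutations(a):
    """All permutations of a, generated by repeated next_permutation."""
    perms, p = [list(a)], next_permutation(a)
    while p is not None:
        perms.append(p)
        p = next_permutation(p)
    return perms
```

Starting from a sorted list, repeated application enumerates all n! permutations exactly once before returning None.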
Your task is to write functions that return the next lexicographic permutation and a list of all permutations of a list. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Binary Search Tree
March 5, 2010
Binary search trees are a simple and commonly-used data structure. A binary search tree is a hierarchical data structure in which each node stores a key, a value, and pointers to two children. Each node has one parent, except the node at the root of the tree, which has no parent. At each node, the key is greater than or equal to the keys of all nodes in its left subtree and less than or equal to the keys of all nodes in its right subtree. Either child may be null; a null node has no key, no value, and no children. A sample binary search tree is shown at the right.
Traversing the tree is fairly simple. Start at the root, comparing the key at the current node to the key being sought. If the target key is less than the current key, repeat the search at the left child; likewise, if the target key is greater than the current key, repeat the search at the right child. If the target key and current key are the same, you’ve found the desired node; if you reach a null node, the target key is not present in the tree. The insert and delete operations maintain the in-order property at each node.
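The operations described above can be sketched in Python, representing a null node as None (the class and function names, including enlist for the in-order listing, are our own):

```python
class Node:
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def lookup(node, key):
    """Follow the comparisons down the tree; None if the key is absent."""
    while node is not None:
        if key < node.key:
            node = node.left
        elif key > node.key:
            node = node.right
        else:
            return node.value
    return None

def insert(node, key, value):
    """Return the root of the tree with (key, value) inserted or updated."""
    if node is None:
        return Node(key, value)
    if key < node.key:
        node.left = insert(node.left, key, value)
    elif key > node.key:
        node.right = insert(node.right, key, value)
    else:
        node.value = value
    return node

def delete(node, key):
    """Delete key, replacing a two-child node by its in-order successor."""
    if node is None:
        return None
    if key < node.key:
        node.left = delete(node.left, key)
    elif key > node.key:
        node.right = delete(node.right, key)
    elif node.left is None:
        return node.right
    elif node.right is None:
        return node.left
    else:
        succ = node.right                     # leftmost node of the right subtree
        while succ.left is not None:
            succ = succ.left
        node.key, node.value = succ.key, succ.value
        node.right = delete(node.right, succ.key)
    return node

def enlist(node):
    """In-order traversal, so the keys come out in sorted order."""
    return [] if node is None else \
        enlist(node.left) + [(node.key, node.value)] + enlist(node.right)
```

Because enlist visits left subtree, node, then right subtree, it doubles as a cheap check that insert and delete preserve the search-tree ordering.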
Your task is to write functions that manipulate binary search trees; you should include lookup, insert, delete, and enlist functions in your library. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Goldbach’s Conjecture
March 2, 2010
Christian Goldbach (1690-1764) was a Prussian mathematician and contemporary of Euler. One of the most famous unproven conjectures in number theory is known as Goldbach’s Conjecture, which states that every even number greater than two is the sum of two prime numbers; for example, 28 = 5 + 23.
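A direct search suffices for modest inputs: try each prime p up to n/2 and test whether n − p is also prime. Here is a sketch in Python (the function names are our own; the search assumes the conjecture holds, so it always finds a pair):

```python
def goldbach(n):
    """Return primes (p, q) with p + q = n and p <= q, for even n > 2,
    trying the smallest candidate p first."""
    def is_prime(m):
        if m < 2:
            return False
        f = 2
        while f * f <= m:       # simple trial division
            if m % f == 0:
                return False
            f += 1
        return True
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
```

Note that the pair need not be unique; 28 is also 11 + 17, but searching from the smallest p yields 5 + 23 as in the example.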
Your task is to write a function that finds the two primes that add to a given even number greater than two. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.