Master Mind, Part 2

November 20, 2009

In the previous exercise we wrote a Master Mind setter. In today’s exercise we will write a solver, using an algorithm given by Donald E. Knuth in his article “The Computer As Master Mind” in Volume 9, Number 1 (1976–1977) of the Journal of Recreational Mathematics.

The central concept of Knuth’s algorithm is the pool of potential solutions. At each step his algorithm chooses the probe that minimizes the maximum number of remaining possibilities over all possible responses of the setter; in the event of a tie, any pattern that achieves the minimum may be used, subject to the condition that a probe that is a member of the current pool is preferred to one that is not.

For instance, consider the pool of 1296 possible code words at the start of a puzzle. There are essentially five possible starting probes: 1 1 1 1, 1 1 1 2, 1 1 2 2, 1 1 2 3, and 1 2 3 4 (rotations are excluded, as are variants that substitute one symbol consistently for another). The remaining pool sizes after each of the five probes is applied to all of the 1296 possible code words are given in the table below:

          1111  1112  1122  1123  1234
    ....   625   256   256    81    16
    W...     0   308   256   276   152
    B...   500   317   256   182   108
    WW..     0    61    96   222   312
    BW..     0   156   208   230   252
    BB..   150   123   114   105    96
    WWW.     0     0    16    44   136
    BWW.     0    27    36    84   132
    BBW.     0    24    32    40    48
    BBB.    20    20    20    20    20
    WWWW     0     0     1     2     9
    BWWW     0     0     0     4     8
    BBWW     0     3     4     5     6
    BBBB     1     1     1     1     1
    max    625   317   256   276   312

The minimax solution is 256, achieved when the probe is 1 1 2 2, so that should always be the first probe. Then the solution is determined by making the minimax probe, reducing the pool by applying the result of the probe, and repeating on the reduced pool until the puzzle is solved.
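
The minimax step above can be sketched in a few lines of Python. This is an illustrative sketch, not the suggested solution: the names `worst_case` and `best_probe` are mine, codes are represented as tuples of digits 1–6, and the scoring helper follows the black/white rule from the previous exercise.

```python
from collections import Counter
from itertools import product

def score(code, probe):
    # Black hits: exact position matches; whites: symbol matches elsewhere.
    black = sum(c == p for c, p in zip(code, probe))
    matches = sum((Counter(code) & Counter(probe)).values())
    return black, matches - black

pool = list(product(range(1, 7), repeat=4))   # all 1296 candidate codes

def worst_case(probe, pool):
    # Partition the pool by response; the largest bucket is the worst case.
    buckets = Counter(score(code, probe) for code in pool)
    return max(buckets.values())

def best_probe(pool, all_codes):
    # Minimax probe; ties prefer a probe that is a member of the pool.
    members = set(pool)
    return min(all_codes,
               key=lambda p: (worst_case(p, pool), p not in members))

print(worst_case((1, 1, 2, 2), pool))  # 256, matching the table
```

A full solver would apply `best_probe`, filter the pool to the codes consistent with the response, and repeat until only the answer remains.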

Your task is to write a Master Mind solver based on the rules set out in the previous exercise and Knuth’s algorithm given above. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.


Master Mind, Part 1

November 17, 2009

Master Mind is a two-player game of deductive logic. One player, the setter, selects a four-symbol code, and the other player, the solver, tries to identify the code by trying test patterns, called probes, to which the setter responds with the number of black hits, indicating the number of positions where the code symbol and probe symbol are identical, and the number of white hits, where the probe has the right symbol in the wrong position. Setter and solver change roles after each puzzle is solved, for a pre-defined number of rounds, and the winner is the player who solves the puzzles with the fewest probes. With six symbols and four pegs there are 6^4 = 1296 possible codes. The physical game uses colored pegs for the symbols; we will use digits instead. Variants of the game increase the number of symbols and/or the length of the code; a variant with five pegs and eight colors is marketed under the name Super Master Mind. Here is a sample game:

1 1 2 2    B
1 3 4 4    W
3 5 2 6    BWW
1 4 6 2    BW
3 6 3 2    BBBB

The solver first probes with the pattern 1 1 2 2, which has a single black hit. The second probe, 1 3 4 4, receives a single white hit. The third probe, 3 5 2 6, earns a single black hit and two white hits. The fourth probe, 1 4 6 2, receives one black hit and one white hit. The fifth probe, 3 6 3 2, solves the puzzle with four black hits.
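
The scoring rule can be sketched as a small Python helper (a minimal sketch; the name `score` and the list-of-digits representation are my own, not part of the exercise):

```python
from collections import Counter

def score(code, probe):
    """Score a probe against a code, returning (black, white) hit counts."""
    # Black hits: positions where code and probe symbols are identical.
    black = sum(c == p for c, p in zip(code, probe))
    # Matches by symbol, ignoring position, counted with multiplicity.
    matches = sum((Counter(code) & Counter(probe)).values())
    # White hits: right symbol, wrong position.
    return black, matches - black

print(score([3, 6, 3, 2], [3, 5, 2, 6]))  # (1, 2): B and WW, as in the game
```

Counting symbol matches with multiset intersection avoids double-counting repeated symbols, which is the usual pitfall in scoring Master Mind.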

Your task is to write a program that performs the role of the setter, selecting a random code, prompting for probes, and scoring each probe until the human solver who is playing the game solves the puzzle. In the next exercise you will be asked to write a solver. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Two Linear Sorts

November 13, 2009

It is well known that any comparison-based sort, such as those we have been studying, requires Ω(n log n) comparisons in the worst case. But if all the keys are positive integers less than or equal to n, it is possible to sort in O(n) linear time by taking advantage of the structure of the keys themselves.

Count sort determines, for each input element x, the number of elements less than x, then places x directly in its position in the output; if there are k elements less than x, then x belongs in position k+1 (being careful to properly handle equal elements). Count sort requires two temporary arrays, one to hold the counts of the various keys, which act as indices into the output, and one to build up the output.
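
A minimal sketch of count sort in Python, under the series conventions (keys are positive integers no larger than n; the function name is my own):

```python
def count_sort(a, n):
    """Sort array a of n positive integers, each at most n."""
    count = [0] * (n + 1)          # count[k] = occurrences of key k
    for x in a:
        count[x] += 1
    for k in range(1, n + 1):      # prefix sums: count[k] = # of keys <= k
        count[k] += count[k - 1]
    out = [0] * n
    for x in reversed(a):          # reversed traversal keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(count_sort([4, 1, 3, 4, 2], 5))  # [1, 2, 3, 4, 4]
```

Decrementing the count before placing each element, while walking the input backwards, is what makes equal keys keep their relative order.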

Radix sort extends count sort by making multiple passes based on the positional digits of the integers being sorted: first do a count sort on the digits in the ones column of the integers, then a count sort on the digits in the tens column, then the hundreds column, the thousands column, and so on until the input is sorted, taking advantage of the fact that count sort is stable. Radix sort works on other kinds of keys besides integers; for instance, dates can be sorted by doing count sort on day, month and year in succession.
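
The digit-by-digit passes can be sketched as follows (an illustrative base-10 version; the function name is mine, and the inner loop is the stable counting sort described above specialized to digits 0–9):

```python
def radix_sort(a, n):
    """Sort n non-negative integers by stable counting sorts on successive digits."""
    exp = 1
    while any(x // exp > 0 for x in a):
        count = [0] * 10                   # counting sort on the `exp` column
        for x in a:
            count[x // exp % 10] += 1
        for d in range(1, 10):             # prefix sums give final positions
            count[d] += count[d - 1]
        out = [0] * n
        for x in reversed(a):              # reversed traversal preserves stability
            d = x // exp % 10
            count[d] -= 1
            out[count[d]] = x
        a = out
        exp *= 10
    return a

print(radix_sort([170, 45, 75, 90, 2, 802, 24, 66], 8))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Because each pass is stable, the ordering established by earlier (less significant) digits survives the later passes, which is exactly why the whole array ends up sorted.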

Your task is to write functions that sort arrays using count sort and radix sort. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Merge Sort

November 10, 2009

The last O(n log n) sorting algorithm that we shall consider in our current series of exercises is merge sort. If you have two sorted sequences, they can be merged into a single sorted sequence in time linear in their combined length by running through them in order, at each step taking the smaller of the heads of the two sequences. Merge sort works by recursively merging smaller sequences into larger ones, starting with trivially-sorted sequences of one element that are merged into two-element sequences, then merging pairs of two-element sequences into four-element sequences, and so on, until the entire sequence is sorted.
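
A minimal top-down sketch in Python, following the series conventions (array and length as parameters, comparisons with < only; the function names are my own):

```python
def merge(left, right):
    """Merge two sorted lists in time linear in their combined length."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:        # strict < keeps the merge stable
            out.append(right[j]); j += 1
        else:
            out.append(left[i]); i += 1
    return out + left[i:] + right[j:]  # one side is empty here

def merge_sort(a, n):
    if n < 2:                          # a one-element array is trivially sorted
        return a[:]
    mid = n // 2
    return merge(merge_sort(a[:mid], mid), merge_sort(a[mid:], n - mid))

print(merge_sort([5, 2, 4, 6, 1, 3], 6))  # [1, 2, 3, 4, 5, 6]
```

Taking the left element on ties means equal keys keep their input order, so this merge sort is stable.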

Your task is to write a function that sorts an array by the merge sort algorithm described above, according to the conventions of the prior exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Heap Sort

November 6, 2009

A priority queue is a data structure that permits insertion of a new element and retrieval of its minimum (or maximum) member; we have seen priority queues in two previous exercises. Priority queues permit sorting by inserting elements in random order and retrieving them in sorted order. Heapsort uses the heap data structure to maintain a priority queue. Since we sort into ascending order, we use a max-heap: a tree embedded in an array, with the property that the item at each index i of the array is greater than its children at indices 2i and 2i+1.

The key to understanding heapsort is a function we call heapify that gives the sub-array A[i .. n] the heap property if the sub-array A[i+1 .. n] already has it. Heapify starts at the ith element of the array and swaps it with its larger child, repeating the operation at that child, stopping at the end of the array or when the current element is greater than both of its children. Heapsort then works in two phases: the first phase forms an initial heap by calling heapify on each index of the array from n/2 down to 1; the second phase extracts the elements in order by repeatedly swapping the first element (the maximum) with the last, then re-heaping the sub-array that excludes the last element, continuing until only one element remains.
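
The two phases can be sketched in Python. Note that with the 0-based arrays used here, the children of index i sit at 2i+1 and 2i+2 rather than 2i and 2i+1 (the function names are my own):

```python
def heapify(a, i, n):
    """Sift a[i] down so a[i:n] has the max-heap property,
    assuming the sub-trees below i already have it."""
    while True:
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[largest] < a[l]:
            largest = l
        if r < n and a[largest] < a[r]:
            largest = r
        if largest == i:               # larger than both children: done
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest                    # repeat at the child we swapped with

def heap_sort(a, n):
    for i in range(n // 2 - 1, -1, -1):   # phase 1: build the initial heap
        heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # phase 2: extract maxima to the back
        a[0], a[end] = a[end], a[0]       # swap first (maximum) with last
        heapify(a, 0, end)                # re-heap, excluding the last element

nums = [4, 1, 3, 9, 7]
heap_sort(nums, len(nums))
print(nums)  # [1, 3, 4, 7, 9]
```

Each extracted maximum lands at the end of the shrinking sub-array, so the array fills in from the back in ascending order.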

Your task is to write a function that sorts an array using the heapsort algorithm, using the conventions of the prior exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.


Quick Sort

November 3, 2009

Quick sort was invented in 1960 by Sir Charles Antony Richard “Tony” Hoare, a British computer scientist and winner of the 1980 Turing Award, while he was a visiting student at Moscow State University. Though it has an annoying quadratic worst case, quick sort has expected O(n log n) performance and is significantly faster in practice than most other O(n log n) sorting algorithms, and it is possible to arrange that the worst case almost never happens for real-world data.

Quick sort works as follows: First, an element, called the pivot, is chosen from the array. Then the array is partitioned around the pivot by reordering the array so that all elements less than the pivot come before it and all items greater than the pivot come after it; this puts the pivot element in its final position in the array. Finally, quick sort is called recursively on the less-than and greater-than partitions; the base of the recursion is arrays of zero or one element.

There are many ways to choose the pivot element. Some algorithms choose the first element of the array, or the last; others choose the median of three elements (first, middle, last). Our preference is to choose an element at random, since that virtually eliminates the possibility of quadratic performance (unless there is collusion between the random-number generator and the data).

Likewise, there are many ways to perform the partitioning. One approach, due to Nick Lomuto, uses a single pointer to run through the sub-array; each element greater than the pivot is swapped to the end of the sub-array, whose bound is then decremented. Another approach uses two pointers that move toward each other from the ends of the sub-array, swapping pairs of out-of-place elements, and stopping when the pointers cross. Lomuto’s partition is simpler; the dual-pointer approach is quicker, but the details are hard to get right.
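
A sketch with a random pivot and a Lomuto-style partition (this version sweeps smaller elements forward, the commonest form of Lomuto's scheme; it takes recursion bounds rather than the two-parameter convention, and the names are my own):

```python
import random

def quick_sort(a, lo=0, hi=None):
    """In-place quick sort with a random pivot and Lomuto-style partition."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:                     # base case: zero or one element
        return
    # A random pivot makes quadratic behavior vanishingly unlikely.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]        # park the pivot at the end
    pivot, i = a[hi], lo
    for j in range(lo, hi):          # single pointer runs through the sub-array
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]  # grow the less-than region
            i += 1
    a[i], a[hi] = a[hi], a[i]        # pivot lands in its final position
    quick_sort(a, lo, i - 1)         # recur on the less-than partition
    quick_sort(a, i + 1, hi)         # and on the greater-than partition

nums = [3, 6, 2, 8, 1]
quick_sort(nums)
print(nums)  # [1, 2, 3, 6, 8]
```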

Your task is to write a function that sorts an array using the quick sort algorithm, using the conventions of a previous exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Two Sub-Quadratic Sorts

October 30, 2009

In the previous exercise we looked at three sorting algorithms that work in quadratic time. Today we look at two sorting algorithms, each a minor variant on one of the previous algorithms, that work much more quickly.

Comb sort is a variant of bubble sort popularized by Stephen Lacey and Richard Box in an article in the April 1991 edition of Byte Magazine. The basic idea is to quickly eliminate turtles, small values near the end of the array, since they greatly hamper the speed of the sort. In bubble sort, the elements being compared are always adjacent; the gap between them is 1. In comb sort, the gap is initially the length of the array; a bubble pass is made comparing elements that distance apart, then the gap is reduced and another pass is made, and so on until the gap reaches 1, at which point the sort reduces to ordinary bubble sort. Since early passes with large gaps quickly move turtles toward the front of the array, later passes with smaller gaps have less work to do, and the sorting algorithm becomes relatively efficient. Most often, the gap is reduced by a factor of 1.3 at each pass, though other shrink factors are sometimes used.
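
A minimal sketch of comb sort in Python, with the usual 1.3 shrink factor (the function name is mine):

```python
def comb_sort(a, n):
    """Bubble sort with a shrinking gap; shrink factor 1.3 is the usual choice."""
    gap, swapped = n, True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))   # gap 1 degenerates to bubble sort
        swapped = False
        for i in range(n - gap):       # one bubble pass at the current gap
            if a[i + gap] < a[i]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True

nums = [8, 3, 5, 1, 9, 2]
comb_sort(nums, len(nums))
print(nums)  # [1, 2, 3, 5, 8, 9]
```

The loop terminates when a full pass at gap 1 makes no swaps, which is exactly bubble sort's stopping condition.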

In the same way that comb sort is a variant of bubble sort, shell sort, invented by Donald Shell in 1959, is a variant of insertion sort that attempts to eliminate large disorder in early stages so that later stages have less work to do. Shell sort performs multiple stages of insertion sort, using a diminishing sequence of gaps that eventually reaches 1; a popular gap sequence is …, 364, 121, 40, 13, 4, 1.
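
Shell sort can be sketched as gapped insertion sort over that diminishing sequence (generated here as 1, 4, 13, 40, 121, …; the function name is mine):

```python
def shell_sort(a, n):
    """Insertion sort over the diminishing gap sequence ..., 40, 13, 4, 1."""
    gaps = [1]
    while gaps[-1] * 3 + 1 < n:        # build 1, 4, 13, 40, 121, ...
        gaps.append(gaps[-1] * 3 + 1)
    for gap in reversed(gaps):         # largest gap first, ending at 1
        for i in range(gap, n):        # gapped insertion sort
            x, j = a[i], i
            while j >= gap and x < a[j - gap]:
                a[j] = a[j - gap]      # shift larger elements gap places right
                j -= gap
            a[j] = x

nums = [23, 4, 42, 8, 15, 16]
shell_sort(nums, len(nums))
print(nums)  # [4, 8, 15, 16, 23, 42]
```

The final pass with gap 1 is ordinary insertion sort, but by then the earlier passes have removed most of the disorder, so it has little left to do.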

Your task is to write functions that perform comb sort and shell sort, in the same manner as the previous exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Three Quadratic Sorts

October 27, 2009

Sorting is one of the most common computing tasks. In the days of large mainframes, sorting would often account for ten percent of a computer’s workload, and there were complicated procedures involving large free-standing tape machines for sorting more records than could fit in the computer’s memory; the programmer who could shave a few percentage points of time or core memory space off the standard system sort was a hero. Nowadays, most programmers simply call their local sort library, and never worry about how it works.

We are going to explore classical sorting algorithms in the next several exercises. The rules of the game: We will be sorting arrays of integers with elements stored in locations zero through n−1, where n is the number of elements in the array. We will always sort into ascending order, and will use <, never ≤, to compare array elements. All sorting functions will be called with two parameters, the name of the array and its length.

Today, we will look at three simple sorting algorithms. Bubble sort works by repeatedly stepping through the array to be sorted, comparing each pair of adjacent elements and interchanging them if they are in the wrong order, until the array is sorted. Selection sort works by repeatedly passing through the array, at each pass finding the minimum element of the array, interchanging it with the first element of the array, then repeating on the sub-array that excludes the first element of the array. Insertion sort works the same way that card players generally sort their hands; starting from an empty hand, they pick up a card, insert it into the correct position, then repeat with each new card until no cards remain.
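
The three algorithms can be sketched under the conventions above (array and length as parameters, ascending order, comparisons with < only; the function names are my own):

```python
def bubble_sort(a, n):
    for last in range(n - 1, 0, -1):
        for i in range(last):                    # compare adjacent pairs
            if a[i + 1] < a[i]:
                a[i], a[i + 1] = a[i + 1], a[i]

def selection_sort(a, n):
    for i in range(n - 1):
        m = min(range(i, n), key=a.__getitem__)  # index of minimum of a[i:]
        a[i], a[m] = a[m], a[i]                  # swap it to the front

def insertion_sort(a, n):
    for i in range(1, n):                        # pick up the next "card"
        x, j = a[i], i
        while j > 0 and x < a[j - 1]:            # shift larger cards right
            a[j] = a[j - 1]
            j -= 1
        a[j] = x                                 # insert into correct position

for sort in (bubble_sort, selection_sort, insertion_sort):
    nums = [5, 2, 9, 1, 7]
    sort(nums, len(nums))
    print(nums)  # [1, 2, 5, 7, 9] for each
```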

Your task is to write functions that sort an array using bubble sort, selection sort, and insertion sort; you should also write a test program that can be used for any of the sorting algorithms. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


Mr. S. and Mr. P.

October 23, 2009

John McCarthy, who discovered Lisp, attributes this puzzle to Hans Freudenthal:

We pick two numbers a and b, so that 99 ≥ a ≥ b ≥ 2. We tell Mr. P. the product a × b and Mr. S. the sum a + b. Then Mr. S. and Mr. P. engage in the following dialog:

Mr. P.: I don’t know the numbers.

Mr. S.: I knew you didn’t know. I don’t know either.

Mr. P.: Now I know the numbers.

Mr. S.: Now I know them too.

Find the numbers a and b.

Your task is to find the two numbers. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.


Shuffle

October 20, 2009

It is easy to shuffle an array by stepping through the array, swapping each element with a forward element (an element at an index greater than or equal to the current index) until the next-to-last element is reached. It is harder to shuffle a linked list, because lists don’t permit ready access to any element other than the first.
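
The array half of the task can be sketched as the classic Fisher-Yates shuffle (a minimal sketch; the function name is mine):

```python
import random

def shuffle(a):
    """Fisher-Yates: swap each element with a random forward element."""
    n = len(a)
    for i in range(n - 1):               # stop at the next-to-last element
        j = random.randint(i, n - 1)     # forward index, possibly j == i
        a[i], a[j] = a[j], a[i]
    return a

print(shuffle(list(range(10))))
```

Allowing j == i is essential: always forcing a swap with a strictly later element biases the result away from the uniform distribution over permutations.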

Your task is to write functions that shuffle an array and a linked list. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
