A Median-Of-Three Killer Sequence
November 8, 2016
In two previous exercises we’ve been working toward a variant of quicksort that has guaranteed O(n log n) performance; there is no quadratic worst case. Before we do that, however, it is instructive to look at the case where our optimized median-of-three version of quicksort fails. Consider this sequence, due to David Musser:
1 11 3 13 5 15 7 17 9 19 2 4 6 8 10 12 14 16 18 20
At the first partitioning, the pivot element will be the median of 1, 2 and 20, which is 2, and the only two elements that change will be 2 and 11, with the partition point after the 2, indicated by the vertical bar:
1 2 | 3 13 5 15 7 17 9 19 11 4 6 8 10 12 14 16 18 20
At the next step, the pivot element will be the median of 3, 4 and 20, which is 4, and again the partition will advance only by two:
1 2 3 4 | 5 15 7 17 9 19 11 13 6 8 10 12 14 16 18 20
And so on. Each partition contributes the least possible amount toward the solution, and the time complexity becomes quadratic.
Your task is to write a program that creates a “killer sequence” for the median-of-three partition, then compare the time required to sort it to the time required to sort a random permutation. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
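The pattern in the example generalizes directly: with k = n/2, the odd positions of the first half hold 1, 3, 5, …, the even positions hold k+1, k+3, …, and the second half holds 2, 4, …, n in order. Here is a minimal sketch in C; the function name, and the restriction to even n, are assumptions rather than anything the exercise specifies.

#include <stdio.h>

/* Fill a[0..n-1] with a permutation of 1..n following the pattern of
   the example sequence above; n is assumed to be even. */
void make_killer(int *a, int n)
{
    int k = n / 2;
    for (int i = 1; i <= k; i++) {
        a[i - 1] = (i % 2 == 1) ? i : k + i - 1;  /* first half: 1, k+1, 3, k+3, ... */
        a[k + i - 1] = 2 * i;                     /* second half: 2, 4, ..., n */
    }
}

int main(void)
{
    int a[20];
    make_killer(a, 20);
    for (int i = 0; i < 20; i++)
        printf("%d ", a[i]);  /* prints the 20-element sequence shown above */
    printf("\n");
    return 0;
}

For n = 20 this reproduces the example sequence exactly; larger even n give correspondingly longer killer inputs.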
I hope this is leading up to introsort, which has the interesting property that its worst-case performance is bounded by O(n log n), and yet its running time is not essentially fixed for a given input size, as mergesort’s is.
Something odd there – the “killer” sequence should result in quadratic performance & a massive slowdown rather than just a factor of 6.
I think the reason is that your partition functions don’t position the pivot element between the two partitions as the killer sequence expects.
Here’s another version. It’s based on Knuth’s description of Sedgewick’s modification of Dijkstra’s redevelopment of Hoare’s original, from “Structured Programming with go to Statements”, which has a very interesting discussion of partitioning, amongst much else. Hoare’s original paper is worth a read too – he points out, for example, that not moving the pivot to its final position makes proving termination harder, since we might end up with one empty partition and one the same size as the original array. I don’t think this can happen with the suggested algorithm, but it’s not obvious.
Note that the pivot (which is moved to the end of the array) and the first element (which is less than or equal to the pivot) act as sentinels for the first iteration through the loop – the median calculation sets this up nicely, so we still don’t need to check array bounds when moving the pointers.
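A sketch of that scheme in C, reconstructed from the description above (so an approximation, not necessarily the code this comment refers to); subarrays of fewer than three elements are assumed to be handled separately by the caller, e.g. with insertion sort.

static void swap(int *a, int i, int j)
{
    int t = a[i]; a[i] = a[j]; a[j] = t;
}

/* Partition a[lo..hi], assuming hi - lo >= 2.  The median of the first,
   middle and last elements becomes the pivot and is moved to a[hi];
   the three-way ordering leaves a[lo] <= pivot, so a[lo] and a[hi] act
   as sentinels and the inner scans need no bounds checks. */
static int partition(int *a, int lo, int hi)
{
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) swap(a, mid, lo);
    if (a[hi] < a[lo])  swap(a, hi, lo);
    if (a[hi] < a[mid]) swap(a, hi, mid);  /* now a[lo] <= a[mid] <= a[hi] */
    swap(a, mid, hi);                      /* move the pivot (median) to the end */
    int pivot = a[hi];
    int i = lo, j = hi;
    for (;;) {
        while (a[++i] < pivot) ;           /* a[hi] == pivot stops this scan */
        while (a[--j] > pivot) ;           /* a[lo] <= pivot stops this scan */
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, i, hi);                        /* pivot into its final position */
    return i;
}

Because the pivot is placed between the two partitions, both recursive calls can exclude it, which sidesteps the termination problem mentioned above.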
Here’s a solution in C99.
Based on the killer sequences in the problem, it seems that the problem assumes median-of-three is calculated from the first element, the middle element, and the last element of the array. My quicksort from the corresponding previous exercise calculated the median from the first three elements of the array. I’ve modified my earlier code to calculate medians in the way this problem assumes.
Compiler optimizations were disabled. I had to increase the stack size to prevent stack overflows, presumably because the quadratic cases drive the recursion depth toward O(n).
There are two sections to the output. The first shows example median-of-three killer sequences, generated by init_median_of_three_killer. The second section shows runtimes for sorting random arrays versus killer arrays. Each experiment was conducted with 10 separate sorts, and the time reported is the aggregate time for all 10 sorts. Rows correspond to various array sizes.
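A harness along these lines might look like the following; it is only a sketch, with the sort and initializers declared rather than defined (init_random is a hypothetical name, init_median_of_three_killer is taken from the description above).

#include <stdlib.h>
#include <time.h>

/* Assumed to be defined elsewhere in the solution. */
void quicksort(int *a, int lo, int hi);
void init_median_of_three_killer(int *a, int n);
void init_random(int *a, int n);  /* hypothetical name for the random filler */

/* Aggregate time, in seconds, of 10 sorts of arrays of size n; note
   that the per-run initialization is included in the measurement. */
double time_sorts(void (*init)(int *, int), int n)
{
    int *a = malloc((size_t)n * sizeof *a);
    clock_t start = clock();
    for (int run = 0; run < 10; run++) {
        init(a, n);
        quicksort(a, 0, n - 1);
    }
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    free(a);
    return seconds;
}

Calling time_sorts(init_random, n) and time_sorts(init_median_of_three_killer, n) for each size would then give the two columns of the output.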
A random array sorts faster than a killer array of the same size. However, sorting either type appears to be O(n^2): for both array types, doubling the array size roughly quadruples the time to sort. I understand why this is the case for killer arrays, but I’m not sure why the algorithm scales this way for random arrays. Do others experience similar behavior? Is there a reason to expect this? It seems to indicate a bug in my code, but I couldn’t find one (although I didn’t test thoroughly).
Output:
> “It seems to indicate a bug in my code, but I couldn’t find one (although I didn’t test thoroughly).”
I found the bug: the dimensions of my results array were swapped relative to how the code indexes it, so timings were being recorded in the wrong slots.
Before: double results[max_order+1][2];
After: double results[2][max_order+1];
The corrected time comparisons are below. As expected, the updated output no longer suggests that sorting random arrays is O(n^2), as my earlier, buggy results did.
[…] sorting algorithm that we have been working up to in three previous exercises is introspective sort, or introsort, invented by David Musser in 1997 for the C++ Standard Library. […]