## A Median-Of-Three Killer Sequence

### November 8, 2016

In two previous exercises we’ve been working toward a variant of quicksort that has guaranteed O(n log n) performance; there is no quadratic worst case. Before we do that, however, it is instructive to look at the case where our optimized median-of-three version of quicksort fails. Consider this sequence, due to David Musser:

1 11 3 13 5 15 7 17 9 19 2 4 6 8 10 12 14 16 18 20

At the first partitioning, the pivot element will be the median of 1, 2 and 20, which is 2, and the only two elements that change will be 2 and 11, with the partition point after the 2, indicated by the vertical bar:

1 2 | 3 13 5 15 7 17 9 19 11 4 6 8 10 12 14 16 18 20

At the next step, the pivot element will be the median of 3, 4 and 20, which is 4, and again the partition will advance only by two:

1 2 3 4 | 5 15 7 17 9 19 11 13 6 8 10 12 14 16 18 20

And so on. Each partition contributes the least possible amount toward the solution, and the time complexity becomes quadratic.

Your task is to write a program that creates a “killer sequence” for the median-of-three partition, then compare its time to the time required for sorting a random partition. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


### 5 Responses to “A Median-Of-Three Killer Sequence”

1. John Cowan said

I hope this is leading up to introsort, which has the interesting property that its performance is bounded above by O(n log n), and yet it is not deterministic like mergesort.

2. matthew said

Something odd there – the “killer” sequence should result in quadratic performance & a massive slowdown rather than just a factor of 6.

I think the reason is that your partition functions don’t position the pivot element between the two partitions as the killer sequence expects.

Here’s another version, based on Knuth’s description of Sedgewick’s modification of Dijkstra’s redevelopment of Hoare’s original, from “Structured programming with go to statements”, which has a very interesting discussion of partitioning, amongst much else. Hoare’s original paper is worth a read too – he points out, for example, that not moving the pivot to its final position makes proving termination harder, since we might end up with one empty partition and one the same size as the original array (I don’t think this can happen with the suggested algorithm, but it’s not obvious).

Note that the pivot (which is moved to the end of the array) and the first element (which is less than or equal to the pivot) act as sentinels for the first iteration through the loop – the median calculation sets this up nicely, so we still don’t need to check array bounds when moving the pointers.

```
#include <algorithm>

template <typename T>
int partition(T *a, int n) {
  int last = n-1, mid = n/2;
  // sort a[0], a[mid], a[last] so that a[0] <= a[last] <= a[mid];
  // the median ends up at a[last] and a[0] acts as a sentinel
  if (a[0] > a[last]) std::swap(a[0],a[last]);
  if (a[last] > a[mid]) std::swap(a[last],a[mid]);
  if (a[0] > a[last]) std::swap(a[0],a[last]);
  T pivot = a[last];
  int i = 0, j = last;
  while (true) {
    do i++; while (a[i] < pivot);
    do j--; while (a[j] > pivot);
    if (i >= j) break;
    std::swap(a[i],a[j]);
  }
  std::swap(a[i],a[last]);
  return i;
}

template <typename T>
void qsort(T *a, int n) {
  if (n > 1) {
    int p = partition(a,n);
    qsort(a,p);
    qsort(a+p+1,n-p-1);
  }
}
```
3. Daniel said

Here’s a solution in C99.

Based on the killer sequences in the problem, it seems that the problem assumes that the median-of-three is calculated from the first element, the middle element, and the last element of the array. My quicksort from the corresponding previous problem calculated the median from the first three elements of the array. I’ve modified my earlier code to calculate medians in the way assumed by this problem.

Compiler optimizations were disabled. I had to increase the stack size to prevent stack overflows.

There are two sections to the output. The first shows example median-of-three killer sequences, generated by init_median_of_three_killer. The second section shows runtimes for sorting random arrays versus killer arrays. Each experiment was conducted with 10 separate sorts, and the time reported is the aggregate time for all 10 sorts. Rows correspond to various array sizes.

A random array sorts faster than a killer array of the same size. However, sorting either type appears to be O(n^2): for both array types, doubling the array size roughly quadruples the time to sort. I understand why this is the case for killer arrays, but I’m not sure why the algorithm scales this way for random arrays. Do others experience similar behavior? Is there a reason to expect this? It seems to indicate a bug in my code, but I couldn’t find one (although I didn’t test thoroughly).

```
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void insertionsort(int* array, size_t n) {
  for (int i = 1; i < n; ++i) {
    for (int j = i; j > 0 && array[j-1] > array[j]; --j) {
      int tmp = array[j];
      array[j] = array[j-1];
      array[j-1] = tmp;
    }
  }
}

int partition(int* array, size_t n) {
  int pivot = array[0];
  // improvement 3: median-of-three
  if (n >= 3) {
    int sample[3];
    sample[0] = array[0];
    sample[1] = array[n/2];
    sample[2] = array[n-1];
    insertionsort(sample, 3);
    pivot = sample[1];
  }
  int i = -1;
  int j = n;
  while (1) {
    do ++i; while (array[i] < pivot);
    do --j; while (array[j] > pivot);
    if (i >= j) return j;
    // improvement 1: inline swap
    int tmp = array[i];
    array[i] = array[j];
    array[j] = tmp;
  }
}

void quicksort(int* array, size_t n) {
  if (n < 2) return;
  // improvement 2: early cutoff to insertion sort
  if (n < 22) {
    insertionsort(array, n);
    return;
  }
  int p = partition(array, n);
  quicksort(array, p + 1);
  quicksort(array + p + 1, n - p - 1);
}

void init_random_array(int* array, size_t n) {
  for (int i = 0; i < n; ++i) {
    array[i] = rand();
  }
}

void init_median_of_three_killer(int* array, size_t n) {
  // if n is odd, set the last element to n, and proceed
  // with n decremented by 1
  if (n % 2 != 0) {
    array[n-1] = n;
    --n;
  }
  size_t m = n/2;
  for (int i = 0; i < m; ++i) {
    // first half of array (even indices)
    if (i % 2 == 0) array[i] = i + 1;
    // first half of array (odd indices)
    else array[i] = m + i + (m % 2 != 0 ? 1 : 0);
    // second half of array
    array[m + i] = (i+1) * 2;
  }
}

void print_array(int* array, size_t n) {
  printf("[");
  for (int i = 0; i < n; ++i) {
    if (i > 0) printf(", ");
    printf("%d", array[i]);
  }
  printf("]");
}

int main(void) {
  // Part 1: "write a program that creates a 'killer sequence'
  //          for the median-of-three partition"
  printf("Median-of-Three Killer Sequences\n");
  for (int n = 1; n <= 20; ++n) {
    printf("%d: ", n);
    int array[n];
    init_median_of_three_killer(array, n);
    print_array(array, n);
    printf("\n");
  }
  printf("\n");

  // Part 2: "compare its time to the time required for sorting a
  //          random partition"

  srand(time(NULL));
  clock_t tic;
  clock_t toc;
  int max_order = 18;
  int* array = malloc((1<<max_order)*sizeof(int));
  int loops = 10;
  double elapsed;
  double results[max_order+1][2];

  // Random Array Sort Times
  for (int order = 0; order <= max_order; ++order) {
    int n = 1 << order;
    elapsed = 0;
    for (int i = 0; i < loops; ++i) {
      init_random_array(array, n);
      tic = clock();
      quicksort(array, n);
      toc = clock();
      elapsed += (double)(toc-tic) / CLOCKS_PER_SEC;
    }
    results[0][order] = elapsed;
  }

  // Killer Array Sort Times
  for (int order = 0; order <= max_order; ++order) {
    int n = 1 << order;
    elapsed = 0;
    for (int i = 0; i < loops; ++i) {
      init_median_of_three_killer(array, n);
      tic = clock();
      quicksort(array, n);
      toc = clock();
      elapsed += (double)(toc-tic) / CLOCKS_PER_SEC;
    }
    results[1][order] = elapsed;
  }

  printf("Random Array vs. Killer Array Sort Times\n");
  for (int order = 0; order <= max_order; ++order) {
    int n = 1 << order;
    double random_sort_time = results[0][order];
    double killer_sort_time = results[1][order];
    printf("%d: %0.3f, %0.3f\n", n, random_sort_time, killer_sort_time);
  }

  free(array);
  return 0;
}
```

Output:

```
Median-of-Three Killer Sequences
1: [1]
2: [1, 2]
3: [1, 2, 3]
4: [1, 3, 2, 4]
5: [1, 3, 2, 4, 5]
6: [1, 5, 3, 2, 4, 6]
7: [1, 5, 3, 2, 4, 6, 7]
8: [1, 5, 3, 7, 2, 4, 6, 8]
9: [1, 5, 3, 7, 2, 4, 6, 8, 9]
10: [1, 7, 3, 9, 5, 2, 4, 6, 8, 10]
11: [1, 7, 3, 9, 5, 2, 4, 6, 8, 10, 11]
12: [1, 7, 3, 9, 5, 11, 2, 4, 6, 8, 10, 12]
13: [1, 7, 3, 9, 5, 11, 2, 4, 6, 8, 10, 12, 13]
14: [1, 9, 3, 11, 5, 13, 7, 2, 4, 6, 8, 10, 12, 14]
15: [1, 9, 3, 11, 5, 13, 7, 2, 4, 6, 8, 10, 12, 14, 15]
16: [1, 9, 3, 11, 5, 13, 7, 15, 2, 4, 6, 8, 10, 12, 14, 16]
17: [1, 9, 3, 11, 5, 13, 7, 15, 2, 4, 6, 8, 10, 12, 14, 16, 17]
18: [1, 11, 3, 13, 5, 15, 7, 17, 9, 2, 4, 6, 8, 10, 12, 14, 16, 18]
19: [1, 11, 3, 13, 5, 15, 7, 17, 9, 2, 4, 6, 8, 10, 12, 14, 16, 18, 19]
20: [1, 11, 3, 13, 5, 15, 7, 17, 9, 19, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

Random Array vs. Killer Array Sort Times
1: 0.000, 0.000
2: 0.000, 0.000
4: 0.000, 0.000
8: 0.000, 0.000
16: 0.000, 0.000
32: 0.000, 0.000
64: 0.000, 0.000
128: 0.000, 0.000
256: 0.000, 0.000
512: 0.000, 0.000
1024: 0.000, 0.000
2048: 0.000, 0.015
4096: 0.000, 0.078
8192: 0.015, 0.282
16384: 0.078, 1.218
32768: 0.282, 4.985
65536: 1.218, 20.344
131072: 4.985, 84.890
262144: 20.344, 334.953
```
4. Daniel said

> “It seems to indicate a bug in my code, but I couldn’t find one (although I didn’t test thoroughly).”

I found the bug. My declaration for the results array had its dimensions reversed.
Before: double results[max_order+1][2];
After: double results[2][max_order+1];

The corrected time comparisons are below. As expected, the updated output no longer suggests that sorting random arrays is O(n^2), unlike my earlier, erroneous results.

```
Random Array vs. Killer Array Sort Times
1: 0.000, 0.000
2: 0.000, 0.000
4: 0.000, 0.000
8: 0.000, 0.000
16: 0.000, 0.000
32: 0.000, 0.000
64: 0.000, 0.000
128: 0.000, 0.000
256: 0.000, 0.000
512: 0.000, 0.016
1024: 0.000, 0.000
2048: 0.000, 0.016
4096: 0.000, 0.078
8192: 0.015, 0.297
16384: 0.016, 1.265
32768: 0.031, 4.953
65536: 0.078, 20.500
131072: 0.188, 87.032
262144: 0.390, 349.21
```
5. […] sorting algorithm that we have been working up to in three previous exercises is introspective sort, or introsort, invented by David Musser in 1997 for the C++ Standard Library. […]