Introspective Sort
November 11, 2016
We reuse most of the insertion sort and quicksort code from the previous exercises, including the partitioning algorithm. Introsort looks like this:
(define k 2)
(define (introsort lt? ary lo hi depth)
  (when (< cutoff (- hi lo))
    (if (zero? depth)
        (heapsort lt? ary lo hi)
        (call-with-values
          (lambda () (partition lt? ary lo hi))
          (lambda (p ary)
            (cond ((< (- p lo) (- hi p))
                    (introsort lt? ary lo (+ p 1) (- depth 1))
                    (introsort lt? ary (+ p 1) hi (- depth 1)))
                  (else
                    (introsort lt? ary (+ p 1) hi (- depth 1))
                    (introsort lt? ary lo (+ p 1) (- depth 1))))))))
  ary)
Then the complete sorting algorithm is this:
(define (sort lt? ary)
  (let* ((len (vector-length ary))
         (depth (* k (round (log len)))))
    (introsort lt? ary 0 len depth)
    (insert-sort lt? ary 0 len)))
Timing shows that introsort is faster than our fastest quicksort, because the bad partitions that sometimes occur with quicksort are handled by heapsort:
> (time (time-quicksort 1000000 10))
(time (time-quicksort 1000000 ...))
    23 collections
    5.413136451s elapsed cpu time, including 0.069669265s collecting
    5.477370882s elapsed real time, including 0.071198931s collecting
    194865504 bytes allocated, including 190784144 bytes reclaimed
The improvement is small, from 5.448 seconds in the previous exercise to 5.413 seconds, but in addition to the improved time, this program offers guaranteed O(n log n) time complexity, with no niggling worries about a quadratic blowup, and it is particularly elegant. Well done, David Musser!
You can run the program at http://ideone.com/N1J8W8, where you will see the complete code, including heapsort.
Is that a natural logarithm?
The log function in Scheme, which I used in my program, is a natural logarithm, to base e. Theoretically, the logarithm should be to base 2, since you are calculating the depth of recursion assuming a perfect split into two equal-size sub-arrays at each recursive call. In practice, you probably want to try many different values of k to find the optimum value for your circumstances; a value close to 1 means that you will be making many calls to heapsort, which is naturally slower than quicksort, but a value far from 1 means that you are continuing to make non-productive recursive calls rather than switching to heapsort.
Fair enough, though this means that with k=2 we are doing heapsort quite a lot, even with random input (so I’m surprised that introsort seems to be faster, though that might just be noise).
I went back and looked at Musser’s paper. He uses 2 * floor(log2 n), but suggests testing to find a value that works well in your environment. I’ve done a little experimenting, and intend to do more.
Here’s a solution in C99.
The program output is included at the bottom of this post. It shows runtimes for various scenarios. Each experiment was conducted with 10 separate sorts, and the time reported is the aggregate time for all 10 sorts. Rows correspond to various array sizes.
Column 1: array size
Column 2: Random array quicksort
Column 3: Random array heapsort
Column 4: Random array introsort
Column 5: Killer array quicksort
Column 6: Killer array heapsort
Column 7: Killer array introsort
The killer arrays were generated using the ‘Median-Of-Three Killer Sequence’ procedure from an earlier problem.
For all experiments, quicksort includes the optimizations from earlier problems: 1) inline swap, 2) early cutoff to insertion sort, and 3) median-of-three pivot selection. The same optimizations were also used for introsort.
I increased the stack size to prevent stack overflows. Compiler optimizations were disabled.
This updated main function includes column numbers in the output.
Output:
@Daniel: good stuff, but you want to be calculating the depth limit k*log(n) at the start and not at each recursive call.
@matthew, Thanks!
Here’s the updated code along with updated output.
Output: