## Hoare’s Partition

### October 28, 2016

Here is our version of Hoare’s partition:

```scheme
(define (partition lt? ary lo hi)
  (let ((x (vector-ref ary lo)) ; pivot: first element of the sub-array
        (i (- lo 1))
        (j hi))
    (let forever ()
      (do-until (set! j (- j 1)) (le? lt? (vector-ref ary j) x))
      (do-until (set! i (+ i 1)) (le? lt? x (vector-ref ary i)))
      (when (< i j)
        (swap! ary i j)
        (forever)))
    (values j ary)))
```

Here, *lo* is the index of the first element of the sub-array and *hi* is the index of the element just beyond the end of the sub-array; we find that more convenient than making *hi* point to the last element of the sub-array, because then we can say `(quicksort lt? ary 0 (vector-length ary))` to sort the entire array. We always partition around the first element of the sub-array. Here is the corresponding quicksort:

```scheme
(define (quicksort lt? ary lo hi)
  (if (< (+ lo 1) hi)
      (call-with-values
        (lambda () (partition lt? ary lo hi))
        (lambda (p ary)
          (cond ((< (- p lo) (- hi p)) ; left part is smaller
                 (quicksort lt? ary lo (+ p 1))
                 (quicksort lt? ary (+ p 1) hi))
                (else ; right part is smaller
                 (quicksort lt? ary (+ p 1) hi)
                 (quicksort lt? ary lo (+ p 1))))))
      ary))
```

We always recur on the smaller partition first; the call on the larger partition then comes last, and since Scheme guarantees proper tail calls it consumes no stack, so the stack depth is bounded by O(log n) instead of blowing out to O(n) on unbalanced inputs.
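To make the stack bound concrete, here is a rough Python rendering of the same scheme (the names `hoare_partition` and `quicksort` are ours, not part of the original program), with the recursion on the larger part written as an explicit loop:

```python
def hoare_partition(lt, ary, lo, hi):
    # Hoare partition: pivot is the first element; hi is one past the end.
    # Returns j such that every element of ary[lo..j] is <= every
    # element of ary[j+1..hi-1] under the strict order lt.
    x = ary[lo]
    i, j = lo - 1, hi
    while True:
        j -= 1
        while lt(x, ary[j]):   # scan down until ary[j] <= x
            j -= 1
        i += 1
        while lt(ary[i], x):   # scan up until x <= ary[i]
            i += 1
        if i < j:
            ary[i], ary[j] = ary[j], ary[i]
        else:
            return j

def quicksort(lt, ary, lo, hi):
    # Recurse on the smaller part and loop on the larger part: each
    # recursive call handles at most half the elements, so the depth
    # never exceeds log2(n).
    while hi - lo > 1:
        p = hoare_partition(lt, ary, lo, hi)
        if p + 1 - lo < hi - (p + 1):       # left part is smaller
            quicksort(lt, ary, lo, p + 1)
            lo = p + 1                      # continue with the right part
        else:                               # right part is smaller
            quicksort(lt, ary, p + 1, hi)
            hi = p + 1
    return ary
```

In the Scheme version the same effect falls out of proper tail calls, because the recursion on the larger partition is in tail position.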

By the way, we use a macro to provide the `do-until` control flow. Scheme, unlike Algol-ish languages, provides few looping constructs. The Standard Prelude provides a `while` macro, which tests at the top of the loop, but no `do-while` or `do-until` macro, which test at the bottom. Here are our versions of `do-while` and `do-until`:

```scheme
(define-syntax do-while
  (syntax-rules () ; no literals needed; loop is introduced hygienically
    ((do-while body ... condition)
     (let loop ()
       body ...
       (when condition (loop))))))
```

```scheme
(define-syntax do-until
  (syntax-rules ()
    ((do-until body ... condition)
     (let loop ()
       body ...
       (unless condition (loop))))))
```
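For illustration, the behaviour of `do-until` can be sketched in Python, which also lacks a bottom-tested loop (`do_until` here is a hypothetical helper, not part of the Standard Prelude): the body runs at least once, and the loop repeats until the condition becomes true.

```python
def do_until(body, condition):
    # run the body at least once, then repeat until condition() is true
    body()
    while not condition():
        body()

# example: count down from 10 to the nearest multiple of 7
counter = {"i": 10}
def step():
    counter["i"] -= 1
do_until(step, lambda: counter["i"] % 7 == 0)
# counter["i"] is now 7; the body ran three times
```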

It is one of the strengths of Scheme that we can create new control-flow macros as needed. We also need to convert the `lt?` predicate to a `le?` predicate:

```scheme
(define (le? lt? x y) (not (lt? y x))) ; x <= y iff not y < x
```

We write the program using `le?` rather than `lt?` because that’s the predicate Hoare used, and it’s easier not to change things when translating pseudocode to a working program.
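The same trick in Python, for illustration (a sketch, not from the original program): any strict total order `lt` induces a corresponding `le` by negating the reversed comparison.

```python
def le(lt, x, y):
    # x <= y under the order induced by the strict comparison lt:
    # x <= y exactly when it is not the case that y < x
    return not lt(y, x)
```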

We test the program with a shuffle function that randomly re-orders the elements of an array: we shuffle and sort an initially-ordered array a bunch of times and check that each result is sorted. Here are `sorted?` and `shuffle`:

```scheme
(define (sorted? lt? ary) ; strictly ascending, which suits our distinct test data
  (let ((len (vector-length ary)))
    (let loop ((i (- len 1)))
      (if (<= i 0) ; #t for empty and single-element arrays
          #t
          (if (lt? (vector-ref ary (- i 1)) (vector-ref ary i))
              (loop (- i 1))
              #f)))))
```

```scheme
(define (shuffle ary) ; Fisher-Yates shuffle
  (do ((n (vector-length ary) (- n 1)))
      ((zero? n) ary)
    (let* ((r (random n))
           (t (vector-ref ary r)))
      (vector-set! ary r (vector-ref ary (- n 1)))
      (vector-set! ary (- n 1) t))))
```

And here’s the test code, which performs *n* tests on arrays of *k* elements:

```scheme
(define (test-quicksort k n)
  (let ((ary (list->vector (range k))))
    (do ((n n (- n 1))) ((zero? n))
      (quicksort < ary 0 k)
      (unless (sorted? < ary)
        (display "ERROR ") (display ary) (newline))
      (shuffle ary))))
```

The test code is purposely evil: the very first test sorts an already-ordered array, which is the worst case for a quicksort that always takes the first element as its pivot. Our quicksort is quite fast: `(test-quicksort 10000 100)` finishes in just over a second, and more than half of that time is spent on the worst-case initially-ordered array. You can run the program at http://ideone.com/ttjYgA, where you will also see `swap!` and `range`.

There are several improvements we can make to this code, including switching to insertion sort when the sub-arrays become small and improving the choice of the pivot to make worst-case performance less likely. We’ll look at those improvements, and others, in the next exercise.

Found this code on Stackexchange. I found it to be quite fast. Source code is in D…

I did a little speed comparison with some implementations of Quicksort. I took the code which I’d posted earlier on PP, and also three implementations from Rosetta Code (https://www.rosettacode.org/wiki/Sorting_algorithms/Quicksort#D). Here’s the code in D:

Compiled with `dmd -O -release -inline -of"qsort" "qsort.d"` it gives these times…

Compiled with `ldc2 -O3 -enable-inlining -of"qsort" "qsort.d"` it gives these times…

Maybe there’s an even faster implementation….

Here’s a solution in C. I had some bugs in my initial implementation that I’ve fixed, but I’m still not feeling confident that this works properly, as I haven’t done much testing. In practice, I’ve used Lomuto partitioning when implementing quicksort. IIRC, I implemented quicksort since I needed to track where each item in an unsorted array was placed in the sorted array (this also could have been figured out *after* sorting, but that would have been O(n^2), so I tracked it at sort time).

Output:

> “IIRC, I implemented quicksort since I needed to track where each item in an unsorted array was placed in the sorted array (this also could have been figured out *after* sorting, but that would have been O(n^2), so I tracked it at sort time).”

I suppose that using a hash table could have worked as well for what I was trying to do (create a mapping of item to index), but I don’t recall the details, and may not have had hash tables available or even a standard library sorting algorithm. I was using VBA (Visual Basic for Applications) to write a Microsoft Office plugin. It had to run on both Windows and Mac, so some of the Windows-only extensions weren’t available.
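The mapping described here (each item’s position in the sorted array) can also be built from a single ordinary sort over the indices, avoiding the O(n^2) lookups; a rough Python sketch with hypothetical names:

```python
def sorted_positions(items):
    # for each index i of the unsorted array, report where items[i]
    # lands in the sorted array -- one sort, O(n log n) overall
    order = sorted(range(len(items)), key=lambda i: items[i])
    pos = [0] * len(items)
    for rank, i in enumerate(order):
        pos[i] = rank
    return pos
```

For example, `sorted_positions([30, 10, 20])` gives `[2, 0, 1]`; ties get distinct positions because Python’s sort is stable.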
