## Priority Queues

### May 5, 2009

A priority queue is a data structure in which items arrive randomly and depart according to an ordering predicate. It is related to the normal queue, in which items depart in the order they arrive (first-in, first-out), and the stack, in which items depart in the opposite of the order in which they arrive (last-in, first-out). The operations on priority queues include insert, which adds a new item to the queue; find-first, which returns the first item in the queue; delete-first, which returns the queue with its first item removed; and merge, which merges two queues. Priority queues are used in simulations, where keys correspond to “event times” that must be processed in order; in job scheduling for computer systems, where more-important jobs must be performed before less-important jobs; and in many other applications.

There are many ways to implement priority queues. An unordered list makes it easy to insert new items, but each time an item is extracted the entire list must be scanned. An ordered list makes extraction quick but requires a scan of half the list, on average, each time an item is inserted. Binary trees give a total ordering of all the items in a priority queue, but we only need to be able to identify the first item, so they do more work than we need. We will implement priority queues using leftist heaps.
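For comparison, the ordered-list approach can be sketched in a few lines of Haskell (the language of the solution in the comments below); the names here are illustrative, not standard. Insertion scans for the correct position, so it is O(n), while extraction is O(1):

```haskell
-- A simple ordered-list priority queue, for comparison only.
-- The predicate p decides which of two items comes first.
type ListPQ a = [a]

-- insert scans the list for the correct position: O(n)
lpqInsert :: (a -> a -> Bool) -> a -> ListPQ a -> ListPQ a
lpqInsert _ x []     = [x]
lpqInsert p x (y:ys) = if p x y then x : y : ys else y : lpqInsert p x ys

-- extraction is immediate: O(1)
lpqFindFirst :: ListPQ a -> a
lpqFindFirst = head

lpqDeleteFirst :: ListPQ a -> ListPQ a
lpqDeleteFirst = tail

-- merge is just the merge step of merge sort
lpqMerge :: (a -> a -> Bool) -> ListPQ a -> ListPQ a -> ListPQ a
lpqMerge _ [] ys = ys
lpqMerge _ xs [] = xs
lpqMerge p (x:xs) (y:ys)
  | p y x     = y : lpqMerge p (x:xs) ys
  | otherwise = x : lpqMerge p xs (y:ys)
```

The leftist heap below makes all four operations logarithmic (or better) instead of trading O(n) work in one operation for O(1) in another.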

A heap is a binary tree in which each node precedes its two children in a total ordering; the ordering predicate may be less-than or greater-than, as appropriate for the particular heap. A leftist heap satisfies the additional criterion that the rank of each left child is greater than or equal to the rank of its right sibling, where the rank of a node is the length of its right spine. As a result, the right spine of any node is always the shortest path to an empty node. The name leftist heap derives from the fact that the left subtree is usually taller than the right subtree, so a drawing of a leftist heap tends to “lean” left.
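To make the definitions concrete, here is a small sketch (the `Heap` type and names are assumptions chosen to resemble the Haskell solution in the comments below) that caches each node's rank and checks the leftist invariant on a hand-built min-heap:

```haskell
-- Each node caches its rank: the length of its right spine.
data Heap a = Empty | Node Int a (Heap a) (Heap a)

rank :: Heap a -> Int
rank Empty          = 0
rank (Node r _ _ _) = r

-- The leftist invariant: at every node, rank of the left child
-- is greater than or equal to the rank of the right child.
leftist :: Heap a -> Bool
leftist Empty          = True
leftist (Node _ _ l r) = rank l >= rank r && leftist l && leftist r

-- Example min-heap: 1 at the root, a taller subtree on the left.
-- Ranks: the leaves have rank 1, the root has rank 2.
example :: Heap Int
example = Node 2 1 (Node 1 2 (Node 1 4 Empty Empty) Empty)
                   (Node 1 3 Empty Empty)
```

Here `leftist example` is `True` and `rank example` is `2`: the root's right spine passes through the node holding 3 before reaching an empty node.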

The fundamental operation on leftist heaps is the merge of two leftist heaps. This is accomplished by merging their right spines in the same manner as merging two sorted lists; this preserves the heap-order property. Then the children of the nodes along that new path are swapped as necessary to preserve the leftist property.

Given merge, the remaining operations are trivial. Insert builds a singleton priority queue, then merges it with the existing priority queue. Find-first simply returns the item at the root of the tree. Delete-first merges the two children of the root.
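This reduction to merge can be sketched in Haskell as follows (a self-contained sketch: the names `node`, `merge`, `insert`, `findFirst`, and `deleteFirst` are illustrative, and the ordering is passed in as a predicate):

```haskell
data Heap a = Empty | Node Int a (Heap a) (Heap a)

rank :: Heap a -> Int
rank Empty          = 0
rank (Node r _ _ _) = r

-- Smart constructor: swap the children if necessary so the
-- higher-rank child sits on the left, and cache the new rank.
node :: a -> Heap a -> Heap a -> Heap a
node x l r = if rank l < rank r then node x r l
                                else Node (1 + rank r) x l r

-- Merge along the right spines, rebuilding with node.
merge :: (a -> a -> Bool) -> Heap a -> Heap a -> Heap a
merge _ Empty h = h
merge _ h Empty = h
merge p l@(Node _ x _ _) r@(Node _ y lc rc) =
    if p x y then merge p r l else node y lc (merge p l rc)

-- The remaining operations are one-liners in terms of merge.
insert :: (a -> a -> Bool) -> a -> Heap a -> Heap a
insert p x = merge p (node x Empty Empty)

findFirst :: Heap a -> Maybe a
findFirst Empty          = Nothing
findFirst (Node _ x _ _) = Just x

deleteFirst :: (a -> a -> Bool) -> Heap a -> Heap a
deleteFirst _ Empty          = Empty
deleteFirst p (Node _ _ l r) = merge p l r
```

For example, folding `insert (<)` over `[3, 1, 2]` builds a min-heap whose `findFirst` is `Just 1`, and after one `deleteFirst` it is `Just 2`.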

Leftist heaps were invented by Clark Crane in 1972 and popularized by Donald Knuth in 1973.

Your task is to implement the priority queue data structure using leftist heaps. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.



My Haskell solution (see http://bonsaicode.wordpress.com/2009/05/05/programming-praxis-priority-queues/ for a version with comments):

```haskell
data PQueue a = Node Int a (PQueue a) (PQueue a) | Empty

-- the rank of a node is the length of its right spine
rank :: PQueue a -> Int
rank Empty          = 0
rank (Node r _ _ _) = r

-- smart constructor: swap the children if necessary so that the
-- higher-rank child ends up on the left
node :: a -> PQueue a -> PQueue a -> PQueue a
node i l r = if rank l < rank r then node i r l else Node (1 + rank r) i l r

-- merge along the right spines, rebuilding with the smart constructor
merge :: (a -> a -> Bool) -> PQueue a -> PQueue a -> PQueue a
merge _ Empty q = q
merge _ q Empty = q
merge p l@(Node _ il _ _) r@(Node _ ir lc rc) =
    if p il ir then merge p r l else node ir lc (merge p l rc)

insert :: (a -> a -> Bool) -> a -> PQueue a -> PQueue a
insert p i = merge p (node i Empty Empty)

fromList :: (a -> a -> Bool) -> [a] -> PQueue a
fromList p = foldr (insert p) Empty

toList :: (a -> a -> Bool) -> PQueue a -> [a]
toList _ Empty          = []
toList p (Node _ i l r) = i : toList p (merge p l r)

pqSort :: (a -> a -> Bool) -> [a] -> [a]
pqSort p = toList p . fromList p

main :: IO ()
main = print $ pqSort (<) [3, 7, 8, 1, 2, 9, 6, 4, 5]
```

A C solution, dealing with strings:

…and a Python variation on the same theme:

I struggled with the Python version quite a bit, on a conceptual level: couldn’t get my head around what objects to define. I’d start writing the tree and its methods, and realise I needed the methods to sometimes work on the tree, and sometimes on the nodes, which were distinct objects.

In the end, I had an epiphany when I read the ‘OOP style vs. recursive style’ paragraph on this page: http://cslibrary.stanford.edu/110/BinaryTrees.html

I did find this approach a bit artificial though, and somehow find the C way of dealing with self-referencing data structures more natural and elegant. That said, re-reading my C code after I’d worked on the Python version, I find I’d do a number of things differently now. I’ve often read that learning other languages will make one a better programmer, but that’s the first time I’ve experienced it first-hand in such an obvious way.