## Priority Queues

### May 5, 2009

A priority queue is a data structure in which items arrive randomly and depart according to an ordering predicate. It is similar to a normal queue, in which items depart in the order they arrive (first-in, first-out), and to a stack, in which items depart in the opposite of the order in which they arrive (last-in, first-out). The operations on priority queues include insert, to add a new item to the queue; find-first, to return the first item in the queue; delete-first, to return the remaining items after the first; and merge, to merge two queues. Priority queues are used in simulations, where keys correspond to “event times” that must be processed in order; in job scheduling for computer systems, where more-important jobs must be performed before less-important jobs; and in many other applications.

There are many ways to implement priority queues. An unordered list makes it easy to insert new items, but each time an item is extracted the entire list must be scanned. An ordered list makes extraction quick but requires a scan of half the list, on average, each time an item is inserted. Binary trees give a total ordering of all the items in a priority queue, but we only need to be able to identify the first item, so they do more work than we need. We will implement priority queues using leftist heaps.

A heap is a binary tree in which each node precedes its two children in a total ordering; the ordering predicate may be less-than or greater-than, as appropriate for the particular heap. A leftist heap satisfies the additional criterion that the rank of each left child is greater than or equal to the rank of its right sibling, where the rank of a node is the length of its right spine. As a result, the right spine of any node is always the shortest path to an empty node. The name leftist heap derives from the fact that the left subtree is usually taller than the right subtree, so a drawing of a leftist heap tends to “lean” left.
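As a sketch of these definitions (the `Node` class and its field names here are my own illustration, not part of the exercise), the rank and the leftist invariant can be expressed as:

```python
class Node:
    """A leftist-heap node; None represents the empty heap."""
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left
        self.right = right
        # rank = length of the right spine = 1 + rank of the right child
        self.rank = 1 + (right.rank if right is not None else 0)

def is_leftist(node):
    """At every node, the left child's rank must be >= the right child's."""
    if node is None:
        return True
    left_rank = node.left.rank if node.left is not None else 0
    right_rank = node.right.rank if node.right is not None else 0
    return (left_rank >= right_rank
            and is_leftist(node.left) and is_leftist(node.right))
```

Under this convention a childless node has rank 1, and a node whose only child hangs to the right violates the invariant.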

The fundamental operation on leftist heaps is the merge of two leftist heaps. This is accomplished by merging their right spines in the same manner as merging two sorted lists; this preserves the heap-order property. Then the children of the nodes along that new path are swapped as necessary to preserve the leftist property.

Given merge, the remaining operations are trivial. Insert builds a singleton priority queue, then merges it with the existing priority queue. Find-first simply returns the item at the root of the tree. Delete-first merges the two children of the root.
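Those four operations can be sketched in Python as follows; the names (`make`, `find_first`, and so on) and the min-heap ordering via `<` are assumptions of this sketch, with `None` standing for the empty heap:

```python
class Node:
    """A leftist-heap node; None represents the empty heap."""
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right
        self.rank = 1 + (right.rank if right is not None else 0)

def make(item, a, b):
    """Build a node, swapping children so the higher rank ends up on the left."""
    if (a.rank if a else 0) < (b.rank if b else 0):
        a, b = b, a
    return Node(item, a, b)

def merge(a, b):
    """Merge the right spines like two sorted lists, restoring leftism via make."""
    if a is None:
        return b
    if b is None:
        return a
    if b.item < a.item:          # keep the smaller root on top
        a, b = b, a
    return make(a.item, a.left, merge(a.right, b))

def insert(item, heap):
    """Merge a singleton priority queue with the existing one."""
    return merge(Node(item), heap)

def find_first(heap):
    return heap.item             # the first item is always at the root

def delete_first(heap):
    return merge(heap.left, heap.right)
```

Inserting every item of a list and then repeatedly taking `find_first` and `delete_first` yields the items in sorted order, which makes a convenient smoke test.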

Leftist heaps were invented by Clark Crane in 1972 and popularized by Donald Knuth in 1973.

Your task is to implement the priority queue data structure using leftist heaps. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.


### 5 Responses to “Priority Queues”

1. […] Praxis – Priority Queues By Remco Niemeijer Today’s Programming Praxis problem is about priority queues. Specifically, we have to implement one using a […]

2. Remco Niemeijer said

My Haskell solution (see http://bonsaicode.wordpress.com/2009/05/05/programming-praxis-priority-queues/ for a version with comments):

```
data PQueue a = Node Int a (PQueue a) (PQueue a) | Empty

rank :: PQueue a -> Int
rank Empty = 0
rank (Node r _ _ _) = r

node :: a -> PQueue a -> PQueue a -> PQueue a
node i l r = if rank l < rank r then node i r l else Node (1 + rank r) i l r

merge :: (a -> a -> Bool) -> PQueue a -> PQueue a -> PQueue a
merge _ Empty q = q
merge _ q Empty = q
merge p l@(Node _ il ll lr) r@(Node _ ir _ _) =
    if p ir il then merge p r l else node il ll (merge p lr r)

insert :: (a -> a -> Bool) -> a -> PQueue a -> PQueue a
insert p i = merge p (node i Empty Empty)

fromList :: (a -> a -> Bool) -> [a] -> PQueue a
fromList p = foldr (insert p) Empty

toList :: (a -> a -> Bool) -> PQueue a -> [a]
toList _ Empty = []
toList p (Node _ i l r) = i : toList p (merge p l r)

pqSort :: (a -> a -> Bool) -> [a] -> [a]
pqSort p = toList p . fromList p

main :: IO ()
main = print $ pqSort (<) [3, 7, 8, 1, 2, 9, 6, 4, 5]
```

3. Jebb said

A C solution, dealing with strings:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node {
    int rank;
    char *name;
    struct node *left;
    struct node *right;
};

void del_min(struct node **, char *);
struct node *insert_value(char *, struct node *);
struct node *merge(struct node *, struct node *);
void print_node(struct node *node);
int debug_print_tree(struct node *);
void swap(struct node **, struct node **);
struct node *make_node(char *);
void del_node(struct node *);

int main()
{
    struct node *root = NULL;
    char buffer[100], *cp;
    int c;

    cp = buffer;
    while ((c = getchar()) != EOF) {
        if ((c != '\n') && (cp - buffer < 98))
            *cp++ = c;
        else if (c == '\n') {
            *cp = '\0';
            root = insert_value(buffer, root);
            cp = buffer;
        }
    }
    printf("Sorted names list:\n");
    while (root != NULL) {
        del_min(&root, buffer);
        printf("%s\n", buffer);
    }
    return 0;
}

void del_min(struct node **root, char *first)
/* Copies the first (in alphabetical order) name in the tree into first,
 * and deletes it, merging the two sub-trees.
 * We need to pass the address of the tree root (struct node **),
 * as the tree will have a new root after running this function. */
{
    struct node *tmp_root;
    strcpy(first, (*root)->name);
    tmp_root = merge((*root)->left, (*root)->right);
    del_node(*root);
    (*root) = tmp_root;
}

struct node *insert_value(char *name, struct node *tree)
{
    struct node *newStudent, *newRoot;
    newStudent = make_node(name);
    newRoot = merge(newStudent, tree);
    return newRoot;
}

struct node *merge(struct node *treeA, struct node *treeB)
{
    if (treeA == NULL) return treeB;
    if (treeB == NULL) return treeA;
    if (strcmp(treeB->name, treeA->name) < 0)
        swap(&treeA, &treeB);
    treeA->right = merge(treeA->right, treeB);
    /* The right child is guaranteed not to be NULL, but
     * we don't know that the left child is not NULL!
     * The rank of the NULL node is taken to be -1, so it will be
     * swapped with a childless, non-NULL node */
    if (treeA->right->rank > (treeA->left ? treeA->left->rank : -1)) {
        swap(&(treeA->right), &(treeA->left));
    }
    if (treeA->right == NULL)
        treeA->rank = 0;
    else
        treeA->rank = treeA->right->rank + 1;
    return treeA;
}

void print_node(struct node *mynode)
{
    printf("Node - rank : %d ; name : %s\n", mynode->rank, mynode->name);
    if (mynode->left)
        printf("\tleft: %s\n", mynode->left->name);
    if (mynode->right)
        printf("\tright: %s\n", mynode->right->name);
}

int debug_print_tree(struct node *root)
/* Returns the number of nodes printed */
{
    if (root == NULL)
        return 0;
    print_node(root);
    printf("----------\n");
    return 1 + debug_print_tree(root->left) + debug_print_tree(root->right);
}

void swap(struct node **nodeA, struct node **nodeB)
{
    struct node *tmpnode;
    tmpnode = *nodeA;
    *nodeA = *nodeB;
    *nodeB = tmpnode;
}

struct node *make_node(char *new_student)
{
    struct node *tmp_node;
    tmp_node = (struct node *)malloc(sizeof(struct node));
    tmp_node->rank = 0;
    tmp_node->name = strdup(new_student);
    tmp_node->left = NULL;
    tmp_node->right = NULL;
    return tmp_node;
}

void del_node(struct node *oldnode)
{
    free(oldnode->name);
    free(oldnode);
}
```
4. Jebb said

…and a Python variation on the same theme:

```
#!/usr/bin/python2.7

class _lNode(object):
    def __init__(self, value):
        self.key = value
        self.rank = 0
        self.left = None
        self.right = None

def _merge(treeA, treeB):
    # This function isn't robust: it will produce circular references
    # when merging a tree with a part of itself, for example.
    # Hopefully safe for use with the methods in the lTree class, though.
    if treeA is None:
        return treeB
    if treeB is None:
        return treeA
    if treeB.key < treeA.key:
        treeB, treeA = treeA, treeB
    treeA.right = _merge(treeA.right, treeB)
    if (treeA.left.rank if treeA.left else -1) < treeA.right.rank:
        treeA.left, treeA.right = treeA.right, treeA.left
    if treeA.right:
        treeA.rank = treeA.right.rank + 1
    else:
        treeA.rank = 0
    return treeA

class lTree(object):
    def __init__(self):
        self.root = None

    def insert(self, value):
        tmpNode = _lNode(value)
        if self.root is None:
            self.root = tmpNode
        else:
            self.root = _merge(self.root, tmpNode)

    def del_min(self):
        if self.root is None:
            return None
        else:
            tmpVal = self.root.key
            self.root = _merge(self.root.left, self.root.right)
            return tmpVal

    def debug_print(self):
        # begin helper function
        def _h_debug_print(node):
            if node is None:
                return
            print 'Value: %s, rank: %d' % (node.key, node.rank)
            _h_debug_print(node.left)
            _h_debug_print(node.right)
        # end helper function
        if self.root is None:
            print 'Empty queue!'
        else:
            _h_debug_print(self.root)

if __name__ == "__main__":
    f = open('test', 'r')
    nameslist = [line.strip() for line in f]
    f.close()
    myTree = lTree()
    for n in nameslist:
        myTree.insert(n)
    print 'Sorted names list:'
    while myTree.root:
        print myTree.del_min()
```
5. Jebb said

I struggled with the Python version quite a bit, on a conceptual level: couldn’t get my head around what objects to define. I’d start writing the tree and its methods, and realise I needed the methods to sometimes work on the tree, and sometimes on the nodes, which were distinct objects.

In the end, I had an epiphany when I read the ‘OOP style vs. recursive style’ paragraph on this page: http://cslibrary.stanford.edu/110/BinaryTrees.html

I did find this approach a bit artificial though, and somehow find the C way of dealing with self-referencing data structures more natural and elegant. That said, re-reading my C code after I’d worked on the Python version, I find I’d do a number of things differently now. I’ve often read that learning other languages will make one a better programmer, but that’s the first time I’ve experienced it first-hand in such an obvious way.