## Binary Search Tree: In-Order Predecessor And Successor

### February 12, 2013

We have seen several flavors of binary search trees, including classic BSTs, treaps, red-black trees, and AVL trees. For all of them we provided a `lookup` function that returned a requested key/value pair. In today’s exercise we write functions that return the predecessor and successor of a requested key, allowing a program to walk the tree in order. There are two common variants of these functions, one using parent pointers and one without; our exercise will use the variant without parent pointers, which is generally more useful.

We give the basic logic for the `successor` function; the `predecessor` function is similar, but reversed. A recursive function descends the tree searching for the requested key, recording the current node as the possible successor every time it follows the left sub-tree. If it finds the key, its successor is the minimum element of the right sub-tree, found by chasing left sub-trees until reaching a null node. If the search reaches a null node and hence fails to find the key, the next largest key in the tree is the successor, found in the possible successor that was saved at each recursive call.
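As a rough sketch of that logic in Python (the `Node` class with `key`, `left`, and `right` fields is an assumed representation, not part of any of our earlier tree libraries):

```
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def successor(node, key, best=None):
    """Return the smallest key in the tree larger than key, or None."""
    if node is None:              # search failed: the saved candidate is the answer
        return best
    if key < node.key:            # descending left: this node is a possible successor
        return successor(node.left, key, node.key)
    if key > node.key:
        return successor(node.right, key, best)
    # found the key: its successor is the minimum of the right sub-tree, if any
    node = node.right
    if node is None:
        return best
    while node.left is not None:  # chase left sub-trees to the minimum
        node = node.left
    return node.key
```

Note that `best` is threaded through the recursion rather than stored in a parent pointer, which is what lets this variant work on plain trees.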

Your task is to write `predecessor` and `successor` functions for binary search trees. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


### 3 Responses to “Binary Search Tree: In-Order Predecessor And Successor”

1. jpverkamp said

Here’s a version in Racket: Predecessor and successor in a binary search tree

I went a slightly different direction than the solution here, with a shared function that returns the node containing the value we were searching for and the last left and right branches all at once (the ability to return multiple `values` is always nice). That made the actual `predecessor` and `successor` functions somewhat simpler:

```
; get the predecessor to a value
(define (predecessor tr < val)
  (define-values (l v r) (find-nodes tr < val))
  (cond
    [(tree-left? v)
     (maximum (tree-left v))]
    [(and (not (void? r)) (not (empty? r)))
     (tree-value r)]))
```

I also included code for using this to implement a sorting algorithm. Dump everything into a tree, find the `minimum`, then repeatedly call `successor` until you run out of elements. Am I incorrect in thinking that this is actually still `O(n log n)`, albeit with a higher constant than most? It sorts a million-element list only about 3x slower than a naive implementation of `quicksort` did, but insertion sort (known `O(n^2)`) didn’t even finish before I got bored.
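For reference, that sort can be sketched in Python like so (a minimal, self-contained sketch with its own hypothetical `Node`, `insert`, `minimum`, and `successor` helpers, not the Racket code linked above):

```
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:          # duplicates are dropped in this sketch
        root.right = insert(root.right, key)
    return root

def minimum(node):
    while node.left is not None:
        node = node.left
    return node.key

def successor(node, key, best=None):
    # best holds the key of the nearest ancestor we turned left at
    if node is None:
        return best
    if key < node.key:
        return successor(node.left, key, node.key)
    if key > node.key:
        return successor(node.right, key, best)
    if node.right is not None:    # found: take the minimum of the right sub-tree
        return minimum(node.right)
    return best

def tree_sort(items):
    # assumes a non-empty input
    root = None
    for x in items:
        root = insert(root, x)
    out, k = [], minimum(root)
    while k is not None:
        out.append(k)
        k = successor(root, k)
    return out

print(tree_sort([5, 3, 8, 1, 4]))  # [1, 3, 4, 5, 8]
```

Each `successor` call walks a root-to-leaf path, so it costs O(h); n calls give O(n·h), which is indeed O(n log n) when the tree stays balanced, with the extra constant coming from re-descending from the root on every call.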

2. I finally got around to this challenge. Here’s the code for successor in Python.

```
class Node:
    def __init__(self, val=None, l=None, r=None):
        self.l = l
        self.r = r
        self.val = val

def insert(node, val):
    if node is None:
        return
    elif node.val == val:
        return
    elif node.val > val:
        if node.l is None:
            node.l = Node(val)
        else:
            insert(node.l, val)
    else:
        if node.r is None:
            node.r = Node(val)
        else:
            insert(node.r, val)

def suc(node, val, ancestor=Node(None)):
    '''
    >>> mynode = Node(5)
    >>> for v in "6 12 8 3 2 5 3 4 11 7 1".split():
    ...     insert(mynode, int(v))
    >>> suc(mynode, 5)
    6
    >>> suc(mynode, 8)
    11
    >>> suc(mynode, 11)
    12
    >>> suc(mynode, 12)

    >>> suc(mynode, 2)
    3
    >>> suc(mynode, 9)
    11
    >>> suc(mynode, 15)

    '''
    if node is None:
        return ancestor.val
    if node.val == val:
        if node.r is None:
            return ancestor.val
        else:  # find the leftmost node of the right sub-tree
            node = node.r
            while node.l is not None:
                node = node.l
            return node.val
    elif node.val > val:
        return suc(node.l, val, node)
    else:
        return suc(node.r, val, ancestor)
```