Pessimal Algorithms and Simplexity Analysis

October 25, 2013

I’ve been reading Pessimal Algorithms and Simplexity Analysis by Andrei Broder and Jorge Stolfi. It’s fun. They present three programs: searching for an item in a sorted array, enumerating all the items in a connected graph, and sorting an array into ascending order. We’ll look at the sorting algorithm; you may want to look at the others yourself. Here is their description of it:

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

To get a firmer grasp of the multiply and surrender method, let us follow the step-by-step development of the slowsort algorithm. We can decompose the problem of sorting n numbers A1, A2, …, An in ascending order into (1) finding the maximum of those numbers, and (2) sorting the remaining ones. Subproblem (1) can be further decomposed into (1.1) find the maximum of the first ⌈n/2⌉ elements, (1.2) find the maximum of the remaining ⌊n/2⌋ elements, and (1.3) find the largest of those two maxima. Finally, subproblems (1.1) and (1.2) can be solved by sorting the specified elements and taking the last element in the result. We have thus multiplied the original problem into three slightly simpler ones (sort the first half, sort the second half, sort all elements but one), plus some overhead processing. We continue doing this recursively until the lists have at most one element each, at which point we are forced to surrender.

I didn’t copy the code from the article; if you go look at it, beware the bug! The time complexity of the algorithm is O(n^(log₂(n)/2)).
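The decomposition above translates almost line-for-line into code. Here is a sketch in Python following the paper’s description; the names and the in-place style are my own, not the paper’s:

```python
def slowsort(a, i=0, j=None):
    """Sort a[i..j] in place, as slowly as the recursion allows."""
    if j is None:
        j = len(a) - 1
    if i >= j:
        return                   # at most one element: surrender
    m = (i + j) // 2
    slowsort(a, i, m)            # (1.1) sort the first half
    slowsort(a, m + 1, j)        # (1.2) sort the second half
    if a[m] > a[j]:              # (1.3) larger of the two maxima goes last
        a[m], a[j] = a[j], a[m]
    slowsort(a, i, j - 1)        # (2) sort everything but the maximum

xs = [3, 1, 4, 1, 5, 9, 2, 6]
slowsort(xs)
print(xs)  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Note that each call spawns two half-size sorts plus one sort of size n−1, giving the recurrence T(n) = 2T(n/2) + T(n−1) + c, which is what makes the running time super-polynomial.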

Your task is to write the slowsort program; you may also want to write the research and bwfs programs mentioned in the article. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.


6 Responses to “Pessimal Algorithms and Simplexity Analysis”

  1. Paul said

    Slowsort in Python.

    def slowsort(L):
        # lists of length 0 or 1 are already sorted; the original
        # test `n == 1` recursed forever on an empty list
        if len(L) <= 1:
            return L
        mid = len(L) // 2
        # the maximum is the last element of each sorted half
        mx = max(slowsort(L[:mid])[-1], slowsort(L[mid:])[-1])
        L.remove(mx)
        return slowsort(L) + [mx]
    
  2. Slowsort implementation in Racket.

    #lang racket
    
    (define (slow-sort! v [i 0] [j (vector-length v)])
      (cond [(<= (- j i) 1) v]
            [else
             (define k (quotient (+ j i) 2))
             (slow-sort! v i k)
             (slow-sort! v k j)
             (define max-a (vector-ref v (- k 1)))
             (define max-b (vector-ref v (- j 1)))
             (vector-set! v (- k 1) (min max-a max-b))
             (vector-set! v (- j 1) (max max-a max-b))
             (slow-sort! v i (- j 1))]))
    
  3. Jussi Piitulainen said

    Non-in-place, and extracting minima instead of maxima. CoffeeScript. (Apparently there is no integer division operator in this language, whether flooring or another kind, but there are a number of “tricks” like x|0 to truncate x toward zero.)

    sort = (arr) ->
       [arr, res] = [arr[0...arr.length], []]
       while arr.length > 0
          i = min(arr)
          res.push(i)
          k = arr.indexOf(i)
          arr[k..k] = []
       res

    min = (arr) ->
       if arr.length is 1 then arr[0]
       else
          a = sort(arr[0...arr.length/2|0])[0]
          b = sort(arr[arr.length/2|0...arr.length])[0]
          if a < b then a else b

    console.log(sort([3,1,4,1,5,9,2,6]))

  4. […] find the index of a target x in a sorted array. We looked at a different algorithm from that paper in a previous exercise. I’ll leave it to you to fetch the paper and enjoy the sincerity of the authors’ […]

  5. Mike said

    In studying the slowsort algorithm, I realized that it is entirely too eager to move A[m] into place by swapping A[m] with A[j]. A better (i.e., worse) algorithm would swap A[m] with A[m+1], and repeat until A[m] is at the end of the list (like a bubble sort). Clearly the improved algorithm, which I call ‘slowersort’, is more pessimal.

    def slowersort(awry, i=None, j=None):
        if i is None:
            i = 0
        if j is None:
            j = len(awry) - 1
        if i < j:
            m = (i + j) // 2
            slowersort(awry, i, m)
            slowersort(awry, m + 1, j)
            if awry[m] > awry[j]:
                for k in range(m, j):
                    awry[k], awry[k + 1] = awry[k + 1], awry[k]
            slowersort(awry, i, j - 1)
            
    awry = list('hgfedcba')
    print(awry)
    slowersort(awry)
    print(awry)
    
  6. Daniel Forgues said

    In general, is there a pessimal (either in time or space) algorithm for any given problem, where there is provably no algorithm that is more reluctant (either in time or space)? Of course, without any cheating to waste resources (either in time or space). For example, calculating Fibonacci numbers recursively (without memoization): is this provably the slowest algorithm for Fibonacci numbers?

Leave a comment