## Second Largest Item

### June 2, 2017

Any solution based on sorting will require O(*n* log *n*) comparisons, which is too many. Any solution based on heaps will require O(*n* log *n*) comparisons to build the heap plus O(2 log *n*) comparisons to extract the second largest, which is better but still not good enough. Selection sort uses O(2*n* – 3) comparisons regardless of the initial order of the data. Our solution uses O(2*n* – 3) comparisons in the worst case, just as selection sort does, but only O(*n* + log *n*) in the expected case; I don’t know of a better algorithm:

```scheme
(define (second-max lt? xs)
  (if (or (null? xs) (null? (cdr xs)))
      (error 'second-max "not enough data")
      (begin
        ; ensure the first element is the larger of the first two
        (when (lt? (car xs) (cadr xs))
          (set! xs (cons (cadr xs) (cons (car xs) (cddr xs)))))
        (let loop ((first (car xs)) (second (cadr xs)) (xs (cddr xs)))
          (cond ((null? xs) second)
                ((lt? second (car xs))
                 (if (lt? first (car xs))
                     (loop (car xs) first (cdr xs))
                     (loop first (car xs) (cdr xs))))
                (else (loop first second (cdr xs))))))))
```

The outer loop makes *n* – 1 comparisons. The trick is to make the primary comparison on the *second* largest item found so far, so only when you know that you have a new second largest item do you have to compare to the largest item, which happens O(log *n*) times in the expected case. Here’s an example:

```scheme
> (second-max < '(4 1 9 2 3 8 5 7 6))
8
```

You can run the program at http://ideone.com/2bNnhJ.
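For readers without a Scheme at hand, here is a rough Python transcription of the algorithm above (a sketch of my own, not part of the original solution; the signature mirrors the Scheme):

```python
def second_max(lt, xs):
    """Return the second-largest item of xs under the ordering lt; needs len(xs) >= 2."""
    if len(xs) < 2:
        raise ValueError("not enough data")
    # Ensure `first` starts as the larger of the first two items.
    first, second = xs[0], xs[1]
    if lt(first, second):
        first, second = second, first
    for x in xs[2:]:
        # Primary comparison is against the *second* largest; only when x
        # beats it do we need a second comparison against the largest.
        if lt(second, x):
            if lt(first, x):
                first, second = x, first
            else:
                second = x
    return second
```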

Based on the QuickSelect algorithm, here’s some modified code from the RosettaCode example in D:

Output: 12

Can you check the result by changing the last number in the array to 13? Do you get 13 or 11 as the result?

You’re right, Ernie. My code works only for arrays with distinct items. My bad.

Same algorithm, different expression.

By the way, isn’t there something like a heap or a priority queue with a specified capacity, which simply drops values that don’t fit there any more? Then this algorithm essentially implements that data structure, with room for just two elements, in the two variables where it keeps the two best elements.
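A minimal sketch of such a capacity-limited structure in Python, built on `heapq` (the class and method names are my own, purely for illustration):

```python
import heapq

class BoundedPQ:
    """Min-heap that keeps only the `capacity` largest values seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []

    def push(self, x):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, x)
        elif x > self.heap[0]:
            heapq.heapreplace(self.heap, x)  # drop the smallest kept value

    def smallest_kept(self):
        return self.heap[0]
```

With capacity 2, after pushing every element, `smallest_kept()` is the second-largest item.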

@Jussi: Yes, there is a size-limited priority queue; we studied it in a previous exercise.

```
MCL> D ^SECLAR
Numbers in order:
11
21
29
42
56
58
67
69
73
93
Second largest number: 73
```

The above language is MUMPS.

Klong

Second largest is the second smallest if you order elements backwards. The C++ standard library can do that.

Here’s a Haskell version. It uses radix sort, so it requires no comparisons. (Does this make me a bad person?)

@kernelbob: Try this….

[sourcecode lang="cpp"]
#include <algorithm>   // std::nth_element
#include <array>
#include <functional>  // std::greater
#include <iostream>
#include <iterator>    // std::iterator_traits

template <typename RandomIterator>
RandomIterator second_largest(RandomIterator first, RandomIterator last)
{
    typedef
        typename std::iterator_traits<RandomIterator>::value_type
        value_type;
    std::nth_element(first, first + 1, last, std::greater<value_type>());
    return first + 1;
}

int main()
{
    std::array<int, 10> s = {5, 9, 4, 2, 8, 6, 1, 9, 0, 3};
    std::cout << *second_largest(s.begin(), s.end()) << std::endl;
    return 0;
}
[/sourcecode]

As you can see, I've changed one item (from 7 to 9), and the output will be 9 instead of 8. Just like my solution above, this one works only for arrays in which the largest item appears once.

Can't tell about the solutions of bookofgraham and Steve, though.

Using a heap and heapreplace in Python, the heap never grows larger than 2 elements.
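A sketch of what that might look like (the function name is mine; with a two-element min-heap, the heap's minimum is the second-largest value seen):

```python
import heapq

def second_largest(xs):
    """Second-largest item via a two-element min-heap; assumes len(xs) >= 2."""
    xs = list(xs)
    heap = xs[:2]
    heapq.heapify(heap)
    for x in xs[2:]:
        if x > heap[0]:
            heapq.heapreplace(heap, x)  # pop the min, push x; size stays 2
    return heap[0]
```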

@Milbrae, thanks.

I misread the problem statement.

@Milbrae:

In MUMPS, I put the numbers in as subscripts, so each can appear only once. In Klong, I believe my solution will only work if each number appears only once.

I have a slight problem with the discussion preceding your solution example. You state sorting will be O(n log n), using a heap will be O(n log n) + O(2 log n), and that the latter is more efficient, because O(n log n) > O(2 log n), I assume.

However, this is an inaccurate comparison, because sorting requires O(n log n) to build the structure but constant time (or perhaps O(n)) to retrieve the element, whereas the heap needs O(n log n) to build the structure and O(2 log n) for retrieval. The inequality then should really be O(n log n) + C < O(n log n) + O(2 log n), which simplifies to C < O(2 log n).

So for the one-time task of finding the second largest item, the sorting approach has constant retrieval time and would always be more efficient. Do you agree?

This Klong solution permits multiple numbers of the same value

or

@Ardnew: On reflection, the hash methods are O(n log k), which makes them competitive (within a constant factor) with the O(n) method of this exercise. That’s only true because k = 2.

We can maintain two maximum values during a loop: if the current value is larger than the first max, we set the second max to the old first max and then update the first max; otherwise, if the current value is larger than the second max alone, we update just the second max.
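That two-variable loop might be sketched in Python as follows (the name is mine; it assumes at least two items):

```python
def second_largest_tracked(xs):
    """Track the two largest values in two variables; assumes len(xs) >= 2."""
    # Seed the pair from the first two items, larger first.
    first, second = (xs[0], xs[1]) if xs[0] >= xs[1] else (xs[1], xs[0])
    for x in xs[2:]:
        if x > first:
            first, second = x, first  # new largest; old largest becomes second
        elif x > second:
            second = x                # new second largest
    return second
```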