The First N Primes

October 14, 2011

The Sieve of Eratosthenes computes all primes less than some given n. But sometimes you need the opposite: the first n primes, without knowing in advance how large the largest prime in the set will be. One method is to calculate the nth prime, using a prime-counting formula, then use the Sieve of Eratosthenes to generate the list. Or, instead of calculating the nth prime directly, you can over-estimate it using the bound Pn < n log (n log n), which holds for n ≥ 6, where log is the natural logarithm; sieve up to that bound, then trim the list to the first n primes.
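For instance, here is a minimal sketch of that second approach in Python; the function name is arbitrary, and the small-n shortcut is only there because the bound requires n ≥ 6:

    from math import log

    def first_primes_by_bound(n):
        # over-estimate the nth prime, sieve up to the bound, then trim to n primes
        if n < 6:                                 # the bound below requires n >= 6
            return [2, 3, 5, 7, 11][:n]
        limit = int(n * log(n * log(n))) + 1      # Pn < n log (n log n) for n >= 6
        sieve = bytearray([1]) * limit            # sieve[i] == 1 means i is still a candidate
        sieve[0] = sieve[1] = 0
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i*i::i] = bytearray(len(sieve[i*i::i]))  # mark multiples composite
        return [i for i in range(limit) if sieve[i]][:n]

    print(first_primes_by_bound(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]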

Another method is described by Melissa O’Neill in her article The Genuine Sieve of Eratosthenes. Her method sifts by each prime and makes exactly the same sequence of calculations as the regular Sieve of Eratosthenes, but instead of marking bits in a bit array it uses a priority queue to keep track of composites, with one item in the queue for each sifting prime. Rather than marking all the composite multiples of each sifting prime at once, as the bit-array sieve does, O’Neill’s method keeps in the priority queue the next composite of each sifting sequence, inserts a new sifting sequence into the queue as each new prime is encountered, and advances each sifting sequence to its next composite as its current composite is reached.

Thus, ignoring 2 and the even numbers, O’Neill’s method begins with i = 3 and an empty priority queue. Since 3 is not in the priority queue, it is prime, so it is output, the pair i×i / 2×i = 9/6 is added to the queue, and i is advanced to i + 2 = 5. Since 5 is not in the queue, it is prime, so it is output, the pair 25/10 is added to the queue, and i is advanced to 7. Since 7 is not in the queue, it is prime, so it is output, the pair 49/14 is added to the queue, and i is advanced to 9; at this point the queue is [9/6 25/10 49/14]. Now i = 9 is in the queue, so it is composite, and the pair 9/6 is removed from the priority queue and replaced by the pair 15/6, where 15 is the current composite 9 plus the skip value 6. And so on, until the desired number of primes has been generated. Note that in some cases a composite is a member of multiple sifting sequences, and each must be advanced; for instance, when i reaches 45, it is a member of the sifting sequences for both 3 and 5, so the priority queue will contain the pairs 45/6 and 45/10, which must be replaced by 51/6 and 55/10, respectively.
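Here is a minimal sketch of the walkthrough above, using a binary heap as the priority queue with one (composite, skip) pair per sifting prime; it only illustrates the mechanism and is not the suggested solution:

    import heapq

    def oneill_primes(n):
        # one (composite, skip) pair per sifting prime, e.g. 9/6, 25/10, 49/14
        primes, queue, i = [2], [], 3
        while len(primes) < n:
            if not queue or i < queue[0][0]:           # i is not in the queue: prime
                primes.append(i)
                heapq.heappush(queue, (i * i, 2 * i))  # add the pair i*i / 2*i
            else:                                      # i is composite
                while queue[0][0] == i:                # advance every sequence sitting at i
                    composite, skip = heapq.heappop(queue)
                    heapq.heappush(queue, (composite + skip, skip))
            i += 2                                     # odd candidates only
        return primes[:n]

    print(oneill_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]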

Your task is to implement O’Neill’s algorithm to compute the first n primes. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


15 Responses to “The First N Primes”

  1. Mike said

    Python 3 version. Uses heapq from the standard library for the priority queue.

    I began odd_number at 5 to eliminate a test for the case when ‘next_composite’ isn’t set yet.

    from heapq import *
    from itertools import count, islice
    
    
    def oneill():
        yield 2
        yield 3
    
        queue = []
        next_composite, skip = 9, 6
    
        for odd_number in count(5, 2):
            if odd_number < next_composite:
                yield odd_number
                heappush(queue, (odd_number**2, 2*odd_number))
    
            else:
                while odd_number == next_composite:
                    next_composite, skip = heappushpop(queue, (next_composite+skip, skip))
    
    
    def first_n_primes(n):
        return list(islice(oneill(), n))
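    A quick check of the above (the output shown assumes the definitions in this comment):

    print(first_n_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]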
    
    
  2. CyberSpace17 said

    My C++ solution
    http://ideone.com/CWsQx

  3. CyberSpace17 said

    http://ideone.com/B0Bpy
    my solution again with an amendment to line 18

  4. David said

    Factor version, uses Priority Queue library and lazy lists.

    USING: kernel math heaps sequences arrays lists lists.lazy locals
           io formatting ;
    IN: oneil-sieve
    
    : o'neil-init  ( -- heap )
        6 9 <min-heap> [ heap-push ] keep ;
    
    : heap-top  ( pq -- n )
        heap-peek nip ;
    
    : isprime?  ( pq n -- ? )
        [ heap-top ] dip = not ;
    
    : push-prime  ( pq n -- )
        [ 2 * ] [ sq ] bi  rot heap-push ;
    
    :: advance ( pq n -- )
        [ pq heap-top n = ]
            [ pq heap-pop  over +  pq heap-push ]
        while ;
    
    : o'neil-step ( pair -- pair )
        first2  2 +
           [ 2dup isprime? not ]
              [ [ advance ] 2keep 2 + ] 
           while
           [ push-prime ] 2keep 2array ;
    
    : o'neil-sieve  ( -- lazy-list )
        o'neil-init 3 2array   [ o'neil-step ]  lfrom-by 
        [ second ] lazy-map ;
    
    : all-primes ( -- lazy-list )
        o'neil-sieve  2 swons ;
    
    CONSTANT: items/line 10
    
    : nl-cond ( counter list -- counter list )
        [ 1 + ] dip  over items/line =  [ [ drop 0 ] dip nl ] when ;
    
    : .primes  ( lazy-list -- )
        0 swap
        [ dup nil? ]
            [ unswons "%5d " printf  nl-cond ]
        until drop
        0 = not [ nl ] when ;
    

    Test:

    ( scratchpad ) all-primes [ 512 < ] lwhile .primes
        2     3     5     7    11    13    17    19    23    29 
       31    37    41    43    47    53    59    61    67    71 
       73    79    83    89    97   101   103   107   109   113 
      127   131   137   139   149   151   157   163   167   173 
      179   181   191   193   197   199   211   223   227   229 
      233   239   241   251   257   263   269   271   277   281 
      283   293   307   311   313   317   331   337   347   349 
      353   359   367   373   379   383   389   397   401   409 
      419   421   431   433   439   443   449   457   461   463 
      467   479   487   491   499   503   509 
    
  5. David said

    This method is better suited to Prolog than the standard sieve, because arrays do not exist in that language and generators are pretty much how things are done, so without further ado, in Prolog…

    % O'neil sieve
    
    ?- use_module(library(heaps)).
    
    prime(2).
    prime(N) :- prime_heap(N, _).
    
    prime_heap(3, H) :- list_to_heap([9-6], H).
    prime_heap(N, H) :-
        prime_heap(M, H0), N0 is M + 2,
        next_prime(N0, H0, N, H).
    
    next_prime(N0, H0, N, H) :-
        \+ min_of_heap(H0, N0, _),
        N = N0, Composite is N*N, Skip is N+N,
        add_to_heap(H0, Composite, Skip, H).
    next_prime(N0, H0, N, H) :-
        min_of_heap(H0, N0, _),
        adjust_heap(H0, N0, H1), N1 is N0 + 2,
        next_prime(N1, H1, N, H).
    
    adjust_heap(H0, N, H) :-
        min_of_heap(H0, N, _),
        get_from_heap(H0, N, Skip, H1),
        Composite is N + Skip, add_to_heap(H1, Composite, Skip, H2),
        adjust_heap(H2, N, H).
    adjust_heap(H, N, H) :-
        \+ min_of_heap(H, N, _).
    
    prime_seq(Test, Prime) :-
        prime(P),
        (call(Test, P) -> Prime = P ; !, false).
    
    upto_1000(X) :- X < 1000.
    
       ?- setof(P, prime_seq(upto_1000, P), S), length(S, L).
    L = 168,
    S = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,
    157,163,167,173,179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,
    331,337,347,349,353,359,367,373,379,383,389,397,401,409,419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,
    509,521,523,541,547,557,563,569,571,577,587,593,599,601,607,613,617,619,631,641,643,647,653,659,661,673,677,683,691,701,
    709,719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853,857,859,863,877,881,883,887,907,911,
    919,929,937,941,947,953,967,971,977,983,991,997]
    
  6. Willy Good said

    Several comments or improvements as follows:

    1. O’Neill made use of Wheel Factorization, in her sample case a 2/3/5/7 wheel with 48 gap-value entries, but acknowledged that bigger wheels could be used, although she underestimated their value. For the primes up to 100 million, a 2/3/5/7/11/13/17/19 wheel reduces execution time to about 58% of the time taken with her wheel.

    2. O’Neill didn’t note this in the referenced paper, but in later work she realized that the work can be reduced by postponing the addition of a prime’s composite series to the map/PriorityQueue/HashTable until the range has increased to the square of that base prime. This makes the versions in her paper using a (Binary Tree) Map or Priority Queue faster, since they add a (log n) term to the performance, which then becomes the log of the square root of n, i.e. a constant factor of one half of log n. It doesn’t help a HashTable implementation as much, given its amortized O(1) performance, but it greatly reduces the memory requirements for both, from O(n / log n) to O(sqrt n / log n).

    3. The Priority Queue as used is fine for smaller prime ranges, but it adds this log n to the performance, making it O(n log n log log n) instead of O(n log log n); use of a HashTable with amortized O(1) performance restores the original performance at the cost of the constant-factor average overhead of hashing. Even if this constant factor were 10, the HashTable would be better for sieve ranges over about 500 million (the factor may be much less than that, depending on the implementations). Most languages offer a standard library HashTable/Dictionary, but not Scheme R5RS, although that is corrected in R6RS; Python and JavaScript make it easy because every object is a HashTable (i.e. built into the languages). A small dict-based sketch of points 2 and 3, without the wheel, appears after this list.

    4. Incremental sieves based on these are always much slower than incremental sieves based on page segmented arrays due to these constant factor overheads in processing the Map/Priority Queue/HashTable, as sieving arrays consists of nothing but very tight inner culling loops.
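    To make points 2 and 3 concrete, here is a small dict-based sketch of an incremental sieve with the postponement, without any wheel; the names are illustrative only, and this simplified sketch is separate from my full wheeled version below:

    def postponed_primes():
        # One dict entry per sifting prime: next composite -> 2*p for that prime p.
        yield 2
        composites = {}
        baseprimes = postponed_primes()   # separate, shallow supply of base primes
        next(baseprimes)                  # skip 2
        p = next(baseprimes)              # 3
        q = p * p                         # postpone sifting by p until p*p is reached
        n = 3
        while True:
            if n in composites:
                skip = composites.pop(n)
                m = n + skip
                while m in composites:    # keep one entry per sifting prime
                    m += skip
                composites[m] = skip
            elif n < q:
                yield n                   # unmarked and below the next square: prime
            else:                         # n == q: start the sifting sequence for p
                composites[q + 2 * p] = 2 * p
                p = next(baseprimes)
                q = p * p
            n += 2                        # odd candidates only

    The dict never holds more than one entry per base prime below the square root of the current candidate, which is the memory reduction described in point 2.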

    My Python version with a huge wheel factorization, parameterized for easy experimentation (wheels larger than this aren’t recommended due to excessive memory use, but one can try smaller wheels to find the relative improvements):

    FRSTBP = 23; WHLPRMS = [ 2, 3, 5, 7, 11, 13, 17, 19 ]               # base wheel primes
    def makeGAPS():
        whlsz = reduce(lambda o, x: o * x, WHLPRMS, 1) # length of the repeating wheel
        whlhts = reduce(lambda o, x: o * (x - 1), WHLPRMS, 1) # number of candidates in wheel
        cndptrn = [True] * (whlsz + 1) # wheel size plus one extra for o/f
        for p in WHLPRMS:
            strt = p * p - FRSTBP
            while strt < 0: strt += p
            cndptrn[strt::p] = [False] * ((len(cndptrn) - strt - 1) // p + 1) # cull for p
        def getnxtcndndx(i):
            j = i + 1
            while not cndptrn[j]: j += 1
            return j
        def getgaps():
            i = 0;
            for h in xrange(whlhts):
                j = getnxtcndndx(i)
                yield j - i; i = j
        return bytearray(getgaps())
    GAPS = makeGAPS()
    
    def primes():
        def wheel_prime_pairs():
            yield (FRSTBP, 0)                           # avoid initialization race condition
            bps = wheel_prime_pairs()              # additional primes supply, recursion required but not deep
            p, pi = next(bps); q = p * p                # advance to get 23 squared = 529 as next square to put
            nxtCmpsts = {}; n = FRSTBP + GAPS[0]; ni = 0 #  into sieve dict; init cndidate, wheel ndx
            if len(GAPS) > 1: ni = 1
            while True:                                 # Python does not recurse well, but deep recursion is not required here
                if n not in nxtCmpsts:              # is not a multiple of previously recorded primes
                    if n < q: yield (n, ni)             # n is prime with wheel modulo index
                    else:                                    # n == p * p: put next cull position on wheel
                        nxt = q + p * GAPS[pi]; pi += 1 # advance wheel index
                        if pi >= len(GAPS): pi = 0
                        while nxt in nxtCmpsts:         # ensure each entry is unique by wheel
                            nxt += p * GAPS[pi]; pi += 1 # advance wheel index
                            if pi >= len(GAPS): pi = 0
                        nxtCmpsts[nxt] = (p, pi)        # next non-marked multiple of a prime
                        p, pi = next(bps); q = p * p    # advance next prime and prime square to compare
                else:
                    pc, pci = nxtCmpsts.pop(n)          # move current cull position up the wheel
                    nxt = n + pc * GAPS[pci]; pci += 1  # advance wheel index
                    if pci >= len(GAPS): pci = 0
                    while nxt in nxtCmpsts:             # ensure each entry is unique by wheel
                        nxt += pc * GAPS[pci]; pci += 1 # advance wheel index
                        if pci >= len(GAPS): pci = 0
                    nxtCmpsts[nxt] = (pc, pci)         # next non-marked multiple of a prime
                n += GAPS[ni]; ni += 1                  # advance on the wheel
                if ni >= len(GAPS): ni = 0
        for p in WHLPRMS: yield p                       # root primes
        for p, pi in wheel_prime_pairs(): yield p   # strip out indexes
    
    def first_n_primes(n):
        count = 0; gen = primes()
        while True:        
            if count >= n: break
            count += 1; yield gen.next()
    
    numPrimes = 10000000 # testing
    from time import clock
    elpsd = -clock()
    lastPrime = 0
    for p in first_n_primes(numPrimes): lastPrime = p
    elpsd += clock()
    
    print("The " + str(numPrimes) + "th prime is " + str(lastPrime) + " in " + str(int(elpsd * 1000)) + " milliseconds.")
    

    The above code generates the following output (the couple of seconds used to generate the huge wheel isn’t included in the time):

    The 10000000th prime is 179424673 in 42533 milliseconds.
    

    It performs very close to its theoretical performance over a large range of primes.

  7. Willy Good said

    The following code is an improvement to the above code in that it reduces the overhead time to generate the GAPS byte array (it replaces “makeGAPS()”) by very nearly a factor of four, from well over two seconds to well under one second on my machine; it won’t affect the timing results, as the overhead time is excluded from them:

    FRSTBP = 23; WHLPRMS = [ 2, 3, 5, 7, 11, 13, 17, 19 ]               # base wheel primes
    def makeGAPS():
        whlsz = reduce(lambda o, x: o * x, WHLPRMS, 1) # length of the repeating wheel
        whlhts = reduce(lambda o, x: o * (x - 1), WHLPRMS, 1) # number of candidates in wheel
        cndptrn = [True] * (whlsz + 1) # wheel size plus one extra for o/f
        for p in WHLPRMS:
            strt = p * p - FRSTBP
            while strt < 0: strt += p
            cndptrn[strt::p] = [False] * ((len(cndptrn) - strt - 1) // p + 1) # cull for p
        def getnxtcndndx(i):
            j = i + 1
            while not cndptrn[j]: j += 1
            return j
        def getgaps():
            i = 0;
            for h in xrange(whlhts):
                j = getnxtcndndx(i)
                yield j - i; i = j
        return bytearray(getgaps())
    GAPS = makeGAPS()
    
  8. Willy Good said

    Sorry, I selected the wrong version above, and submitted the original version:

    The following code is an improvement to the above code in that it reduces the overhead time to generate the GAPS byte array (it replaces “makeGAPS()”) by very nearly a factor of four, from well over two seconds to well under one second on my machine; it won’t affect the timing results, as the overhead time is excluded from them:

    FRSTBP = 23; WHLPRMS = [ 2, 3, 5, 7, 11, 13, 17, 19 ]               # base wheel primes
    def makeGAPS():
        gaps = bytearray([1]) # initial gap set
        for p in WHLPRMS:
            def whlgapsitr():
                frstprm = p + gaps[0]
                n = frstprm; ni = 1; ag = 0; nxtcmpst = p * p; ci = 0
                for i in xrange(len(gaps) * p):
                    if ni >= len(gaps): ni = 0
                    g = gaps[ni]; ni += 1
                    n += g; ag += g
                    if n < nxtcmpst: yield ag; ag = 0
                    else: nxtcmpst += p * gaps[ci]; ci += 1
            gaps = bytearray(whlgapsitr())
        return gaps
    GAPS = makeGAPS()
    
  9. Willy Good said

    Since this website likes Scheme code, here is a version of Richard Bird’s sieve from the epilogue of the same Melissa E. O’Neill article. Although it is quite a bit slower than more imperative algorithms, or even incremental sieves using a Priority Queue or HashTable, it is acceptably fast for smaller ranges. It can be sped up by using tree folding (https://www.haskell.org/haskellwiki/Prime_numbers#Tree_merging) and/or it gains a constant factor in speed from wheel factorization. The code is as follows:

    (define (birdPrimes)
      (define (mltpls p)
        (define pm2 (* p 2))
        (let nxtmltpl ((cmpst (* p p)))
          (cons cmpst (lambda () (nxtmltpl (+ cmpst pm2))))))
      (define (allmltpls ps)
        (cons (mltpls (car ps)) (lambda () (allmltpls ((cdr ps))))))
      (define (merge xs ys)
        (let ((x (car xs)) (xt (cdr xs)) (y (car ys)) (yt (cdr ys)))
          (cond ((< x y) (cons x (lambda () (merge (xt) ys))))
                ((> x y) (cons y (lambda () (merge xs (yt)))))
                (else (cons x (lambda () (merge (xt) (yt))))))))
      (define (mrgmltpls mltplss)
        (cons (car (car mltplss))
              (lambda () (merge ((cdr (car mltplss)))
                                (mrgmltpls ((cdr mltplss)))))))
      (define (minuscmpsts n cmps)
        (if (< n (car cmps))
          (cons n (lambda () (minuscmpsts (+ n 2) cmps)))
          (minuscmpsts (+ n 2) ((cdr cmps)))))
      (letrec ((cmpsts (lambda () (mrgmltpls (allmltpls (oddprms)))))
               (oddprms (lambda () ;; prevent initialization race by presetting 3...
                          (cons 3 (lambda () (minuscmpsts 5 (cmpsts)))))))
        (cons 2 (lambda () (oddprms)))))
    
    (define (nthPrime n)
      (let nxtprm ((cnt 0) (ps (birdPrimes)))
        (if (< cnt n) (nxtprm (+ cnt 1) ((cdr ps))) (car ps))))
    (nthPrime 1000000)
    
  10. Willy Good said

    The above code may need a description of how it works other than stating that “it follows the Richard Bird algorithm from the article”:

    1. As Scheme doesn’t have a built-in Stream, the above code implements Co-Inductive Streams (CIS’s) (http://c2.com/cgi/wiki?CoinductiveDataType) using ‘cons’, ‘car’ and ‘cdr’ from lists, with the difference being that the second element of each pair doesn’t contain a link to the next element of the list but rather a zero-argument lambda (a “thunk”) whose application computes the next pair. One could similarly use ‘delay’ and ‘force’ to produce a lazy list or lazy stream, but this algorithm doesn’t need the extra overhead implied by the memoization of ‘force’ (remembering the value so it isn’t computed again on repeated calls), since the algorithm only passes through each stream once. The difference in use, once constructed, is that to obtain the next element one calls ((cdr element)) (double brackets) rather than (cdr element), so that the thunk is applied. As used, these are infinite CIS’s, as the algorithm produces an “infinite” stream of primes (or at least until the computations start to overflow – quite quickly for 32-bit versions of Scheme; not for a very long time for 64-bit versions). A tiny illustration of the thunk mechanism appears after this list.

    2. There are several streams used as follows: an allmltpls stream of streams of all the multiples for each base prime, which is merged into a single stream of composites with repeat composites eliminated (important for the “minus” step), the final output stream of primes, and another separate stream of base primes starting at 3. It is this last requirement that means that ‘cmpsts’ and ‘oddprms’ must be defined recursively (‘letrec’) as they call each other since ‘cmpsts’ uses a separate source of ‘oddprms’ and ‘oddprms’ must use ‘cmpsts’ in the ‘minuscmpsts’ operation.

    3. No implementation of a fold right operation is required as that has been wrapped into the ‘mrgmltpls’ operation; no implementation of a mapping operation is required to produce all the multiples of each base prime as that has been combined into the ‘mltpls’ production as a constant addition of twice the prime value series (a ‘scan right’ operation in effect).

    4. Note that the primes generator does not produce a list, although the output CIS could be passed to a function that turns CIS’s into conventional Scheme lists; for most purposes that is not necessary, as here, in showing the last element(s) of the stream up to a limit – direct processing of the stream is all that is needed.
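    As an aside, the thunk mechanism of point 1 can be mimicked in a few lines of Python (illustrative names only); the stream is advanced by applying the stored lambda:

    def counts_from(n):
        # a CIS-style pair: (value, thunk that builds the next pair)
        return (n, lambda: counts_from(n + 2))

    pair = counts_from(3)
    for _ in range(5):
        value, thunk = pair
        print(value)       # prints 3 5 7 9 11
        pair = thunk()     # apply the thunk to advance, analogous to ((cdr element)) above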

    As per the O’Neill article, this algorithm is somewhat slow in asymptotic computational complexity at O(n (sqrt n) (log log n) / (log n)^2), compared to straight imperative array-based sieves at O(n log log n) or even the Map/Priority Queue based versions at O(n (log n) (log log n)), but it is quite interesting, not hard to implement (no external implementation of a Priority Queue or HashTable is necessary), and will keep up with those implementations up to ranges between about 10 million and 100 million. Tree folding decreases the run time for a given range, as it reduces the depth of the merging to about half, with most of the operations at the left (shorter) end, so as to extend the useful range of these types of sieve. Adding wheel factorization reduces the execution time by a constant factor of about 4, but of course that can be applied to all of the different sorts of algorithms.

  11. programmingpraxis said

    @WillyGood: If you want to use streams, you might want to look at SRFI-41 on the Essays page. I am the author of SRFI-41.

  12. Willy Good said

    A few further notes:

    1. I’ve been doing some work comparing the performance of various Scheme implementations and find Gambit-C or Chicken to be about the fastest, as well as being fairly fully featured and easy to obtain (I cannot afford or access the actual Chez Scheme compiler – only the free Petit Chez Scheme byte-code-interpreter-based version – so have no way of comparing its performance), and found some interesting results: optimized Gambit with “(not safe)” and “(standard-bindings)” declarations “blows Chicken away” for highly recursive continuation-style algorithms such as this, at about eight times faster than maximally optimized Chicken; however, Chicken is over twice as fast for very tight tail-call-optimized loops such as those used in the regular array-based sieves – this is likely due to the “Cheney-based” tail call optimization used by Chicken. Thus Gambit is a better choice for this algorithm, and Chicken is a better choice for algorithms such as sieves with very tight tail-recursive loops, where Chicken’s speed is very close to optimized imperative ‘C’ code; it would be interesting to find out how the full Chez Scheme compiler performs on these.

    2. I have learned that internal (define …)’s are actually equivalent to the (letrec …) form used, as internal ‘define’s are mutually recursive (within limits, as long as they don’t modify or depend on defined variables, just as for ‘letrec’s), but the internal (define …) syntax is likely easier to read and understand; I have thus changed the code to:

    (define (birdPrimes)
      (define (mltpls p)
        (define pm2 (* p 2))
        (let nxtmltpl ((cmpst (* p p)))
          (cons cmpst (lambda () (nxtmltpl (+ cmpst pm2))))))
      (define (allmltpls ps)
        (cons (mltpls (car ps)) (lambda () (allmltpls ((cdr ps))))))
      (define (merge xs ys)
        (let ((x (car xs)) (xt (cdr xs)) (y (car ys)) (yt (cdr ys)))
          (cond ((< x y) (cons x (lambda () (merge (xt) ys))))
                ((> x y) (cons y (lambda () (merge xs (yt)))))
                (else (cons x (lambda () (merge (xt) (yt))))))))
      (define (mrgmltpls mltplss)
        (cons (car (car mltplss))
              (lambda () (merge ((cdr (car mltplss)))
                                (mrgmltpls ((cdr mltplss)))))))
      (define (minuscmpsts n cmps)
        (if (< n (car cmps))
          (cons n (lambda () (minuscmpsts (+ n 2) cmps)))
          (minuscmpsts (+ n 2) ((cdr cmps)))))
      (define (cmpsts) (mrgmltpls (allmltpls (oddprms)))) ;; internal define's are mutually recursive
      (define (oddprms) (cons 3 (lambda () (minuscmpsts 5 (cmpsts)))))  
      (cons 2 (lambda () (oddprms))))
     
    (define (nthPrime n)
      (let nxtprm ((cnt 0) (ps (birdPrimes)))
        (if (< cnt n) (nxtprm (+ cnt 1) ((cdr ps))) (car ps))))
    (nthPrime 1000000)
    
  13. Willy Good said

    Here is the tree folding/merging version in Scheme, showing how easy it is to add the folding with the “pairs” function merged into the “mrgmltpls” function:

    (define (treemergePrimes)
      (define (mltpls p)
        (define pm2 (fx* p 2))
        (let nxtmltpl ((cmpst (fx* p p)))
          (cons cmpst (lambda () (nxtmltpl (fx+ cmpst pm2))))))
      (define (allmltpls ps)
        (cons (mltpls (car ps)) (lambda () (allmltpls ((cdr ps))))))
      (define (merge xs ys)
        (let ((x (car xs)) (xt (cdr xs)) (y (car ys)) (yt (cdr ys)))
          (cond ((fx< x y) (cons x (lambda () (merge (xt) ys))))
                ((fx> x y) (cons y (lambda () (merge xs (yt)))))
                (else (cons x (lambda () (merge (xt) (yt))))))))
      (define (pairs mltplss)
        (let ((tl ((cdr mltplss))))
          (cons (merge (car mltplss) (car tl))
                (lambda () (pairs ((cdr tl)))))))
      (define (mrgmltpls mltplss)
        (cons (car (car mltplss))
              (lambda () (merge ((cdr (car mltplss)))
                                (mrgmltpls (pairs ((cdr mltplss))))))))
      (define (minusstrtat n cmps)
        (if (fx< n (car cmps))
          (cons n (lambda () (minusstrtat (fx+ n 2) cmps)))
          (minusstrtat (fx+ n 2) ((cdr cmps)))))
      (define (cmpsts) (mrgmltpls (allmltpls (oddprms)))) ;; internal define's are mutually recursive
      (define (oddprms) (cons 3 (lambda () (minusstrtat 5 (cmpsts)))))  
      (cons 2 (lambda () (oddprms))))
    

    The above code is about three times faster than the Richard Bird version for the same practical range (empirical performance).

  14. Willy Good said

    @programmingpraxis, your suggestion to use SRFI-41 is all right, but I was already aware of it and decided not to use it, as mentioned in my post, since this algorithm doesn’t require the extra overhead of the memoization implied in using ‘delay’ and ‘force’ to build the “thunk” in the cdr of each pair; also, this algorithm didn’t require any of the “higher-order functions” of that SRFI. This is similar to implementing these same algorithms in F#, which doesn’t have a built-in “lazy list” (one can import the “PowerPack” for that if it is really needed), but for the same reasons I elected not to do that and used Co-Inductive Streams, just as here.

  15. Melissa O’Neill may have proposed the use of a priority queue in 2009, but I first saw the idea as an exercise in Donald Knuth’s The Art of Computer Programming, Vol. 3, Exercise 15 of Section 5.2.3. The description of the method is in the answers to exercises, on pages 641-642 of the 1998 edition.
