## Pollard’s P-1 Factorization Algorithm

### July 21, 2009

Fermat’s little theorem states that for any prime number p, and any other number a not divisible by p, a^(p−1) ≡ 1 (mod p). Rearranging terms, we have a^(p−1) − 1 ≡ 0 (mod p), which means that p divides the left-hand side of the equation, thus p is a factor of the left-hand side.
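The theorem is easy to check numerically with modular exponentiation; a quick sketch (the sampled primes are arbitrary):

```python
# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) when p is prime
# and a is not divisible by p.
for p in (5, 7, 13, 97):
    for a in range(1, p):      # every a in 1..p-1 is coprime to the prime p
        assert pow(a, p - 1, p) == 1
```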

John Pollard used this idea to develop a factoring method in 1974. His idea is to choose a very large number k and see if a^k − 1 has any factors in common with n, thus giving a factor of n: if k is a multiple of p − 1 for some prime p dividing n, then p divides a^k − 1, so gcd(a^k − 1, n) reveals p. A systematic way to test many very large numbers is by taking large factorials, which have many small factors within them. Thus, Pollard’s factorization algorithm is this:

1) Choose b, which is known as the *smoothness bound*, and calculate k as the product of the primes to b: k = ∏ p, the product taken over the primes p ≤ b.

2) Choose a random integer a such that 1 < a < n.

3) Calculate d = gcd(a, n). If d is strictly greater than 1, then it is a non-trivial factor of n. Otherwise, continue to the next step.

4) Calculate d = gcd(a^k − 1 (mod n), n). If 1 < d < n, then d is a non-trivial factor of n. If d = 1, go to Step 1 and choose a larger b. If d = n, go back to Step 2 and choose another a.

In Step 4, you can quit with failure instead of returning to Step 1 if b becomes too large, where “too large” is probably somewhere around √n; in that case, you will need to continue with some other factoring algorithm.
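The four steps above can be sketched in Python. This is only an illustrative single-stage version (the function and helper names are my own, not the posted solution), and it follows the common refinement of raising each prime to its largest power not exceeding b:

```python
import math
import random

def primes_to(b):
    """Simple sieve of Eratosthenes: all primes p <= b."""
    sieve = [True] * (b + 1)
    ps = []
    for i in range(2, b + 1):
        if sieve[i]:
            ps.append(i)
            for j in range(i * i, b + 1, i):
                sieve[j] = False
    return ps

def pollard_pm1(n, b, rng=random):
    """One attempt at Pollard's p-1 with smoothness bound b.

    Returns a non-trivial factor of n, or None (meaning: enlarge b
    if the gcd was 1, or retry with a fresh a if the gcd was n)."""
    a = rng.randrange(2, n - 1)                # Step 2: random base
    d = math.gcd(a, n)                         # Step 3: lucky gcd?
    if d > 1:
        return d
    k = 1                                      # Step 1: build the exponent
    for p in primes_to(b):
        pk = p
        while pk * p <= b:                     # largest power of p <= b
            pk *= p
        k *= pk
    d = math.gcd(pow(a, k, n) - 1, n)          # Step 4
    if 1 < d < n:
        return d
    return None
```

For example, with n = 299 = 13 × 23 and b = 4 the exponent is k = 4 · 3 = 12, and since 13 − 1 = 12 divides k, the gcd in Step 4 almost always reveals the factor 13.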

Your task is to write a function that factors integers by Pollard’s method. What are the factors of 2^98 − 1? When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.

I wrote a Prolog implementation, which works now and then :), but not always; most of the time it says ‘out of local stack’.

So I saved the result; the factors are [4432676798593, 4363953127297, 127, 43, 3].
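For what it’s worth, that factor list checks out: the five factors multiply together to exactly 2^98 − 1, which a one-liner confirms:

```python
factors = [4432676798593, 4363953127297, 127, 43, 3]
product = 1
for f in factors:
    product *= f
# the product reconstructs the original number
assert product == 2**98 - 1
```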

I would love to see the Prolog code for the above problem.

a must be coprime to p, not any other number.


`all_factors` repeatedly applies a factorizing method `factor` to `n`. Besides `modexp_big_int` and `random_big_int`, I need to compute lcms, in this case optimized by knowing that the first factor is always small. Finally, Pollard’s p-1.
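I can’t reproduce the OCaml here, but the lcm optimization described above might look like this in Python (the names are my own): when folding lcm over 2..b, the first argument is always a small integer, so the gcd inside lcm(small, big) = big · (small / gcd(small, big)) involves one small operand and stays cheap even when the accumulator is huge.

```python
import math

def lcm_small(small, big):
    """lcm where the first argument is known to be small; the gcd
    then has one small operand and is cheap even for a huge `big`."""
    return big * (small // math.gcd(small, big))

def exponent(b):
    """lcm(1, 2, ..., b), built up incrementally."""
    k = 1
    for i in range(2, b + 1):
        k = lcm_small(i, k)
    return k
```

For example, exponent(10) gives lcm(1..10) = 2520.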

This method seems to be much, much slower than Pollard’s Rho.

It shouldn’t be much, much slower than Pollard’s rho. The two methods are complementary. The rho method is generally used to find small factors, say from 4 to 12 digits, after trial division has found factors up to 3 digits. The p-1 method is generally used after the rho method has run for a while, in the hope that you get lucky and find a large factor, say 15 to 20 digits or sometimes more.
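The division of labor described above — trial division for the smallest factors, rho after that — might be sketched like this (a simplified illustration of the strategy, not anyone’s posted solution):

```python
import math
import random

def trial_division(n, limit=1000):
    """Strip all factors below `limit`; return (factors, cofactor)."""
    factors = []
    d = 2
    while d < limit and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2        # after 2, try only odd candidates
    return factors, n

def rho(n, rng=random):
    """Pollard's rho with Floyd cycle-finding; n must be an odd composite."""
    while True:
        x = y = rng.randrange(2, n)
        c = rng.randrange(1, n)        # constant in f(x) = x^2 + c
        d = 1
        while d == 1:
            x = (x * x + c) % n        # tortoise: one step
            y = (y * y + c) % n        # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                     # d == n: bad luck, restart with new c
            return d
```

Here trial_division strips the 1-to-3-digit factors, rho then handles the small-to-medium cofactors, and p-1 is tried afterwards in the hope of a lucky large factor.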

It’s been a while since I programmed in the ML family of languages, so I’ll ask some questions instead of making some statements.

1) I see only one bound b. If you look at the more recent p-1 exercise that uses the improved standard continuation, you’ll see a much faster version of the algorithm that uses two bounds b1 and b2.

2) In the `pollard_pm1` function, in the line `let k = …`, do you recompute the lcm at each b?

3) I don’t understand the `List.fold_right` in the line after that. You seem to be taking the product of all the primes less than b, but I’m not sure why.
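On question 1, a naive second stage (not the faster improved standard continuation, which steps between primes with a table of differences) can be bolted on roughly like this. After stage 1 computes the residue x = a^k (mod n) from the first bound b1, stage 2 hopes that p − 1 = s · q with s dividing k and a single prime b1 < q ≤ b2. All names below are my own:

```python
import math

def primes_between(lo, hi):
    """All primes q with lo < q <= hi (simple sieve)."""
    sieve = [True] * (hi + 1)
    ps = []
    for i in range(2, hi + 1):
        if sieve[i]:
            if i > lo:
                ps.append(i)
            for j in range(i * i, hi + 1, i):
                sieve[j] = False
    return ps

def pm1_stage2(n, x, b1, b2):
    """Naive second stage: x is the stage-1 residue a^k (mod n).

    If p - 1 = s * q with s | k and a single prime b1 < q <= b2,
    then x^q ≡ 1 (mod p), so p divides one of the accumulated
    terms x^q - 1, and hence divides the final gcd."""
    prod = 1
    for q in primes_between(b1, b2):
        prod = prod * (pow(x, q, n) - 1) % n
    return math.gcd(prod, n)
```

As a worked example, take n = 1213 × 503. With b1 = 10 the stage-1 exponent is k = 8 · 9 · 5 · 7 = 2520, which misses 1213 because 1213 − 1 = 1212 = 12 · 101 has the outlying prime 101; stage 2 with b2 = 200 picks up q = 101 and returns 1213, while 503 stays hidden because 503 − 1 = 2 · 251 has 251 > b2.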