I found a new primality test, shown below, and used it to check that 1000000007 is prime.
How does its speed compare to existing primality algorithms?
Does it win the award for most "computationally worthless" primality test?
Thanks.
EDIT
I was able to improve the speed using the method described here:
https://math.stackexchange.com/questions/3979118/speedup-primality-test
"So it's enough to go from x=n/2 up to n/2+√n/2. With this improvement, your algorithm will still be somewhat slower than your isPrimeNumber routine--just because calculating a gcd is slower than calculating divisibility. This will be feasible for testing numbers with maybe 15-20 digits, but you would need completely different methods to test something much larger, like the 183-digit number you mention."
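For reference, the trial-division routine that the quoted answer compares against is presumably something like the following (a sketch; the asker's original `isPrimeNumber` routine from that thread is not shown in this post):

```c
#include <stdint.h>

// Plain trial division by odd numbers up to sqrt(n). Each step costs one
// modulo operation, versus a full gcd per step in the lattice test --
// which is why the quoted answer expects it to be faster.
int isPrimeTrialDivision(uint64_t n)
{
    if (n < 2) return 0;
    if (n % 2 == 0) return n == 2;
    for (uint64_t d = 3; d * d <= n; d += 2)
        if (n % d == 0) return 0;
    return 1;
}
```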
// Primality Test
// An n >= 2 is prime iff all lattice points on x+y=n (with 0 < x < n)
// are visible from the origin.
#include <stdio.h>
#include <stdint.h>
#include <math.h>

uint64_t gcd(uint64_t a, uint64_t b)
{
    return (b != 0) ? gcd(b, a % b) : a;
}

int isPrimeNumber(uint64_t n)
{
    if (n == 1) return 0;
    if (n == 2 || n == 3) return 1;
    if (n % 2 == 0) return 0;
    // Start near the line x = y.
    uint64_t x = (n / 2) + 2;
    uint64_t y = n - x;
    uint64_t count = sqrt(n) / 2;
    for (uint64_t i = 0; i < count; ++i) {
        // Check lattice point visibility...
        if (gcd(x, y) != 1) return 0;
        x++; y--;
    }
    return 1;
}

int main(int argc, char* argv[])
{
    uint64_t n = 1000000007;
    if (isPrimeNumber(n) == 1)
    {
        printf("%llu prime.\n", (unsigned long long)n);
    }
    else
    {
        printf("%llu not prime.\n", (unsigned long long)n);
    }
    return 0;
}
When you write any code, you should do basic debugging to verify that your code does what you think it does. Run your code on a few small numbers; print the values of x and y to verify that it does the correct checks.
In addition to this, you should be careful when you mix integer and floating-point variables: implicit conversions, e.g. from a floating-point type to an unsigned integer, can lead to data loss and completely incorrect calculations. Compilers usually warn about this; you should compile with all warnings enabled (-Wall) and pay attention to what the compiler says.
It looks like you should always have x + y = n during your computations - this is your invariant. This can be expressed more directly like this:
// initialization
x = n / 2;
y = n - x;
// it should be evident that your invariant holds here
do {
    ...
    x++; y--;
    // your invariant holds here too, by induction
} while (...);