For comparison, here is the traditional sieve of Eratosthenes (odd numbers only):
n = 10000000
sieve = [True] * n
for i in range(3, int(n**0.5) + 1, 2):
    if sieve[i]:
        sieve[i*i::2*i] = [False] * ((n - i*i - 1) // (2*i) + 1)
sieve = [2] + [i for i in range(3, n, 2) if sieve[i]]
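As a sanity check, the odd-only sieve above can be wrapped in a function (the names `classic_sieve` and `is_prime` are mine, added here for illustration) and compared against naive trial division on a small bound:

```python
def classic_sieve(n):
    # Odd-only sieve of Eratosthenes, exactly as in the snippet above.
    sieve = [True] * n
    for i in range(3, int(n**0.5) + 1, 2):
        if sieve[i]:
            sieve[i*i::2*i] = [False] * ((n - i*i - 1) // (2*i) + 1)
    return [2] + [i for i in range(3, n, 2) if sieve[i]]

def is_prime(k):
    # Naive trial division, used only for cross-checking.
    return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))

primes_30 = classic_sieve(30)  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```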
What is the time complexity of the following enhanced version?
n = 10000000
sieve5m6 = [True] * (n//6 + 1)   # candidates of the form 6i-1
sieve1m6 = [True] * (n//6 + 1)   # candidates of the form 6i+1
for i in range(1, int((n**0.5 + 1) / 6) + 1):
    if sieve5m6[i]:
        sieve5m6[6*i*i::6*i-1] = [False] * ((n//6 - 6*i*i) // (6*i - 1) + 1)
        sieve1m6[6*i*i - 2*i::6*i-1] = [False] * ((n//6 - 6*i*i + 2*i) // (6*i - 1) + 1)
    if sieve1m6[i]:
        sieve5m6[6*i*i::6*i+1] = [False] * ((n//6 - 6*i*i) // (6*i + 1) + 1)
        sieve1m6[6*i*i + 2*i::6*i+1] = [False] * ((n//6 - 6*i*i - 2*i) // (6*i + 1) + 1)
sieve = [2, 3]
for i in range(1, n//6 + 1):
    if sieve5m6[i]:
        sieve.append(6*i - 1)
    if sieve1m6[i]:
        sieve.append(6*i + 1)
A better version using NumPy array functions, useful for n > 10^9:
import numpy as np

def sieve(n):
    # Wheel-30 sieve: only residues coprime to 30 are tracked,
    # stored as offsets baseP relative to multiples of baseW.
    baseW = 30
    baseP = np.array([-23, -19, -17, -13, -11, -7, -1, 1])
    C = np.ones((len(baseP), len(baseP)), dtype=int)
    C1 = np.zeros((len(baseP), len(baseP)), dtype=int)
    # C1[i1, i] maps each residue class to the class its products fall into.
    C = np.multiply(np.dot(np.diag(baseP), C), np.dot(C, np.diag(baseP)))
    C = C % baseW
    C[C > 1] = C[C > 1] - baseW
    for i in range(len(baseP)):
        C1[i, :] = baseP[np.argwhere(C == baseP[i])[:, 1]]
    primeB = np.ones((len(baseP), n//baseW + 1), dtype=bool)
    for j in range(1, int((n**0.5 - baseP[0]) / baseW) + 1):
        for i in range(len(baseP)):
            if primeB[i, j]:
                for i1 in range(len(baseP)):
                    j1 = (1 - i1 // (len(baseP) - 1) + baseW*j*j
                          + (baseP[i] + C1[i1, i]) * j
                          + baseP[i] * C1[i1, i] // baseW)
                    primeB[i1, j1::(baseW*j + baseP[i])] = False
    return np.append([2, 3, 5],
                     baseP[np.nonzero(primeB.T)[1][len(baseP):]]
                     + baseW * np.nonzero(primeB.T)[0][len(baseP):])
The complexity is the same: both are O(n log log n). The second version may run faster because it skips more composites, but its dependence on n is of the same order of magnitude.
In other words, O(f2) = K * O(f1) = O(f1), where K is a constant independent of n.
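One rough way to see this empirically (a sketch I added, not part of the original code): count the composite-marking writes the odd-only sieve performs. The ratio writes/n grows extremely slowly with n (like log log n), which is exactly the n log log n behaviour; a wheel only lowers the ratio by a constant factor.

```python
def writes_odd_sieve(n):
    # Same odd-only sieve as above, instrumented to count cell writes.
    sieve = [True] * n
    writes = 0
    for i in range(3, int(n**0.5) + 1, 2):
        if sieve[i]:
            count = (n - i*i - 1) // (2*i) + 1
            writes += count
            sieve[i*i::2*i] = [False] * count
    return writes

for n in (10**4, 10**5, 10**6):
    # writes/n creeps up very slowly as n grows by orders of magnitude
    print(n, round(writes_odd_sieve(n) / n, 3))
```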
If you want something that has a (slightly) better time complexity than the sieve of Eratosthenes, you may want to look at the sieve of Atkin, although it may not have better overall performance due to the higher cost of each iteration.
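For reference, here is a compact, unoptimized sieve of Atkin sketch in pure Python (the function name is mine). The theoretical bound of the optimized algorithm is O(n / log log n), but this simple version illustrates the higher per-iteration cost: each candidate requires toggling via quadratic forms plus a square-free pass.

```python
def sieve_of_atkin(limit):
    # Unoptimized sieve of Atkin: toggle candidates via the three
    # quadratic forms, then eliminate squarefull numbers.
    is_prime = [False] * (limit + 1)
    for x in range(1, int(limit**0.5) + 1):
        for y in range(1, int(limit**0.5) + 1):
            n = 4*x*x + y*y
            if n <= limit and n % 12 in (1, 5):
                is_prime[n] = not is_prime[n]
            n = 3*x*x + y*y
            if n <= limit and n % 12 == 7:
                is_prime[n] = not is_prime[n]
            n = 3*x*x - y*y
            if x > y and n <= limit and n % 12 == 11:
                is_prime[n] = not is_prime[n]
    # Remove multiples of squares of the primes found so far.
    for k in range(5, int(limit**0.5) + 1):
        if is_prime[k]:
            for m in range(k*k, limit + 1, k*k):
                is_prime[m] = False
    return [2, 3] + [k for k in range(5, limit + 1) if is_prime[k]]

primes_30 = sieve_of_atkin(30)  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```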