Tags: python, parallel-processing, stream, calculation, parallels

Is it possible to solve this problem in parallel for several parameter values in Python?


Below is my code. In this case e0 = 15, but I would like to solve the problem for a set of e0 values (e0 = 7, 10, 15, 20, 28). I have a multi-core processor, and I would like to distribute the calculation for each value of e0 to a separate core.

How can I run these calculations in parallel in Python?

import sympy as sp
import scipy as sc
# explicit submodule imports: "import scipy" alone does not guarantee
# that sc.special, sc.integrate and sc.linalg are accessible
import scipy.special
import scipy.integrate
import scipy.linalg
import numpy as np

e0=15
einf=15

# radial basis function Psi_n(r), evaluated numerically via the
# confluent hypergeometric function 1F1
def Psi(r,n):
    return 2*np.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2

# the same basis function built symbolically with SymPy
def PsiSymb(n):
    r=sp.symbols('r')
    y1=2*sp.exp(-r/n)*np.sqrt(sc.special.factorial(n)/sc.special.factorial(-1+n))/n**2
    y2 = sp.simplify(sp.functions.special.hyper.hyper([1-n], [2], 2*r/n))
    y=y1*y2
    return y

# radial Laplacian of Psi_n: differentiate symbolically, then lambdify
def LaplacianPsi(n):
    r = sp.symbols('r')
    ydiff = 2/r*PsiSymb(n).diff(r)+PsiSymb(n).diff(r,2)
    ydiffnum = sp.lambdify(r, ydiff, "numpy")
    return ydiffnum

# kinetic-energy matrix element
def k(n1,n2):
    yint=sc.integrate.quad(lambda r: -0.5*Psi(r,n2)*LaplacianPsi(n1)(r)*r**2,0,np.inf)
    return yint[0]

# potential-energy matrix element
def p(n1,n2):
    potC=sc.integrate.quad(lambda r: Psi(r,n2)*(-1/r)*Psi(r,n1)*(r**2),0,np.inf)
    potB1=sc.integrate.quad(lambda r: Psi(r,n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.23))*Psi(r,n1)*(r**2),0,np.inf)
    potB2=sc.integrate.quad(lambda r: Psi(r,n2)*(1/einf-1/e0)*((einf/e0)**(3/5))*(-e0/(2*r))*(np.exp(-r*2.4))*Psi(r,n1)*(r**2),0,np.inf)
    pot=potC[0]+potB1[0]+potB2[0]
    return pot

# total energy matrix element
def en(n1,n2):
    return k(n1,n2)+p(n1,n2)

nmax=3

# assemble the nmax x nmax energy matrix
EnM = [[0]*nmax for i in range(nmax)]

for n1 in range(nmax):
    for n2 in range(nmax):
        EnM[n2][n1]=en(n1+1,n2+1)

# eigenvalues of the symmetric energy matrix; keep the smallest
EnEig=sc.linalg.eigvalsh(EnM)

EnB=min(EnEig)
print(EnB)

Solution

  • It is not necessary to use multiple cores for this computation. The bottleneck is the LaplacianPsi function, which recomputes the same symbolic derivative over and over. You can use memoization to fix this. Here is an example:

    import functools
    
    # cache the lambdified derivative: k() calls LaplacianPsi(n1) inside the
    # integrand, so without the cache the symbolic work is redone at every
    # quadrature point
    @functools.cache
    def LaplacianPsi(n):
        r = sp.symbols('r')
        ydiff = 2/r*PsiSymb(n).diff(r)+PsiSymb(n).diff(r,2)
        ydiffnum = sp.lambdify(r, ydiff, "numpy")
        return ydiffnum
    
    # The rest is the same
    

    The code can be further optimized: sc.special.factorial(n) / sc.special.factorial(-1+n) is simply n, and np.sqrt is inefficient on scalars, so the whole term can be replaced with math.sqrt(n). With these changes the code takes only 0.057 seconds, as opposed to 16.5 seconds for the initial implementation on my machine. The new implementation is about 290 times faster and produces the same result!
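
    For illustration, here is what the simplified Psi could look like (a sketch based on the identities above, not a tested drop-in replacement):

    import math

    def Psi(r, n):
        # factorial(n) / factorial(n-1) == n, so the square-root term
        # reduces to math.sqrt(n); math.sqrt beats np.sqrt on scalars
        return 2*np.exp(-r/n)*math.sqrt(n)*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2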

    Jumping straight to multiple cores would only have wasted resources on a slower result. You can still parallelize the faster implementation over the e0 values, though the additional speed-up may not be significant.
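
    If you do want one process per e0 value, here is a minimal sketch using multiprocessing. It assumes the computation is refactored so that e0 is passed down explicitly (via a hypothetical solve(e0) wrapper, with en and p taking e0 as an argument instead of reading the global constant):

    from multiprocessing import Pool

    def solve(e0):
        # hypothetical wrapper: en(n1, n2, e0) stands for the refactored
        # en()/p() that accept e0 explicitly
        EnM = [[en(n1+1, n2+1, e0) for n1 in range(nmax)] for n2 in range(nmax)]
        return min(sc.linalg.eigvalsh(EnM))

    if __name__ == "__main__":
        e0_values = [7, 10, 15, 20, 28]
        with Pool(processes=len(e0_values)) as pool:  # one worker per e0
            ground_states = pool.map(solve, e0_values)
        for e0, EnB in zip(e0_values, ground_states):
            print(e0, EnB)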