python, numpy, calculation, mpmath

Faster alternatives to mpmath in Python for computations?


I am doing calculations in Python that involve applying the sinh function element-wise to a couple of 2D NumPy arrays.

Because I got overflow errors with NumPy's sinh implementation on a couple of arrays, I switched to the mpmath implementation for those parts of the calculation. Unfortunately, mpmath does not support vectorisation, so I have to iterate over each array element, which makes the calculation much slower than before: the parts that still use NumPy take a few minutes while "effectively" doing the same amount of work, whereas the mpmath parts take hours.

I am currently using the following loop to apply the mpmath sinh function element-wise to the 2D NumPy array:

import numpy as np
import mpmath as mp

# a, b and h are floats and N is an integer, all defined earlier;
# chi_1n_t and h_vector1 are also computed earlier in the script.
y1 = np.linspace(0, h, 250, False)
y2 = np.linspace(h, b, 800)
y = np.concatenate((y1, y2))

chi_1n = np.arange(1, N + 1)*np.pi/a
y_zone1 = np.reshape(y[y <= h], (1, -1))
chi1_y1 = chi_1n*y_zone1

A1 = chi_1n_t*h_vector1
sinhZ1 = np.empty(A1.shape, dtype=object)
for i in range(A1.shape[0]):
    for j in range(A1.shape[1]):
        sinhZ1[i, j] = mp.sinh(A1[i, j])

a, b and h are just some float variables.

Now with NumPy, I could just pass the whole 2D array to the function and it would return the result much faster than mp.sinh, albeit with overflows.

sinhZ1=np.sinh(A1)
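
For illustration with made-up values: float64 can only represent numbers up to about 1.8e308, so np.sinh overflows to inf as soon as its argument exceeds roughly 710.

import numpy as np

big = np.array([[500.0, 720.0], [800.0, 1200.0]])
np.sinh(big)  # entries above ~710 become inf, and NumPy emits
              # "RuntimeWarning: overflow encountered in sinh"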

Is there a way to make the calculations quicker? I thought there may be a Python library that supports vectorisation for its sinh function AND very large numbers.

Making the arguments artificially smaller by capping them would not be ideal, because the array is used as the basis for a series.

Any help is appreciated!


Solution

  • As requested in the comments, you can compute the log of the sinh function without overflow like this:

    import numpy as np
    from scipy.special import logsumexp
    
    def logsinh(x):
        ones = np.ones_like(x)
        return logsumexp([x, -x], b=[ones, -ones], axis=0) - np.log(2)
    

    The log-sum-exp trick implemented by scipy.special.logsumexp computes the log of the sum of exponentials while avoiding over/underflow. The first argument is just the log of the terms in the numerator of sinh, the parameter b is a scaling factor that lets us do subtraction instead of addition, and -np.log(2) is the equivalent of division by 2 within the log function.
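
    For instance, logsinh stays finite for arguments where np.sinh has already overflowed (a small usage sketch; the printed values are rounded):

    x_demo = np.array([1.0, 50.0, 1000.0])
    np.sinh(x_demo)    # [1.18e+00, 2.59e+21, inf] plus an overflow RuntimeWarning
    logsinh(x_demo)    # [0.161, 49.307, 999.307] -- all finite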

    Comparing performance against mpmath.sinh:

    from mpmath import mp
    mp.dps = 16
    
    rng = np.random.default_rng()
    x = rng.random(size=(100000))*10000
    
    mpsinh = np.vectorize(mp.sinh)
    mplog = np.vectorize(mp.log)
    
    %timeit logsinh(x) # 13.1 ms ± 1.92 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
    %timeit mpsinh(x)  # 1.16 s ± 287 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    
    np.testing.assert_allclose(logsinh(x), mplog(mpsinh(x)).astype(np.float64), rtol=5e-15)
    

    If you need to work with negative x, you'll have to use a complex dtype: log(-x) has the same real part as log(x), but with an imaginary part of pi.

    np.testing.assert_allclose(logsinh(-x + 0j), logsinh(x) + np.pi*1j)
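
    Since the array ultimately feeds a series, one practical pattern (a sketch, assuming only ratios or products of the sinh values are eventually needed) is to stay in log space and exponentiate only the final, moderately sized combination:

    # Hypothetical values: each sinh on its own overflows float64,
    # but their ratio is perfectly representable.
    a_big, b_big = 1200.0, 1190.0
    ratio = np.exp(logsinh(a_big) - logsinh(b_big))   # ~exp(10) ≈ 22026.47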