Tags: python, performance, math, scipy, gamma-function

Fast algorithm for log gamma function


I am trying to write a fast algorithm to compute the log gamma function. My current implementation seems naive: it just iterates 10 million times to compute the log of the gamma function (I am also using numba to optimise the code).

import numpy as np
from numba import njit
EULER_MAS = 0.577215664901532 # Euler-Mascheroni constant
HARMONIC_10MIL = 16.695311365860007 # sum of 1/k for k from 1 to 10,000,000

@njit(fastmath=True)
def gammaln(z):
"""Compute log of gamma function for some real positive float z"""
    out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
    n = 10000000 # number of iters
    for k in range(1,n+1,4):
        # loop unrolling
        v1 = np.log(1 + z/k)
        v2 = np.log(1 + z/(k+1))
        v3 = np.log(1 + z/(k+2))
        v4 = np.log(1 + z/(k+3))
        out -= v1 + v2 + v3 + v4

    return out
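
For reference, the series I am summing is (I believe) the log of the Weierstrass product for the gamma function,

log(gamma(z)) = -EULER_MAS*z - log(z) + sum(z/k - log(1 + z/k) for k = 1, 2, 3, ...)

where the sum of the z/k terms is pulled out of the loop as z*HARMONIC_10MIL.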

I timed my code against the scipy.special.gammaln implementation and mine is hundreds of thousands of times slower, so I am doing something very wrong or very naive (probably both). That said, my answers agree with scipy's to within at least 4 decimal places, even in the worst case.
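
A quick accuracy check against scipy might look like this (a sketch; the test values are arbitrary):

from scipy import special

# compare the naive series against scipy's gammaln for a few sample points
for z in (0.5, 3.7, 12.0):
    print(z, gammaln(z), special.gammaln(z))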

I tried to read the _ufunc code implementing scipy's gammaln function, but I don't understand the Cython code that the _gammaln function is written in.

Is there a faster and more optimised way I can calculate the log gamma function? How can I understand scipy's implementation so I can incorporate it with mine?


Solution

  • The runtime of your function will scale linearly (up to some constant overhead) with the number of iterations, so getting the number of iterations down is key to speeding up the algorithm. Whilst precomputing HARMONIC_10MIL is a smart idea, it actually leads to worse accuracy when you truncate the series: keeping the z/k terms inside the sum and computing only part of the series turns out to give higher accuracy.

    The code below is a modified version of the code posted above (although using Cython instead of numba).

    from libc.math cimport log, log1p
    cimport cython
    cdef:
        float EULER_MAS = 0.577215664901532 # Euler-Mascheroni constant
    
    @cython.cdivision(True)
    def gammaln(float z, int n=1000):
        """Compute log of gamma function for some real positive float z"""
        cdef:
            float out = -EULER_MAS*z - log(z)
            int k
            float t
        for k in range(1, n):
            t = z / k
            out += t - log1p(t)
    
        return out
    

    It is able to obtain a close approximation even after only 100 iterations, as shown in the figure below.

    [Figure: quality of the truncated-series approximation compared with scipy.special.gammaln]

    At 100 iterations, its runtime is of the same order of magnitude as scipy.special.gammaln:

    %timeit special.gammaln(5)
    # 932 ns ± 19 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
    %timeit gammaln(5, 100)
    # 1.25 µs ± 20.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
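
    If you would rather stay with numba than switch to Cython, the same truncated series translates directly; here is a minimal sketch along the lines of the code in the question (the function name is just illustrative, and it is not benchmarked here):

    import numpy as np
    from numba import njit

    EULER_MAS = 0.577215664901532 # Euler-Mascheroni constant

    @njit(fastmath=True)
    def gammaln_nb(z, n=1000):
        """Truncated-series approximation of log(gamma(z)) for real z > 0."""
        out = -EULER_MAS * z - np.log(z)
        for k in range(1, n):
            t = z / k
            # pairing z/k with log1p(z/k) keeps each neglected term at O(t**2)
            out += t - np.log1p(t)
        return out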
    

    The remaining question is of course how many iterations to use. The function log1p(t) can be expanded as a Taylor series for small t (which is relevant in the limit of large k). In particular,

    log1p(t) = t - t ** 2 / 2 + ...
    

    such that, for large k, the argument of the sum becomes

    t - log1p(t) = t ** 2 / 2 + ...
    

    Consequently, the summand vanishes to first order in t and is only of second order, which is negligible once t = z/k is sufficiently small. In other words, the number of iterations should be at least as large as z, preferably at least an order of magnitude larger.
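
    A quick way to see this rule of thumb in action (an illustrative sketch, reusing the gammaln defined above with scipy as a reference):

    from scipy import special

    # absolute error of the truncated series for n comparable to z vs. n much larger than z
    for z in (5.0, 50.0):
        for n in (int(z), int(10 * z), int(100 * z)):
            print(z, n, abs(gammaln(z, n) - special.gammaln(z)))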

    However, I'd stick with scipy's well-tested implementation if at all possible.