
Optimal way of defining a numerically stable sigmoid function for a list in Python


For a scalar variable x, we know how to write a numerically stable sigmoid function in Python:

import numpy as np

def sigmoid(x):
    if x >= 0:
        return 1. / (1. + np.exp(-x))
    else:
        return np.exp(x) / (1. + np.exp(x))
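
For example, for x = -1000 the naive formula 1. / (1. + np.exp(-x)) overflows in np.exp(1000) and raises a RuntimeWarning, while the branched form merely underflows harmlessly (a quick check, assuming numpy is imported as np):

x = -1000.0
print(1. / (1. + np.exp(-x)))        # 0.0, with "overflow encountered in exp"
print(np.exp(x) / (1. + np.exp(x)))  # 0.0, no warning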

For a list of scalars, say z = [x_1, x_2, x_3, ...], and suppose we don't know the sign of each x_i beforehand, we could generalize the above definition and try:

def sigmoid(z):
    result = []
    for x in z:
        if x >= 0:
            result.append(1. / (1. + np.exp(-x)))
        else:
            result.append(np.exp(x) / (1. + np.exp(x)))
    return result

This seems to work. However, I feel this is perhaps not the most Pythonic way. How should I improve the definition in terms of 'cleanness'? Say, is there a way to use a comprehension to shorten the function definition?
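
For concreteness, the comprehension version I have in mind would look something like this:

def sigmoid(z):
    return [1. / (1. + np.exp(-x)) if x >= 0 else np.exp(x) / (1. + np.exp(x))
            for x in z]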

I'm sorry if this has been asked, because I cannot find similar questions on SO. Thank you very much for your time and help!


Solution

  • A fully correct answer (no warnings) was provided by @hao peng, but the solution wasn't explained clearly. That explanation would be too long for a comment, so I'll go for an answer.

    Let's start with an analysis of a few answers (pure numpy answers only):

    @DYZ's accepted answer

    This one is mathematically correct but still gives us warnings. Let's look at the code:

    def sigmoid(x):
        return np.where(
                x >= 0, # condition
                1 / (1 + np.exp(-x)), # For positive values
                np.exp(x) / (1 + np.exp(x)) # For negative values
        )
    

    As both branches are evaluated (they are arguments to np.where, so they have to be), the first branch will give us an overflow warning for large negative values and the second for large positive ones.

    Although the warnings are raised, the overflowed values are never selected by the condition, hence the result is correct.
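
    If you want to keep this version but silence the spurious warnings, one option (my sketch, not part of the original answer) is to wrap the call in np.errstate:

    def sigmoid(x):
        # Both branches are still evaluated; this only suppresses the
        # overflow warnings coming from the branch that is not selected
        with np.errstate(over="ignore"):
            return np.where(x >= 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))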

    Downsides: both branches are always evaluated, and spurious runtime warnings are raised.

    @ynn's answer

    This one is almost correct, but it works only on floating-point values; see below:

    def sigmoid(x):
        return np.piecewise(
            x,
            [x > 0],
            [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))],
        )
    
    
    sigmoid(np.array([0.0, 1.0]))  # [0.5 0.73105858] correct
    sigmoid(np.array([0, 1]))  # [0, 0] incorrect
    

    Why? A longer answer was provided by @mhawke in another thread, but the main point is:

    It seems that piecewise() converts the return values to the same type as the input so, when an integer is input an integer conversion is performed on the result, which is then returned.
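
    Given that behaviour, a minimal workaround (my sketch, not from the original answer) is to promote the input to a floating dtype before calling piecewise:

    def sigmoid(x):
        x = np.asarray(x, dtype=float)  # avoid integer truncation inside piecewise
        return np.piecewise(
            x,
            [x > 0],
            [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))],
        )

    sigmoid(np.array([0, 1]))  # [0.5 0.73105858] now correct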

    Downsides: silently returns incorrect results for integer inputs.

    Improved version of @hao peng's answer

    The idea of the stable sigmoid comes from the identity:

        sigmoid(x) = 1 / (1 + exp(-x)) = exp(x) / (1 + exp(x))

    Both forms are equally efficient in terms of operations if coded correctly (one exp evaluation is enough). Now, exp(-x) overflows for large negative x, while exp(x) overflows for large positive x, so each form is stable exactly on the half-line where its exponent is non-positive.

    Hence we have to branch on the sign of x (zero may go to either branch). Using numpy's masking we can transform only the part of the array which is positive or negative with the matching sigmoid implementation.

    See code comments for additional points:

    def _positive_sigmoid(x):
        return 1 / (1 + np.exp(-x))
    
    
    def _negative_sigmoid(x):
        # Cache exp so you won't have to calculate it twice
        exp = np.exp(x)
        return exp / (exp + 1)
    
    
    def sigmoid(x):
        positive = x >= 0
        # Boolean array inversion is faster than another comparison
        negative = ~positive
    
        # np.empty is faster than np.zeros because it skips zeroing the
        # freshly allocated memory (it contains junk we overwrite anyway)
        # An explicit float dtype keeps integer inputs from truncating the result
        result = np.empty_like(x, dtype=float)
        result[positive] = _positive_sigmoid(x[positive])
        result[negative] = _negative_sigmoid(x[negative])
    
        return result
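
    A quick sanity check (my addition, not from the original answer): extreme and integer inputs both go through without warnings:

    print(sigmoid(np.array([-1000, 0, 1000])))  # [0.  0.5 1. ], no warnings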
    

    Time measurements

    Results (50 runs of @ynn's test case; total time in seconds):

    289.5070939064026 #DYZ
    222.49267292022705 #ynn
    230.81086134910583 #this
    

    Indeed, piecewise seems to be faster here (I'm not sure about the reasons; perhaps the boolean masking and the extra fancy-indexing operations make the masked version slower).

    The code below was used:

    import time
    
    import numpy as np
    
    
    def _positive_sigmoid(x):
        return 1 / (1 + np.exp(-x))
    
    
    def _negative_sigmoid(x):
        # Cache exp so you won't have to calculate it twice
        exp = np.exp(x)
        return exp / (exp + 1)
    
    
    def sigmoid(x):
        positive = x >= 0
        # Boolean array inversion is faster than another comparison
        negative = ~positive
    
        # empty contains junk hence will be faster to allocate than zeros
        result = np.empty_like(x)
        result[positive] = _positive_sigmoid(x[positive])
        result[negative] = _negative_sigmoid(x[negative])
    
        return result
    
    
    N = int(1e4)
    x = np.random.uniform(size=(N, N))
    
    start: float = time.time()
    for _ in range(50):
        y1 = np.where(x > 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
        y1 += 1
    end: float = time.time()
    print(end - start)
    
    start: float = time.time()
    for _ in range(50):
        y2 = np.piecewise(
            x,
            [x > 0],
            [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))],
        )
        y2 += 1
    end: float = time.time()
    print(end - start)
    
    start: float = time.time()
    for _ in range(50):
        y2 = sigmoid(x)
        y2 += 1
    end: float = time.time()
    print(end - start)
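
    As a side note (my suggestion, not part of the original benchmark), timeit usually gives more reliable numbers than manual time.time() bracketing:

    import timeit

    print(min(timeit.repeat(lambda: sigmoid(x), number=5, repeat=3)))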