Tags: python, scipy-optimize, scipy-optimize-minimize

Using scipy.optimize.minimize to fit the sum of functions to allow individual decomposition


I have some x, y and y_error data

The x and y ranges of my data span several orders of magnitude (the y values in particular are very small).

Plotting this yields the following graph:

plot of data

I am now trying to fit a function(s) to this data.

Based on some background information (irrelevant to this question), I am assuming that the data is best described by the sum of two functions. I want to fit that sum, use scipy.optimize.minimize to recover the free parameters of the total function, and then plot the individual functions by substituting the constrained free parameters into each of them.

The (total) function I am trying to fit takes the form

y = y_e * exp(-b * ((x / x_e)^(1/n) - 1)) + y0 * exp(-x / h)

where b is a known function of n (b = 1.9992*n - 0.3271) and the free parameters that I wish to constrain are: y_e, x_e, n, y0 and h.

After constraining these, I can plot both functions separately by substituting the free params into each function.
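
In code, the decomposition I have in mind looks roughly like this (a sketch with placeholder parameter values; in practice they would be the values returned by the minimiser):

    import numpy as np
    import matplotlib.pyplot as plt

    def first_component(x, y_e, x_e, n):
        b = 1.9992 * n - 0.3271
        return y_e * np.exp(-b * ((x / x_e)**(1 / n) - 1))

    def second_component(x, y0, h):
        return y0 * np.exp(-x / h)

    # placeholder values; these would come from the fit
    y_e, x_e, n, y0, h = 5e-10, 10, 10, 5e-12, 12

    x = np.linspace(0.01, 40, 200)
    plt.plot(x, first_component(x, y_e, x_e, n) + second_component(x, y0, h), 'k')  # total model
    plt.plot(x, first_component(x, y_e, x_e, n), '--')  # individual components
    plt.plot(x, second_component(x, y0, h), '--')
    plt.show()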

To do this, I am using scipy.optimize.minimize to minimise the negative log-likelihood function (which I am under the impression is the best way to approach this problem, but if anyone has any other insights I am open to suggestions!).

My code is as follows, first my log likelihood function:

    import numpy as np
    from scipy.optimize import minimize

    def se_log_likelihood(params, x, y, y_error):

        y_e, x_e, n, y0, h = params #free parameters which we want to constrain

        b = 1.9992*n - 0.3271

        model = y0 * np.exp(-x / h) + y_e * np.exp(-b * ((x/x_e)**(1/n) - 1))

        sigma2 = y_error**2 #variance

        #Gaussian log-likelihood of the data given the model

        log_likelihood = -0.5 * np.sum((y - model)**2 / sigma2 + np.log(2 * np.pi * sigma2))

        return log_likelihood

Then, the negative log likelihood

    nll_se = lambda *args: - se_log_likelihood(*args) #NLL function

Next, minimizing the function

    initial = np.array([5*10**-10, 10, 10, 5*10**-12, 12]) #initial estimates for our parameters


    soln = minimize(nll_se, initial, args=(x_data, y_data, y_error_data))

However, when I look at the returned values of the free parameters in the soln.x array, they are exactly the same as the initial estimates.

The message received is as follows (note, nit = 0), along with the plot produced:

    Optimised parameters:
    I0 = 5e-12
    h = 12.0
    Ie = 5e-10
    re = 10.0
    n = 10.0

    full solution:
      message: Desired error not necessarily achieved due to precision loss.
      success: False
       status: 2
          fun: 75906.2533828226
            x: [ 5.000e-10  1.000e+01  1.000e+01  5.000e-12  1.200e+01]
          nit: 0
          jac: [-1.423e+10 -1.104e+00  1.385e+01  2.483e+21 -9.133e+03]
     hess_inv: [[1 0 ... 0 0]
                [0 1 ... 0 0]
                ...
                [0 0 ... 1 0]
                [0 0 ... 0 1]]
         nfev: 204
         njev: 32

plot produced

i.e. the solver fails. From some googling, a common suggestion was to change the method used by minimize to 'Nelder-Mead'.
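
That is, keeping the same call as above and only adding the method argument:

    soln = minimize(nll_se, initial, args=(x_data, y_data, y_error_data), method='Nelder-Mead')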

Doing so changes the success of the solver to True with the message

    Optimised parameters:
    I0 = 5.621727474995023e-12
    h = 12.956218130275984
    Ie = 7.582365424198714e-21
    re = 2.7556515677652014
    n = 10.0

    full solution:
           message: Optimization terminated successfully.
           success: True
            status: 0
               fun: 2808.1545374167276
                 x: [ 7.582e-21  2.756e+00  1.000e+01  5.622e-12  1.296e+01]
               nit: 340
              nfev: 589
     final_simplex: (array([[ 7.582e-21,  2.756e+00, ...,  5.622e-12,
                             1.296e+01],
                           [ 7.587e-21,  2.756e+00, ...,  5.622e-12,
                             1.296e+01],
                           ...,
                           [ 7.453e-21,  2.756e+00, ...,  5.622e-12,
                             1.296e+01],
                           [ 7.491e-21,  2.756e+00, ...,  5.622e-12,
                             1.296e+01]]), array([ 2.808e+03,  2.808e+03,  2.808e+03,  2.808e+03,
                            2.808e+03,  2.808e+03]))

However, the plot produced is as follows:

plot produced by the Nelder-Mead fit

When the plot I expect and am trying to get is something like

expected plot

where the thick black line is the combined model I have fitted and the dashed lines are the two component models, plotted separately after extracting the free parameter values from the minimisation of the total model.

I do not understand enough about the scipy minimize solver yet to know how to fix this or what the issue is. Could it be an issue with the scale of my data, i.e. very small values combined with an exponential function? If anyone has any ideas or can give some advice I would appreciate it.


Solution

  • First, let's create a synthetic dataset to stand in for the missing data:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import optimize

    def func1(x, n, xe, ye):
        # first component (the y_e term in the question)
        b = 1.9992 * n - 0.3271
        return ye * np.exp(-b * np.power(x / xe, 1. / n) - 1.)

    def func2(x, y0, h):
        # second component (the exponential term)
        return y0 * np.exp(-x / h)

    def model(x, n, xe, ye, y0, h):
        return func1(x, n, xe, ye) + func2(x, y0, h)

    scale = 1e-11  # mimic the very small magnitudes of the original data
    p = (4.2, 11.2, 56 * scale, 0.52 * scale, 12.1)  # true parameters (n, xe, ye, y0, h)

    np.random.seed(12345)
    xexp = np.logspace(-2, np.log10(40), 50, base=10)
    sigma = 1e-13
    sexp = np.full(xexp.shape, sigma)  # constant error bars
    noise = sexp * np.random.normal(size=xexp.size)
    yth = model(xexp, *p)
    yexp = yth + noise
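
    (Optional) A quick plot of the synthetic data confirms that it mimics the original dataset, with y values spanning several decades:

    fig, axe = plt.subplots()
    axe.scatter(xexp, yexp, marker='.')
    axe.set_xscale('log')
    axe.set_yscale('log')  # points at or below the noise floor may not render on a log axis
    axe.grid()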
    

    Then we create the associated loss function, scaling the data so that the parameters converge properly:

    def loss_factory(x, y, s):
        def wrapped(p):
            # half the chi-square: this equals the negative log-likelihood
            # up to an additive constant, since the errors are fixed
            return 0.5 * np.sum(
                np.power((y - model(x, *p)) / s, 2.)
            )
        return wrapped

    loss = loss_factory(xexp, yexp / scale, sexp / scale)
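
    As a quick sanity check (optional), the loss can be evaluated at the scaled true parameters; since every residual is normalised by its error, the value should be of the order of half the number of data points:

    p_scaled = (p[0], p[1], p[2] / scale, p[3] / scale, p[4])
    print(loss(p_scaled))  # roughly 0.5 * 50 = 25, comparable to the fitted value below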
    

    Now we can minimize the loss function with the help of bounds:

    sol = optimize.minimize(
        loss,
        x0=(10, 10, 50, 1, 10),
        bounds=[
            (1e+0, 2e1),
            (1e-1, 2e1),
            (1e-1, 1e2),
            (1e-1, 2e1),
            (1e-1, 2e1),
        ],
    )
    
    #       fun: 26.234799581905154
    #  hess_inv: <5x5 LbfgsInvHessProduct with dtype=float64>
    #       jac: array([ 0.01446452, -0.00025722, -0.0002693 ,  0.035147  ,  0.00058797])
    #   message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
    #      nfev: 606
    #       nit: 62
    #      njev: 101
    #    status: 0
    #   success: True
    #         x: array([ 4.23351444, 12.04435958, 56.49516374,  0.51168294, 11.79830589])
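
    As an optional check, the fitted parameters can be compared directly with the (scaled) true values used to generate the synthetic data; note that the two amplitudes are fitted in units of scale:

    print("fitted:", sol.x)
    print("true:  ", (p[0], p[1], p[2] / scale, p[3] / scale, p[4]))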
    

    This seems to be a fair fit. We can now evaluate the model and its sub-functions:

    xlin = np.logspace(np.log10(xexp.min()), np.log10(xexp.max()), 200, base=10)
    yhat = model(xlin, *sol.x) * scale        # total model, back in original units
    f1hat = func1(xlin, *sol.x[:3]) * scale   # first component
    f2hat = func2(xlin, *sol.x[3:]) * scale   # second component

    fig, axe = plt.subplots()
    axe.scatter(xexp, yexp, marker='.')
    axe.loglog(xlin, yhat)
    axe.plot(xlin, f1hat, "--")
    axe.plot(xlin, f2hat, "--")
    axe.grid()
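
    To quote the fitted amplitudes in the original data units, ye and y0 simply need to be multiplied back by scale:

    n_fit, xe_fit, ye_fit, y0_fit, h_fit = sol.x
    print(f"ye = {ye_fit * scale:.3e}, y0 = {y0_fit * scale:.3e}")  # about 5.6e-10 and 5.1e-12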
    

    For this setup of parameters we can clearly see the behaviour you were expecting:

    (resulting plot: data points, total model, and the two dashed component curves)

    Now, if you drastically increase the n parameter, the orange curve drops down sharply. The fit is still fair, but the orange curve becomes flat and several decades smaller than the green curve:

    p = (10., 11.2, 56 * scale, 0.52 * scale, 12.1)  # same parameters as before, but with n = 10
    
    #       fun: 27.20550074340622
    #  hess_inv: <5x5 LbfgsInvHessProduct with dtype=float64>
    #       jac: array([-0.00020606, -0.00121076,  0.00017231, -0.00311182,  0.00013891])
    #   message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
    #      nfev: 198
    #       nit: 29
    #      njev: 33
    #    status: 0
    #   success: True
    #         x: array([ 9.49277836, 10.25974447, 50.04749403,  0.52249643, 11.48385346])
    

    (resulting plot for the larger n)