Tags: python, numerical-methods, ode

Differentiating OdeSolution object


I would like to compute the residual of the numerical solution to an ordinary differential equation.

Here is the code

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x): 
    return 0.5 * x

t_start = 0
t_end = 2

n = 50

t = np.linspace(t_start, t_end, n)

x_init = 1

solution = solve_ivp(f, [t_start, t_end], [x_init], dense_output = True)

Now I would like to numerically differentiate the result at the points t. I tried naively

from scipy.differentiate import derivative

derivative(solution.sol, t)

This raises an error saying that the sizes do not match.

I tried to overcome this by defining

def g(t):
    return solution.sol(t)[0]

derivative(g, t)

and this does not work either.

How can I fix these errors?

Added clarification: Please note that I am not asking how to implement numerical differentiation, or how to find these derivatives using other modules. My question is: how do I use the scipy function "derivative" on the output of the other scipy function, "solve_ivp"?


Solution

  • You need to vectorize the OdeSolution interpolator to make it work, or simply use np.gradient from numpy. In both cases the derivative is estimated numerically. The size mismatch arises because derivative expects an elementwise-vectorized callable (the output must have the same shape as the input), whereas solution.sol maps an array of m times to an array of shape (n_states, m).

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import integrate, differentiate
    

    First we solve the IVP:

    def model(t, y): 
        return 0.5 * y
    
    t = np.linspace(0, 2, 500)
    
    solution = integrate.solve_ivp(model, [t.min(), t.max()], y0=[1.], t_eval=t, dense_output=True)
    

    We can estimate the gradient numerically from the integrated solution with numpy:

    gradient = np.gradient(solution.y[0], solution.t)
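
    Since the model is dx/dt = 0.5 x, this gradient can be checked directly against 0.5 * x, which already gives the residual asked about in the question. A minimal self-contained sketch (same model and grid as above; the printed value is only a rough indicator of the combined solver and finite-difference error):

    ```python
    import numpy as np
    from scipy import integrate

    def model(t, y):
        return 0.5 * y

    t = np.linspace(0, 2, 500)
    solution = integrate.solve_ivp(model, [t.min(), t.max()], y0=[1.], t_eval=t)

    # Second-order finite-difference estimate of dx/dt on the sample grid
    gradient = np.gradient(solution.y[0], solution.t)

    # Residual of the ODE: dx/dt - 0.5 * x, small up to numerical error
    residual = gradient - 0.5 * solution.y[0]
    print(np.max(np.abs(residual)))
    ```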
    

    Or we may vectorize the OdeSolution interpolator:

    @np.vectorize
    def proxy(t):
        return solution.sol(t)[0]
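
    To see why this wrapper is needed: solution.sol maps an array of m times to an array of shape (n_states, m), while derivative requires a callable that preserves the shape of its input. A small sketch illustrating the shapes, assuming the same model as above:

    ```python
    import numpy as np
    from scipy import integrate

    def model(t, y):
        return 0.5 * y

    solution = integrate.solve_ivp(model, [0., 2.], y0=[1.], dense_output=True)

    @np.vectorize
    def proxy(t):
        return solution.sol(t)[0]

    times = np.array([0.0, 1.0, 2.0])
    print(solution.sol(times).shape)  # (1, 3): (n_states, n_times)
    print(proxy(times).shape)         # (3,): same shape as the input
    ```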
    

    Then we can use scipy's derivative function, as you wished:

    derivative = differentiate.derivative(proxy, t)
    
    #     success: [ True  True ...  True  True]
    #      status: [0 0 ... 0 0]
    #          df: [ 5.000e-01  5.010e-01 ...  1.356e+00  1.359e+00]
    #       error: [ 1.112e-09  4.324e-09 ...  3.553e-15  2.665e-15]
    #         nit: [3 3 ... 2 2]
    #        nfev: [13 13 ... 11 11]
    #           x: [ 0.000e+00  4.008e-03 ...  1.996e+00  2.000e+00]
    

    In this second version we receive a _RichResult which contains additional information on the numerical estimation.

    For your model, we simply have exponentials:

    fig, axe = plt.subplots()
    axe.plot(solution.t, solution.y[0])
    axe.plot(solution.t, gradient)
    axe.plot(solution.t, derivative.df, "--")
    axe.grid()
    

    As we can see, both methods agree and both are indeed numerical.
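
    Coming back to the original goal of computing the residual: with derivative.df in hand, the residual of the ODE at the sample points is just the difference between the estimated derivative and the right-hand side. A self-contained sketch, using the same model as above:

    ```python
    import numpy as np
    from scipy import integrate, differentiate

    def model(t, y):
        return 0.5 * y

    t = np.linspace(0, 2, 500)
    solution = integrate.solve_ivp(model, [t.min(), t.max()], y0=[1.],
                                   dense_output=True)

    # Elementwise-vectorized wrapper around the dense-output interpolator
    @np.vectorize
    def proxy(t):
        return solution.sol(t)[0]

    result = differentiate.derivative(proxy, t)

    # Residual of the ODE at the sample points: dx/dt - f(t, x) should be ~0
    residual = result.df - model(t, proxy(t))
    print(np.max(np.abs(residual)))
    ```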

    (Figure: the solution plotted together with both numerical derivative estimates, which overlap.)