python-3.x, differential-equations, jitcode-jitcdde-jitcsde

Is there a way to supply a numerical function to JiTCODE’s function argument instead of a symbolic one?


I am getting a function (a learned dynamical system) through a neural network and want to pass it to JiTCODE to calculate trajectories, Lyapunov exponents, etc. As per the JiTCODE documentation, the function f has to be a symbolic function. Is there any way around this, since JiTCODE is ultimately going to lambdify the symbolic function anyway?

Basically, this is what I'm doing right now:

# learns derivatives from the neural-network model:
# returns an array of numbers [\dot{x}, \dot{y}] for input [x, y]
learned_fn = lambda t, y0: NN_model(t, y0)

ODE = jitcode_lyap(learned_fn, n_lyap=2)
ODE.set_integrator("vode")

Solution

  • First beware that JiTCODE does not take regular functions like your learned_fn as an input. It takes either iterables of symbolic expressions or generator functions returning symbolic expressions. This is why your example code will likely produce an error.
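
    For illustration, here is a minimal sketch of what a valid symbolic input looks like, using a harmonic oscillator as a stand-in for your learned dynamics (y here is JiTCODE’s symbolic state, not a numerical array):

    from jitcode import jitcode, y
    
    oscillator = [ y(1), -y(0) ]   # iterable of symbolic expressions
    ODE = jitcode(oscillator)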

    What you are asking for

    You can “inject” any derivative with the right signature into JiTCODE by changing the f property and telling it that it failed compiling the actual derivative. Here is a minimal example doing this:

    from jitcode import jitcode, y
    
    ODE = jitcode([0])              # dummy symbolic derivative, never actually used
    ODE.f = lambda t,y: y[0]        # inject the actual (plain Python) derivative
    ODE.compile_attempt = False     # pretend compilation failed, so the injected f is used
    ODE.set_integrator("dopri5")
    ODE.set_initial_value([1],0.0)
    
    for time in range(30):
        print(time,*ODE.integrate(time))
    

    Why you probably do not want to do this

    Ignoring Lyapunov exponents for a second, the entire point of JiTCODE is to hard-code your derivative for you and pass it to SciPy’s ode or solve_ivp, which perform the actual integration. Thus the above example code is just an overly complicated way of passing a function to one of SciPy’s standard integrators (here ode), with no advantage. If your NN_model is very efficiently implemented in the first place, you may not even gain a speed boost from JiTCODE’s auto-compilation.
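
    For comparison, this is roughly what the injection example above boils down to when written against scipy.integrate.ode directly (same toy derivative, no JiTCODE involved):

    from scipy.integrate import ode
    
    solver = ode(lambda t, y: [y[0]])   # the derivative as a plain function
    solver.set_integrator("dopri5")
    solver.set_initial_value([1], 0.0)
    
    for time in range(1, 30):
        print(time, *solver.integrate(time))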

    The main reason to use JiTCODE’s Lyapunov-exponent capabilities is that it automatically obtains the Jacobian and the ODE for the tangent-vector evolution (needed for the Benettin method) from the symbolic representation of the derivative. Without a symbolic input, it cannot possibly do this. You could theoretically inject a tangent-vector ODE as well, but then again you would leave little for JiTCODE to do and would probably be better off using SciPy’s ode or solve_ivp directly.
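
    For reference, the tangent-vector ODE in question is the variational equation, which JiTCODE builds symbolically from the Jacobian of f:

    \dot{\delta y} = J(y(t))\,\delta y, \qquad J_{ij} = \partial f_i / \partial y_j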

    What you probably need

    If you want to use JiTCODE, you need to write a small piece of code that translates the output of your neural-network training to a symbolic representation of your ODE as needed by JiTCODE. This is probably much less scary than it sounds. You just need to obtain the trained coefficients and insert them into the equations of the general form of the neural network.
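
    As a sketch of what this could look like, assume (purely for illustration) that your network is a single hidden layer with a tanh activation; the weight arrays below are random stand-ins for your trained parameters, and the dimensions are made up:

    from jitcode import jitcode_lyap, y
    from symengine import tanh
    import numpy as np
    
    n, hidden = 2, 16                      # assumed dimensions
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(hidden, n))      # stand-ins for your trained weights
    b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(n, hidden))
    b2 = rng.normal(size=n)
    
    state = [y(i) for i in range(n)]       # symbolic state variables
    hidden_layer = [
            tanh(sum(float(W1[j, i])*state[i] for i in range(n)) + float(b1[j]))
            for j in range(hidden)
        ]
    f_sym = [
            sum(float(W2[k, j])*hidden_layer[j] for j in range(hidden)) + float(b2[k])
            for k in range(n)
        ]
    
    ODE = jitcode_lyap(f_sym, n_lyap=2)
    ODE.set_integrator("vode")

    From here you can use ODE exactly as in your original snippet (set_initial_value, integrate, etc.).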

    If you are lucky and your NN_model fully supports duck typing, you may do something like this:

    from jitcode import t,y
    n = 10 # dimension of your ODE
    NN_input = [y(i) for i in range(n)]
    learned_fn = NN_model(t,NN_input)[1]
    

    The idea is that you feed NN_model once with abstract, symbolic input (t and NN_input). NN_model then acts once on this abstract input, providing you with an abstract result (this is where the duck-typing support is needed). If I interpreted the output of your NN_model correctly, the second component of this result should be the abstract derivative required by JiTCODE as an input.

    Note that your NN_model appears to expect dimensions to be indices, but JiTCODE’s y expects dimensions to be function arguments. Thus you cannot just choose NN_input = y, but you have to transform it as above.
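
    For illustration, with NN_input defined as above, both of the following refer to the same symbolic expression, which is what makes the substitution work:

    NN_input[3]   # how your NN_model addresses a dimension (by index)
    y(3)          # how JiTCODE addresses the same dimension (as a function argument)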