Tags: python, scipy, scipy-optimize, newtons-method

How to make scipy newton_krylov use a different derivative approximation method


After reading the documentation, it seems like newton_krylov uses forward differences to approximate the Jacobian-vector product, but I can't see any direct way to make it use another method or a custom one.
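As I understand it, the forward-difference Jacobian-vector product described in the docs amounts to something like the sketch below (simplified: the real implementation also rescales the step using the norms of x and v, and `approx_Jv` is just an illustrative name):

```python
import numpy as np

# Simplified sketch of the forward-difference matvec that
# newton_krylov performs internally; the actual implementation
# also rescales the step, so this is illustrative only.
def approx_Jv(F, x, v, rdiff=1e-6):
    return (F(x + rdiff * v) - F(x)) / rdiff

# Sanity check on a linear map, whose Jacobian is exactly A:
A = np.array([[2.0, 1.0], [0.0, 3.0]])
F = lambda x: A @ x
x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
# approx_Jv(F, x, v) should be close to A @ v
```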

Using the tools in the documentation, I tried to make it use a custom method, and wrote this implementation to test whether the results were the same:

import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

# Function
def uniform_problem(x, A, b):
    return b - A@x

size = 12

A = np.random.uniform(-1, 1, size=(size, size))
b = np.random.uniform(-1, 1, size=(size, ))

xr = np.random.uniform(-1, 1, size=(size, ))  # root
x0 = np.random.uniform(-1, 1, size=(size, ))  # initial guess

F = lambda x: uniform_problem(x, A, b) - uniform_problem(xr, A, b)

# Arbitrary parameters
max_iter = 10
tol = 1e-3
h = 1e-4

repeats = 5000
# Using my own implementation of forward differences

def get_jacobian_vector_product_fdf(F, x, v, h=1e-5):
    step = h * v
    return (F(x + step) - F(x)) / h

error1 = 0
for i in range(repeats):
    x = x0.copy()

    lambdaJv = lambda v: get_jacobian_vector_product_fdf(F, x, v, h)
    linear_operator = LinearOperator((size, size), matvec=lambdaJv)

    solution1 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h, inner_M=linear_operator)
    error1 += np.linalg.norm(F(solution1))
error1 /= repeats
print(error1) # approx 1.659173186802721
# Using the default method

error2 = 0
for i in range(repeats):
    x = x0.copy()
    
    solution2 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h)
    error2 += np.linalg.norm(F(solution2))
error2 /= repeats
print(error2) # approx 0.024629534404425796
print(error1/error2) # orders of magnitude of difference

I expected both runs to give the same results, but they are clearly different. I think I'm misunderstanding what these tools from the documentation actually do.
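For reference, the scipy docs describe 'inner_M' as the preconditioner for the inner Krylov iteration, not as a way to override the Jacobian approximation. One way to see this is that an identity preconditioner simply reproduces the default run (a minimal sketch; 'fun' and the tolerance are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

# A simple monotone test problem (arbitrary choice for illustration)
def fun(x):
    return x + 0.1 * x**3 - 1.0

n = 5
# Identity preconditioner: mathematically a no-op for the inner solver
identity = LinearOperator((n, n), matvec=lambda v: v)

sol_default = newton_krylov(fun, np.zeros(n), f_tol=1e-8)
sol_precond = newton_krylov(fun, np.zeros(n), f_tol=1e-8, inner_M=identity)
# Both runs converge to the same root
```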


Solution

  • I misunderstood: the 'inner_M' parameter does not do what I thought it did (it is the preconditioner for the inner Krylov iteration, not the Jacobian approximation). I found a solution by creating a custom scipy Jacobian class, following the same implementation scipy uses.

    import numpy as np
    import scipy.sparse.linalg
    from scipy.linalg import norm
    import scipy.optimize._nonlin
    from scipy.optimize._nonlin import Jacobian, _nonlin_wrapper
    
    # Create your jacobian class using the scipy Jacobian interface.
    class KrylovJacobianCustom(Jacobian):
        # ... code of the custom Jacobian ...
        # (in my case, a variant of the Krylov one)
        ...
    
    # Make your class visible to the scipy scope.
    scipy.optimize._nonlin.KrylovJacobianCustom = KrylovJacobianCustom
    
    # Define your solver function using the scipy nonlinear wrapper.
    newton_krylov_custom = _nonlin_wrapper('newton_krylov_custom', scipy.optimize._nonlin.KrylovJacobianCustom)
    
    

    This way, the new function behaves the same as newton_krylov() or any other nonlinear solver, but uses your implementation of the Jacobian class in the computation. Example:

    def fun(x):
        return [x[0] + 0.5 * x[1] - 1.0, 0.5 * (x[1] - x[0]) ** 2]
    
    sol = newton_krylov_custom(fun, [0, 0])
    # >> array([0.66731771, 0.66536458])
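As a sanity check that the wrapper route is sound: calling _nonlin_wrapper with the unmodified KrylovJacobian should reproduce the stock newton_krylov exactly (note that _nonlin is a private module, so this may break between scipy versions):

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.optimize._nonlin import _nonlin_wrapper, KrylovJacobian

# Rebuild the stock solver through the same wrapper mechanism;
# with the unmodified KrylovJacobian it should match newton_krylov.
nk_rebuilt = _nonlin_wrapper('nk_rebuilt', KrylovJacobian)

def fun(x):
    return [x[0] + 0.5 * x[1] - 1.0, 0.5 * (x[1] - x[0]) ** 2]

sol_a = newton_krylov(fun, [0.0, 0.0])
sol_b = nk_rebuilt(fun, [0.0, 0.0])
# sol_a and sol_b agree
```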