I have a function func(x) whose argument is a vector of length n. I would like to minimize it with respect to the i-th component of x while keeping the other components fixed. To express it as a function of a single component, I would do something like:
import numpy as np
from scipy.optimize import minimize_scalar

def func(x):
    # do some calculations
    return function_value

def func_i(x_i, x0, i):
    x = np.copy(x0)
    x[i] = x_i
    return func(x)

res = minimize_scalar(func_i, args=(x0, i))
Is there a more efficient way of doing this? This kind of calculation will be done repeatedly, cycling over the variables, and I worry that x = np.copy(x0) and x[i] = x_i will slow things down. (The problem arises in the context of Gibbs sampling, so minimizing with respect to all the variables simultaneously is not what I want.)
One possible way to go faster, rather than making a full copy of x0, is to require the function to put x0 back into the state it found it in after the evaluation. IOW func_i becomes:
def func_i(x_i, x0, i):
    temp = x0[i]
    x0[i] = x_i
    result = func(x0)
    x0[i] = temp
    return result
This avoids the copy into x but requires three extra assignments instead. I don't know the length of your array x0, but I'd hazard a guess that the swap will beat a copy when n is large, since the copy is O(n) while the swap is O(1).
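One quick way to check that guess is to time the two variants directly. A hedged sketch using timeit, with a deliberately trivial func so the copy/swap overhead dominates the measurement (the array size and repeat count are arbitrary choices):

```python
import timeit
import numpy as np

def func(x):
    # Trivial objective so the timing reflects only the copy/swap overhead.
    return x[0]

def func_i_copy(x_i, x0, i):
    # Copy variant from the question.
    x = np.copy(x0)
    x[i] = x_i
    return func(x)

def func_i_swap(x_i, x0, i):
    # Swap-and-restore variant: mutate x0 in place, then undo the change.
    temp = x0[i]
    x0[i] = x_i
    result = func(x0)
    x0[i] = temp
    return result

x0 = np.zeros(10_000)
t_copy = timeit.timeit(lambda: func_i_copy(1.0, x0, 5), number=10_000)
t_swap = timeit.timeit(lambda: func_i_swap(1.0, x0, 5), number=10_000)
print(f"copy: {t_copy:.4f}s  swap: {t_swap:.4f}s")
```

Note that the swap variant leaves x0 exactly as it found it, so it is safe to use inside minimize_scalar, as long as func itself does not modify its argument.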