I am trying to use linear constraints with shgo. Here is a simple MWE:
from scipy.optimize import shgo, rosen
# Set up the constraints list
constraints = [{'type': 'ineq', 'fun': lambda x, i=i: x[i+1] - x[i] - 0.1} for i in range(2)]
# Define the variable bounds
bounds = [(0, 20)]*3
# Call the shgo function with constraints
result = shgo(rosen, bounds, constraints=constraints)
# Print the optimal solution
print("Optimal solution:", result.x)
print("Optimal value:", result.fun)
An example solution satisfying these constraints is:
rosen((0.1, 0.21, 0.32))
13.046181
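Evaluating the constraint callables at this point confirms it is feasible (both values are non-negative):
print([c['fun']((0.1, 0.21, 0.32)) for c in constraints])
# roughly [0.01, 0.01] -- both >= 0, so the point satisfies both constraints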
But if you run the code you get:
Optimal solution: None
Optimal value: None
It doesn't find a feasible solution at all! Is this a bug?
@Reinderien showed that the problem is the default sampling method 'simplicial'; either of the other two options makes the optimization work. Concretely, replace the result = line with
result = shgo(rosen, bounds, sampling_method='halton', constraints=constraints)
and you get
Optimal solution: [1.08960341 1.18960341 1.41515624]
Optimal value: 0.04453888080946618
If you use simplicial you get
Failed to find a feasible minimizer point. Lowest sampling point = None
although I don't know why.
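To see the failure programmatically rather than relying on the None fields, you can check result.success and result.message (the exact message text can differ between SciPy versions):
result = shgo(rosen, bounds, constraints=constraints)   # default sampling_method='simplicial'
print(result.success)   # False
print(result.message)   # e.g. 'Failed to find a feasible minimizer point. ...'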
Read the documentation:
Only the COBYLA and SLSQP local minimize methods currently support constraint arguments. If the constraints sequence used in the local optimization problem is not defined in minimizer_kwargs and a constrained method is used then the global constraints will be used. (Defining a constraints sequence in minimizer_kwargs means that constraints will not be added so if equality constraints and so forth need to be added then the inequality functions in constraints need to be added to minimizer_kwargs too).
shgo does not support direct linear constraints, only callables. You should usually prefer matrix-based constraints when they're linear, which in this case requires using minimizer_kwargs instead.
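For the direct fix alone, here is a minimal sketch of the matrix-constraint route (assuming, as the comparison below demonstrates, that the local SLSQP minimizer accepts the LinearConstraint object passed through minimizer_kwargs):
import numpy as np
from scipy.optimize import shgo, rosen, LinearConstraint

# A @ x >= 0.1 row-wise encodes x[i+1] - x[i] >= 0.1 for i = 0, 1
A = np.array([(-1, 1, 0),
              ( 0, -1, 1)])
result = shgo(
    rosen,
    bounds=[(0, 20)]*3,
    sampling_method='halton',
    minimizer_kwargs={'constraints': LinearConstraint(A, lb=(0.1, 0.1))},
)
print(result.x, result.fun)   # should reproduce the ~0.0445 minimum reported below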
But! You also should not use the default sampling method simplicial, which performs poorly on your problem and produces a non-optimal minimum. Either halton or sobol does better, regardless of whether it's given matrix constraints or callables. Efficiency varies depending on whether you also provide the Jacobian. All methods together:
from functools import partial
import numpy as np
from scipy.optimize import shgo, rosen, LinearConstraint
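# spacing(x, i) >= 0 encodes the original constraint x[i+1] - x[i] >= 0.1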
def spacing(x: np.ndarray, index: int) -> float:
return x[index + 1] - x[index] - 0.1
def spacing_jacobian(x: np.ndarray, index: int) -> np.ndarray:
j = np.zeros(3)
j[index + 1] = 1
j[index] = -1
return j
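# Option 1: the constraints as a LinearConstraint, passed to the local minimizer
# (SLSQP by default) through minimizer_kwargs rather than to shgo itself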
lin_constraint = {
'minimizer_kwargs': {
'constraints': LinearConstraint(
A=np.array((
(-1, 1, 0),
( 0, -1, 1),
)),
lb=(0.1, 0.1),
),
},
}
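# Option 2: the same constraints as plain callables, passed to shgo directly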
call_constraint = {
'constraints': [
{
'type': 'ineq',
'fun': partial(spacing, index=i),
}
for i in range(2)
],
}
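# Option 3: the callables again, now with hand-written Jacobians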
j_constraint = {
'constraints': [
{
'type': 'ineq',
'fun': partial(spacing, index=i),
'jac': partial(spacing_jacobian, index=i),
}
for i in range(2)
],
}
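# Run every constraint form against every sampling method and report
# iterations, function evaluations and the minimum found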
for i_constraint, constraint in enumerate((lin_constraint, call_constraint, j_constraint)):
for method in ('sobol', 'halton', 'simplicial'):
result = shgo(
func=rosen,
bounds=[(0, 20)]*3,
sampling_method=method,
**constraint,
)
if result.success:
print(f'{i_constraint} {method:10} '
f'{result.nit:4} {result.nfev:4} {result.fun:8.6f}')
0 sobol 2 334 0.044539
0 halton 2 417 0.044539
0 simplicial 2 24 6.420000
1 sobol 2 490 0.044539
1 halton 2 218 0.044539
2 sobol 2 490 0.044539
2 halton 2 206 0.044539
But repeated runs produce evaluation counts that are all over the place; these methods are non-deterministic. Given that the inter-method performance is a wash, I recommend the matrix-based approach and either sobol or halton. You ask:
If I simply add sampling_method='halton' to my code in the question I get the same answer you do. Does that mean that I did set the constraints correctly?
Correct-ish. Even when there's no difference in performance between your callable constraints and matrix constraints, matrix constraints should be preferred: they're inherently simpler for an optimizer to handle, and they imply the Jacobian with no extra work.
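To make the implied-Jacobian point concrete: the gradient of each row of a linear constraint A @ x is just that row of A, which is exactly what the hand-written spacing_jacobian above reproduces.
A = np.array([(-1, 1, 0),
              ( 0, -1, 1)])
# Each row of A is the constant gradient of the corresponding constraint,
# so the explicit Jacobian callables only restate information already in A.
for i in range(2):
    assert np.array_equal(spacing_jacobian(np.zeros(3), i), A[i])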
a trivial change to the constraint will make shgo fail for all three sampling methods
No, it doesn't fail; it does exactly what you tell it to.
This constrains the parameters of solutions to be in decreasing order instead of increasing order.
No, it doesn't. Re-examine your math, and invert the coefficient signs for your decision variables.
from functools import partial
import numpy as np
from scipy.optimize import shgo, rosen, LinearConstraint
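# Same comparison as above, but with the signs inverted:
# spacing(x, i) >= 0 now encodes x[i] - x[i+1] >= 0.1, i.e. decreasing order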
def spacing(x: np.ndarray, index: int) -> float:
return x[index] - x[index+1] - 0.1
def spacing_jacobian(x: np.ndarray, index: int) -> np.ndarray:
j = np.zeros(3)
j[index + 1] = -1
j[index] = 1
return j
lin_constraint = {
'minimizer_kwargs': {
'constraints': LinearConstraint(
A=np.array((
( 1, -1, 0),
( 0, 1, -1),
)),
lb=(0.1, 0.1),
),
},
}
call_constraint = {
'constraints': [
{
'type': 'ineq',
'fun': partial(spacing, index=i),
}
for i in range(2)
],
}
j_constraint = {
'constraints': [
{
'type': 'ineq',
'fun': partial(spacing, index=i),
'jac': partial(spacing_jacobian, index=i),
}
for i in range(2)
],
}
for i_constraint, constraint in enumerate((lin_constraint, call_constraint, j_constraint)):
for method in ('sobol', 'halton', 'simplicial'):
result = shgo(
func=rosen,
bounds=[(0, 20)]*3,
sampling_method=method,
**constraint,
)
if result.success:
print(f'{i_constraint} {method:10} '
f'{result.nit:4} {result.nfev:4} {result.fun:20.18f} {result.x}')
0 sobol 2 324 0.056257754369320817 [0.89242801 0.79242801 0.62797312]
0 halton 2 192 0.056257656181717651 [0.89243175 0.79243175 0.62794038]
0 simplicial 2 100 0.056257645173465855 [0.89245899 0.79245899 0.62799362]
1 sobol 2 237 0.056257635321605277 [0.89244688 0.79244688 0.62797206]
1 halton 2 198 0.056257635321603119 [0.89244688 0.79244688 0.62797206]
2 sobol 2 237 0.056257635321605104 [0.89244688 0.79244688 0.62797206]
2 halton 2 200 0.056257635321394606 [0.89244691 0.79244691 0.62797208]