I want to optimize over a single parameter, but since I need constraints, I want to use minimize instead of minimize_scalar.
# signature: get_cl(aircraft, FL, mach, mass=aircraft["mass"])
cons = [{'type': 'ineq', 'fun': lambda x: -1 * get_cl(aircraft, FL, x, mass) + max(clrange)},
        {'type': 'ineq', 'fun': lambda x: get_cl(aircraft, FL, x, mass) - min(clrange)}]
res = minimize(lambda x: -1 * calc_max_climb_performance(aircraft, x[0], FL, mass),
               x0=[0.8], bounds=[(.1, .9)], constraints=cons)
So I want to constrain x[0] such that the result of the get_cl function is less than 1.5. If I run the script, it does not work:
mach,cl: 0.8 0.2940953170932848
mach,cl: 0.8000000149011612 0.2940953061373807
mach,cl: 0.1 18.822100293970227
ValueError: One of the requested xi is out of bounds in dimension 2
In the first iteration it starts with x[0] = 0.8, which is the x0 starting value. In the second iteration it changes x[0] slightly. In the third iteration it fails: it selects x[0] = 0.1, which is the lower bound, fine, but that is way outside the constraint range. For this parameter combination the lower limit of x[0] is about 0.45. In any case, the constraint is not working; the optimizer disregards the constraint, so the calculation fails.
I want x[0] to be constrained such that the result of get_cl is between 0 and 1.5.
What am I doing wrong?
I know that I could use a similar function and calculate new bounds for each value combination, but my problem is more about understanding constraints. I have used constraints in the past, and they worked well... so what am I not seeing here?
Thank you
EDIT: I have also tried the constraint call with x[0] instead of x. I assume x[0] is the correct way to do this, since the constraint function receives the full parameter vector.
EDIT2: I have created two tests that you can run to see it yourself.
from scipy.optimize import minimize, rosen

def check_fun(x, a, b):  # positive result only for x between a and b
    print("x[0]", x)
    if a < x < b:
        return 1
    print("FAILED ---> constraint not respected")
    return -1

a = -5  # lower limit
b = 5   # upper limit
cons = [{'type': 'ineq', 'fun': lambda x: check_fun(x[0], a, b)}]
minimize(lambda x: -1*rosen([x[0], a, b]), x0=[1], bounds=[(-10, 10)], constraints=cons)
x[0] 1.0
x[0] 1.0
x[0] 1.0
x[0] 1.0000000149011612
x[0] 9.999998249234977
FAILED ---> constraint not respected
x[0] 9.999998249234977
FAILED ---> constraint not respected
x[0] 9.999998264136138
FAILED ---> constraint not respected
x[0] 10.0
FAILED ---> constraint not respected
x[0] 10.0
FAILED ---> constraint not respected
x[0] 9.999999985098839
FAILED ---> constraint not respected
message: Positive directional derivative for linesearch
success: False
status: 8
fun: -1142617.0
x: [ 1.000e+01]
nit: 7
jac: [-4.200e+05]
nfev: 6
njev: 3
from scipy.optimize import minimize, rosen

def check_lower(x, a):  # lower limit
    print("x[0]=", x)
    if x < a:
        print("FAILED lower limit")
    return x - a

def check_upper(x, b):  # upper limit
    if x > b:
        print("FAILED upper limit")
    return b - x

a = -5  # lower limit
b = 5   # upper limit
cons = [{'type': 'ineq', 'fun': lambda x: check_lower(x[0], a)},
        {'type': 'ineq', 'fun': lambda x: check_upper(x[0], b)}]
minimize(lambda x: -1*rosen([x[0], a, b]), x0=[0], bounds=[(-10, 10)], constraints=cons)
x[0]= 0.0
x[0]= 0.0
x[0]= 0.0
x[0]= 1.4901161193847656e-08
x[0]= -2.0
x[0]= -2.0
x[0]= -1.9999999850988388
x[0]= -4.99628551403187
x[0]= -4.99628551403187
x[0]= -4.996285499130709
message: Optimization terminated successfully
success: True
status: 0
fun: -129849.30693789262
x: [-4.996e+00]
nit: 7
jac: [ 5.989e+04]
nfev: 6
njev: 3
Test 1 fails just like my real application: the constraints are not considered! Test 2 is successful! I can manipulate a and b and see that the optimizer limits its trial points accordingly.
The Test 1 check function is pretty crude: it returns -1 for failed attempts. Test 2 is more realistic, as the values tend to go negative when approaching the border.
Interestingly, my original constraint function is more like the one in Test 2: it simply compares the result of a check function.
You have two problems in the code for Test 1.
First, the constraints must be differentiable. You're using SLSQP, and one of the assumptions it makes is that the constraint functions are continuously differentiable. From the SLSQP paper:
... where the problem functions f : Rn → R and g : Rn → Rm [editor's note: g is the constraint function] are assumed to be continuously differentiable and to have no specific structure. The size of the problem should only be moderately large, mhat < n < 200, for instance, where mhat is the number of active constraints at the solution.
As an intuition for why this matters, suppose the solver is currently at a point where it violates the constraints. If the constraint function is -1 in every direction, then it is unclear which direction to move in to satisfy the constraints. If the constraints are differentiable, the solver can tell whether a step makes them more or less satisfied.
Here is a differentiable version of check_fun(). (Recall that for an 'ineq' constraint, SLSQP treats fun(x) >= 0 as feasible, so this version returns 0 inside the interval and a negative distance outside it.)
def check_fun(x, a, b):  # limits positive results for x between a and b
    print("x[0]", x)
    if a >= x:
        print("FAILED ---> constraint not respected")
        return x - a
    elif b <= x:
        print("FAILED ---> constraint not respected")
        return b - x
    else:
        return 0
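Incidentally, the same piecewise logic can be collapsed into a one-liner, which makes the "0 inside, negative distance outside" shape easier to see. A small equivalent sketch (check_fun_compact is just an illustrative name, not part of the original code):

def check_fun_compact(x, a, b):
    # 0 inside [a, b]; negative distance to the nearest bound outside.
    # Mathematically equivalent to the piecewise check_fun above.
    return min(x - a, b - x, 0)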
If you apply this fix alone, SLSQP still violates the constraints.
The second problem is that the magnitude of the objective function is too large. SLSQP is sensitive to the scaling of its objective function. I have never seen a clear explanation of why this happens, but I have seen many cases where the "Positive directional derivative for linesearch" error is fixed by rescaling the objective.
You can fix this by scaling the objective function to be closer to 1 in magnitude (rosen_wrapper, defined in the full listing below, just logs each call and forwards to rosen):
minimize(lambda x: -1*rosen_wrapper([x[0], a, b])/1e5, x0=[1],
         bounds=[(-10, 10)], constraints=cons, method='SLSQP')
This doesn't change the result of the minimization: the minimum of f(x)/1e5 is attained at the same x as the minimum of f(x).
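If a good scale factor isn't obvious up front, one heuristic (my own sketch, not something SLSQP provides) is to normalize by the objective's magnitude at the starting point:

from scipy.optimize import minimize, rosen

# Heuristic: divide by |f(x0)| so the objective starts near magnitude 1.
# a, b and cons are assumed to be defined as above.
x0 = [1]
f = lambda x: -1 * rosen([x[0], a, b])
scale = max(abs(f(x0)), 1.0)  # guard against f(x0) being ~ 0
minimize(lambda x: f(x) / scale, x0=x0, bounds=[(-10, 10)],
         constraints=cons, method='SLSQP')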
Alternatively, you could use trust-constr, which is less sensitive to objective scaling.
Solution by differentiable constraints and objective scaling
from scipy.optimize import minimize, rosen

def check_fun(x, a, b):  # limits positive results for x between a and b
    print("x[0]", x)
    if a >= x:
        print("FAILED ---> constraint not respected")
        return x - a
    elif b <= x:
        print("FAILED ---> constraint not respected")
        return b - x
    else:
        return 0

def rosen_wrapper(args):
    ret = rosen(args)
    print("rosen", args, ret)
    return ret

a = -5  # lower limit
b = 5   # upper limit
cons = [{'type': 'ineq', 'fun': lambda x: check_fun(x[0], a, b)}]
minimize(lambda x: -1*rosen_wrapper([x[0], a, b])/1e5, x0=[1],
         bounds=[(-10, 10)], constraints=cons, method='SLSQP')
Solution by differentiable constraints and switching to the trust-constr solver

# Same as before, except the last line
minimize(lambda x: -1*rosen_wrapper([x[0], a, b]), x0=[1],
         bounds=[(-10, 10)], constraints=cons, method='trust-constr')
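Applied back to the original get_cl problem, trust-constr also lets you express the two-sided bound directly via NonlinearConstraint rather than as two separate 'ineq' dicts. A sketch, assuming the question's get_cl, calc_max_climb_performance, clrange, aircraft, FL and mass are defined:

from scipy.optimize import minimize, NonlinearConstraint

# min(clrange) <= get_cl(...) <= max(clrange), expressed as one constraint
cl_constraint = NonlinearConstraint(
    lambda x: get_cl(aircraft, FL, x[0], mass),
    min(clrange), max(clrange))
res = minimize(lambda x: -1 * calc_max_climb_performance(aircraft, x[0], FL, mass),
               x0=[0.8], bounds=[(.1, .9)],
               constraints=[cl_constraint], method='trust-constr')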