I have a non-convex quadratic optimization problem of the form
x' * B * x,
where all entries of x are between 0 and 1 and the entries sum to 1.
In scipy.optimize I would try to solve this optimization problem via
import numpy as np
from scipy.optimize import minimize, LinearConstraint
N = 2 # dimension 2 for this example
B = np.array([[2,-1],[-1,-1]]) # symmetric, but indefinite matrix
fnc = lambda x: x.T @ B @ x
res = minimize(fnc, x0=np.ones((N,))/N, bounds=[(0, 1) for m in range(N)],
               constraints=(LinearConstraint(np.ones((N,)), 0.99, 1.01)))
So I start with the initial guess [0.5, 0.5], apply the bounds (0, 1) in each dimension, and handle the equality constraint via a very narrow double inequality constraint.
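As an aside, LinearConstraint also accepts lb == ub, so the sum constraint could be stated as an exact equality instead of the narrow double inequality; a minimal variant, reusing fnc, N and the imports from the snippet above:
eq = LinearConstraint(np.ones((1, N)), 1.0, 1.0)  # lb == ub acts as an equality
res = minimize(fnc, x0=np.ones((N,))/N,
               bounds=[(0, 1) for m in range(N)], constraints=eq)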
Now I would like to translate this to mystic, because scipy does not work well in the high-dimensional, non-convex settings I am interested in.
What I have not been able to find out is how to write the constraints in a form where I only need to supply the matrix B, for variable dimension. All the mystic examples I have found so far do something like this:
def objective(x):
    x0,x1,x2,x3,x4,x5,x6,x7,x8,x9 = x
    return x0**2 + x1**2 + x0*x1 - 14*x0 - 16*x1 + (x2-10)**2 + \
           4*(x3-5)**2 + (x4-3)**2 + 2*(x5-1)**2 + 5*x6**2 + \
           7*(x7-11)**2 + 2*(x8-10)**2 + (x9-7)**2 + 45.0
bounds = [(-10,10)]*10
from mystic.symbolic import generate_constraint, generate_solvers, simplify
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
4.0*x0 + 5.0*x1 - 3.0*x6 + 9.0*x7 - 105.0 <= 0.0
10.0*x0 - 8.0*x1 - 17.0*x6 + 2.0*x7 <= 0.0
-8.0*x0 + 2.0*x1 + 5.0*x8 - 2.0*x9 - 12.0 <= 0.0
3.0*(x0-2)**2 + 4.0*(x1-3)**2 + 2.0*x2**2 - 7.0*x3 - 120.0 <= 0.0
5.0*x0**2 + 8.0*x1 + (x2-6)**2 - 2.0*x3 - 40.0 <= 0.0
0.5*(x0-8)**2 + 2.0*(x1-4)**2 + 3.0*x4**2 - x5 - 30.0 <= 0.0
x0**2 + 2.0*(x1-2)**2 - 2.0*x0*x1 + 14.0*x4 - 6.0*x5 <= 0.0
-3.0*x0 + 6.0*x1 + 12.0*(x8-8)**2 - 7.0*x9 <= 0.0
"""
cf = generate_constraint(generate_solvers(simplify(equations, target=['x5','x3'])))
pf = generate_penalty(generate_conditions(equations))
This is highly verbose and requires inserting all the constraints and parameters manually as a string, which I would like to avoid: the dimensionality and the form of the matrix B will be different each time I need to run the optimization. What I'd like to have (in a perfect world) is something like
def objective(x):
    return x @ B @ x  # numpy syntax
equations = """
np.ones((1,N)) @ x == 1.0
"""
# constraint in a form which can handle variable dimension of x
Is that possible?
Mystic uses lists by default, so you have to convert to an array in the cost function. There are a lot of other ways to create constraints without using symbolic strings, and in your particular case, there's one that works out of the box. I'd do something like this:
>>> import mystic as my
>>> import numpy as np
>>> N = 2 # dimension 2 for this example
>>> B = np.array([[2,-1],[-1,-1]]) # symmetric, but indefinite matrix
>>> c = my.constraints.normalized()(lambda x:x)
>>> bounds = [(0,1)]*N
>>> mon = my.monitors.VerboseMonitor(10)
>>> fnc = lambda x: np.array(x).T @ B @ x
>>> res = my.solvers.diffev2(fnc, x0=bounds, npop=10, bounds=bounds, ftol=1e-4, gtol=100, full_output=1, itermon=mon, constraints=c)
Generation 0 has ChiSquare: -0.920151
Generation 10 has ChiSquare: -0.999667
Generation 20 has ChiSquare: -1.000000
Generation 30 has ChiSquare: -1.000000
Generation 40 has ChiSquare: -1.000000
Generation 50 has ChiSquare: -1.000000
Generation 60 has ChiSquare: -1.000000
Generation 70 has ChiSquare: -1.000000
Generation 80 has ChiSquare: -1.000000
Generation 90 has ChiSquare: -1.000000
Generation 100 has ChiSquare: -1.000000
Generation 110 has ChiSquare: -1.000000
STOP("ChangeOverGeneration with {'tolerance': 0.0001, 'generations': 100}")
Optimization terminated successfully.
Current function value: -1.000000
Iterations: 113
Function evaluations: 1140
>>> res[0]
array([1.07421473e-07, 9.99999993e-01])
>>> res[1]
-1.0000001999996087
>>> my.scripts.log_reader(mon)
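For what it's worth, normalized() simply wraps a function so that the returned x is scaled to sum to the given mass (1.0 by default), so nothing in the setup above is tied to N = 2: only bounds and B change with the dimension. A rough sketch for an arbitrary N (the random symmetric B and the solver settings here are just placeholders, not a recommendation):
import mystic as my
import numpy as np

N = 10                                        # any dimension
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))
B = (A + A.T) / 2                             # symmetric, generally indefinite

fnc = lambda x: np.asarray(x) @ B @ np.asarray(x)
c = my.constraints.normalized()(lambda x: x)  # scales x so that sum(x) == 1
bounds = [(0, 1)] * N

res = my.solvers.diffev2(fnc, x0=bounds, npop=4*N, bounds=bounds,
                         ftol=1e-6, gtol=200, full_output=1, constraints=c)
print(res[0], res[1])
If you ever need more general linear constraints built directly from matrices, I believe mystic.symbolic.linear_symbolic can generate the corresponding symbolic string from the coefficient matrix and right-hand side, but for the sum-to-one constraint the normalized() route above avoids strings entirely.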