Auto-tuner for a car application: the application may change depending on the model of the car, so naturally the objective function is going to change as well. The problem is to tune the parameters to the optimum ones for a specific car model. Input: car model; output: optimum parameters for the application for that car model. I want to solve this with optimization.
I'm trying to minimize a complex nonlinear function subject to two nonlinear constraints, one inequality and one equality. The problem is not bounded per se, but I've put bounds on the parameters anyway to help speed up the optimization, since I know roughly where the correct parameters lie. The parameters are [x0, x1, x2, x3].
I've used the scipy.optimize.minimize() function with the SLSQP method and found good results when the problem is bounded correctly. However, scipy.optimize.minimize() with SLSQP is a local optimizer, so it can get stuck in a local minimum rather than find the global one. I've therefore started using a global optimization method with mystic (differential evolution). Since I'm not an expert in global optimization, I naturally have some questions.
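For reference, a minimal sketch of the local SLSQP setup described above; the objective and constraints here are hypothetical stand-ins (the real objective comes from the car model):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in objective; in the real problem this comes from the car model.
def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2 + x[2]**2 + x[3]**2

# SLSQP expects inequality constraints as fun(x) >= 0 and equalities as fun(x) == 0.
constraints = [
    {"type": "ineq", "fun": lambda x: 4.0 - (x[0]**2 + x[1]**2)},  # nonlinear inequality
    {"type": "eq",   "fun": lambda x: x[2] + x[3] - 0.5},          # nonlinear equality
]
bounds = [(0, 3)] * 4          # bounds to narrow the search, as in the question
x0 = np.array([0.5, 0.5, 0.5, 0.5])

res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, res.fun)
```

Whether this finds the global optimum depends entirely on x0, which is exactly the limitation that motivates a global method.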
If I choose the bounds too wide the optimizer (mystic.differential_evolution) will stop iterating after a while and print:
STOP("ChangeOverGeneration with {'tolerance': 0.005, 'generations': 1500}")
When I run the solution that the optimizer found, I see that the result is not as good as when I shrink the bounds. Obviously the global optimizer has not found the global optimum, yet it stopped iterating. I know that there are multiple parameter sets that yield the same global minimum.
Since the objective function may change with the car model, I want the bounds to remain relatively broad in case the global optimum shifts, which would change the correct parameters.
def optimize_mystic_wd(T, param_wp, w_max, safe_f):
    import numpy as np
    from mystic.monitors import VerboseLoggingMonitor
    from mystic.penalty import quadratic_inequality, quadratic_equality
    from mystic.solvers import diffev2

    def objective(x):
        from model_func import model
        val, _ = model(param_wp, x)
        return -val  # maximize the model output by minimizing its negative

    def penalty1(x):  # max of wd(t) over [0, T] must satisfy <= 2*pi
        t = np.linspace(0, T, 100)
        wd = x[0] * np.sin(x[1] * t + x[3]) + x[2]
        return wd.max() - 2 * np.pi

    def penalty2(x):  # integral of wd(t) over [0, T] must equal 2*pi
        return x[0] * (np.cos(x[3]) - np.cos(x[1] * T + x[3])) / x[1] + x[2] * T - 2 * np.pi

    @quadratic_inequality(penalty1, k=1e12)
    @quadratic_equality(penalty2, k=1e12)
    def penalty(x):
        return 0.0

    b1 = (0, 2 * np.pi)
    b2 = (0, 2 * np.pi / (2 * T))
    b3 = (0, 2 * np.pi)
    b4 = (0, 2 * np.pi / T)
    bounds = [b1, b2, b3, b4]

    stepmon = VerboseLoggingMonitor(1, 1)
    result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty,
                     npop=40, gtol=1500, maxiter=4000, disp=True,
                     full_output=True, itermon=stepmon, handler=True, retall=True)
I'm the mystic author. With regard to your questions:
Differential evolution can be tricky. It randomly mutates your candidate solution vector and accepts changes that improve the cost. The default stop condition is that it quits when ngen number of steps have occurred with no improvement. This means that if the solver stops early, it's probably not even in a local minimum. There are several ways, however, to help ensure the solver has a better chance of finding the global minimum:

- Increase ngen, the number of steps to go without improvement.
- Increase npop, the number of candidate solutions each iteration.
- Pick a termination condition that doesn't rely on ngen.
Personally, I usually use a very large ngen as the first approach. The consequence is that the solver will tend to run a very long time until it randomly finds the global minimum. This is expected for differential evolution.

Yes.
I'm not sure what you mean by the last question. With mystic, you certainly can broaden your parameter range either at optimizer start, or at any point along the way. If you use the class interface (DifferentialEvolutionSolver, not the "one-liner" diffev), then you have the option to Step the optimizer through the optimization, potentially changing the range (or constraints, or penalties) at any step.

Lastly, you might want to look at mystic's ensemble solvers, which enable you to sample N optimizers from a distribution, each with different initial conditions. In that case, you'd pick fast local solvers, with the intent of quickly searching the local space, while sampling over the distribution helps guarantee you have searched globally. It's like a traditional grid search, but with optimizers starting at each point of the "grid" (and using a distribution, not necessarily a grid).
I might also suggest having a look at this example, which demonstrates how to use mystic.search.Searcher
, whose purpose is to (for example) efficiently keep spawning solvers looking for local minima until you have found all the local minima, and hence the global minimum.