All optimization algorithms for black-box functions that I know of, such as simulated annealing or Bayesian optimization, return only the global minimum.
I am looking for a Python algorithm that returns the global minimum and also all local minima.
Is there any algorithm that solves this task?
Or is there a global-minimum algorithm that also returns the local minima?
I'm not sure what you mean by "global minimum algorithms", but since you mentioned Simulated Annealing, I will assume you are talking about metaheuristics with global search capabilities.
There is no guarantee that a metaheuristic, which is very often used to solve NP-hard problems, will explore the entire search space, so there is no guarantee that it finds every local minimum. I assume you know that, and that what you actually want is to modify a method so that it returns, instead of just the single best solution found, the list of local minima encountered while searching for the global minimum.
Single-solution-based algorithms such as Tabu Search and Iterated Local Search are built on local search. They perform local search until they reach a locally optimal solution, and then apply their own respective rules to escape from that local minimum. Consider Iterated Local Search (ILS): it performs local search until the solution S is locally optimal, then perturbs the current local optimum to escape from it and obtain another point in the search space, and repeats the local search from there until a stopping criterion is met. In your case, you simply keep every locally optimal solution found during the search.
The pseudo-code below is an ILS algorithm modified to keep all locally optimal solutions found during the search.
HillClimbing(S)
    while S is not locally optimal do
        S ← best(N(S)) // best solution in the neighborhood N of S
    Return S

IteratedLocalSearch()
    L ← {} // set of locally optimal solutions found
    G ← randomSolution() // best solution found so far
    S ← G
    while criterionIsNotMet() do
        S ← HillClimbing(S)
        Add S to L // keep the current local optimum
        if S.objective < G.objective do // minimization
            G ← S // new best solution found
        S ← perturbSolution(Copy(S)) // escape the current local optimum
    Return G, L
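Since you asked for Python, here is a minimal sketch of this modified ILS for a continuous black-box function. Everything in it is illustrative rather than from a library: hill_climbing approximates "locally optimal" by sampling a finite neighborhood, and the step size, perturbation strength, and iteration budget are arbitrary choices you would tune for your problem.

import random

def hill_climbing(objective, s, step=0.1, n_neighbors=20):
    # Greedy local search: move to the best sampled neighbor until none improves.
    value = objective(s)
    while True:
        # Sample a finite neighborhood N(s) around the current point.
        neighbors = [[x + random.uniform(-step, step) for x in s]
                     for _ in range(n_neighbors)]
        candidate = min(neighbors, key=objective)
        candidate_value = objective(candidate)
        if candidate_value >= value:  # no sampled neighbor improves: treat s as locally optimal
            return s, value
        s, value = candidate, candidate_value

def perturb(s, strength=1.0):
    # Random kick to escape the current local minimum (returns a new list, i.e. Copy(S)).
    return [x + random.uniform(-strength, strength) for x in s]

def iterated_local_search(objective, dim, iterations=100):
    local_minima = []                    # L: every local optimum found
    s = [random.uniform(-5, 5) for _ in range(dim)]
    best, best_value = s, objective(s)   # G: best solution found so far
    for _ in range(iterations):          # stopping criterion: fixed iteration budget
        s, value = hill_climbing(objective, s)
        local_minima.append((s, value))  # keep the current local optimum
        if value < best_value:           # minimization
            best, best_value = s, value
        s = perturb(s)                   # escape the current local optimum
    return best, best_value, local_minima

# Example: a 1-D function with two local minima (near x = -2 and x = +2).
f = lambda x: (x[0] ** 2 - 4) ** 2 + x[0]
best, best_value, minima = iterated_local_search(f, dim=1)
print("best solution:", best, "objective:", best_value)
print("local minima recorded:", len(minima))

Note that L will usually contain the same basin many times, so in practice you would deduplicate it afterwards, e.g. by merging solutions that lie within some tolerance of each other.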
This kind of algorithm is easy to implement. If you decide to implement it yourself, get a good reference paper; if that is not enough, you can look for a good implementation on GitHub or MathWorks to base your code on.