I'm trying to numerically determine the supremum of the following function:
p(x) is a probability function with p(0+) > 0, which I further assume to be monotonically decreasing.
The problem is that, when experimenting with epsilon, I don't get a good value that I can verify against a table of reference values. Or rather, there is no fixed epsilon that gives the best result for all instances of p(x).
My approach was to use scipy.optimize.minimize to find the maximum value of my function.
import math
import numpy as np
import scipy as sp
import scipy.optimize  # make sure the optimize submodule is loaded

def density_upper_scaled(alpha, sigma):
    epsilon = 0.1
    constraint = {'type': 'ineq', 'fun': constraint_function, 'args': (alpha, sigma)}
    # second constraint I also experimented with:
    # {'type': 'ineq', 'fun': lambda x: epsilon * x**2}
    guess = np.array([1.0])
    result = sp.optimize.minimize(density_upper_function, guess,
                                  bounds=[(epsilon, None)], constraints=constraint)
    max_value = -result.fun
    return 1.437 * 200**2 / max_value

def density_upper_function(r):
    return -r**2 * 0.1

def constraint_function(r, alpha, sigma):
    return probability_function(r, alpha, sigma) - 0.1

def probability_function(distance, alpha, sigma):
    ro0 = 200  # theoretical communication distance for sigma = 0
    if sigma != 0:
        p = 0.5 - 0.5 * math.erf((10 / math.sqrt(2)) * (alpha / sigma) * math.log10(distance / ro0))
    else:
        p = int(distance <= ro0)
    return p
My problem is that if epsilon is too small, minimize diverges to infinity.
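To see where (and whether) the objective p(r) * r**2 peaks at all, a coarse grid scan helps; here is a minimal sketch using the probability_function above, with example values for alpha and sigma:

alpha, sigma = 1.0, 1.0  # example values, just for illustration
rs = np.arange(1.0, 5000.0)
vals = [probability_function(r, alpha, sigma) * r**2 for r in rs]
best = int(np.argmax(vals))
print(rs[best], vals[best])  # location and value of the grid maximum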
I then tried a simple script of my own, which sometimes performs better but is worse for sigma/alpha -> 0. The idea: search for the largest x such that p(x) is still greater than or equal to epsilon, and likewise for epsilon * x**2.
x = 1  # starting radius (assumed; alpha and sigma are set elsewhere)
epsilon = 0.2
prior = -1
condition = probability_function(x, alpha, sigma)
while condition >= epsilon and condition * x**2 >= epsilon:
    prior = condition
    condition = round(probability_function(x, alpha, sigma), 5)
    x += 1
    print(x, condition)
print(x, condition, 200**2 * 1.437 / (condition * x**2))
I've also tried working with the second derivative, looking for the x where its sign flips, or taking the value where it has a maximum, but that also doesn't quite match the value I need.
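For illustration, a minimal sketch of that second-derivative idea (estimating p'' with np.gradient on a grid and locating sign flips; alpha and sigma are again example values):

alpha, sigma = 1.0, 1.0  # example values
rs = np.linspace(1.0, 1000.0, 10001)
ps = np.array([probability_function(r, alpha, sigma) for r in rs])
d2 = np.gradient(np.gradient(ps, rs), rs)           # numerical second derivative
flips = rs[np.where(np.diff(np.sign(d2)) != 0)[0]]  # grid points where p'' changes sign
print(flips)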
Does anyone have an idea where my reasoning goes wrong, or know of another method to achieve my goal?
Let's create an MCVE to solve your problem:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize, special
We take the probability function from your Wolfram link (feel free to change it):
def p(r, alpha=10., sigma=np.sqrt(2.), r0=200):
    return 0.5 - 0.5 * special.erf(0.5 * (alpha / sigma) * np.log10(r / r0))
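As a quick sanity check, p should be close to 1 well below r0 and exactly 0.5 at r = r0:

for r in (50., 100., 200., 400., 800.):
    print(r, p(r))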
Your model is therefore as follows (the minus sign turns the supremum into an infimum, since we minimize rather than maximize):
def model(x):
    # x = (epsilon, r)
    return -x[0] * np.power(x[1], 2.)
Without a constraint, the solution is indeed unbounded. Now we add the following nonlinear constraint:
def constraint(x):
    # x = (epsilon, r)
    return p(x[1]) - x[0]

nlc = optimize.NonlinearConstraint(constraint, 0., np.inf)
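Requiring constraint(x) >= 0 simply enforces epsilon <= p(r). A quick check at an arbitrary point, say (epsilon, r) = (0.1, 100.):

print(constraint([0.1, 100.]))  # p(100.) is about 0.93, so this is positive: feasible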
Time to tie everything together:
solution = optimize.minimize(
    model, x0=[0., 1.],
    bounds=[(0., 1.), (0., np.inf)],
    constraints=[nlc]
)
Which returns a bounded solution:
     fun: -19694.23466311135
     jac: array([-46597.57519531, -182.46826172])
 message: 'Optimization terminated successfully'
    nfev: 31
     nit: 14
    njev: 10
  status: 0
 success: True
       x: array([ 0.42264505, 215.86471535])
We can have a look at the solution to check its quality:
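A minimal sketch of such a check (the plotting choices here are just one option), overlaying p(r) and the optimum found:

r = np.linspace(1., 500., 500)
fig, axe = plt.subplots()
axe.plot(r, p(r), label='p(r)')
axe.scatter(solution.x[1], solution.x[0], color='red', label='optimum (r, epsilon)')
axe.set_xlabel('r')
axe.set_ylabel('epsilon')
axe.legend()
axe.grid()
plt.show()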