Tags: python, mathematical-optimization, linear-programming, pulp, cvxopt

Maximin problem with absolute value (Manhattan/Taxicab Distance)


I have the following problem that I am trying to solve with linear programming.

Given a set L of n-dimensional points and a search space S, the problem is the following:

max_{x ∈ S}  min_{σ ∈ L}  Σ_{i=1}^{n} |x_i − σ_i|

To model it as a linear programming problem, I have

maximize    z
subject to  z ≤ Σ_{i=1}^{n} |x_i − σ_i|    for every σ ∈ L
            x ∈ S

First, the inner minimum is handled by introducing the auxiliary variable z, which is maximized while being kept a lower bound on the distance from x to every point σ ∈ L.

However, when trying to represent the sum of absolute values as explained in sum of absolute values constraint in semi definite programming, I wrote the following (implemented in Python with the PuLP library), but did not manage to get the actual optimal solution:


import pulp as plp

# Define the dimensionality of the problem
n = len(model)  # dimension of the space
m = len(L)      # number of points in L


# Create the LP problem

prob = plp.LpProblem("Maximize minimal manhattan distance", plp.LpMaximize)

# Define the decision variables

x = plp.LpVariable.dicts("x", range(n), cat = 'Continuous')

#Define the variable to maximize
z = plp.LpVariable("z", lowBound=0, cat = 'Continuous')

print(x, z)

# Define the constraints for x, i.e. the search space constraints

for i in range(n):
    if i == 0:
        prob += x[i] >= model[i][0]
        prob += x[i] <= model[i][1]
    else:
        prob += x[i] >= model[i][0] + x[i-1]
        prob += x[i] <= model[i][1] + x[i-1]


# Define the constraints for the absolute value of the difference between x and each point in L


for j in range(m):  # for every sigma in L

    # prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])  # this is how it could be if abs() was acceptable

    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), cat='Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus" + str(j), range(n), cat='Continuous')

    for i in range(n):
        # print(x[i])
        prob += diff[i] == L[j][i] - x[i]

        prob += -diff_plus[i] <= diff[i] <= diff_plus[i]

    prob += z == plp.lpSum([diff_plus[i] for i in range(n)])

# Define the objective function

prob += z

# Solve the problem

prob.solve()

As an example, the set L and the model could be:

L = [(0,0,0),(2, 3, 4), (3,6, 7),(4,5, 8), (4,6, 9), (4,7, 10),  (7, 12, 15)]
model = [(0,7),(0,5),(0,3)]

and, in such a case, some (not all) of the possible optimal solutions are:

[5.0, 10.0, 12.5], [5.5, 9.5, 12.5], [5.5, 10.0, 12.0], [5.5, 10.5, 11.5], [6.0, 9.5, 12.0], [6.0, 10.0, 11.5], [6.0, 10.5, 11.0], [6.5, 9.0, 12.0], [6.5, 9.5, 11.5], [6.5, 10.0, 11.0], [6.5, 10.5, 10.5], [7.0, 9.0, 11.5], [7.0, 9.5, 11.0], [7.0, 10.0, 10.5]
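
For reference, the objective value of a candidate point (its minimum Manhattan distance to the points in L) can be checked with a small plain-Python helper like the one below; the function name is just illustrative and it assumes the L given above:

def min_manhattan_distance(x, points):
    # smallest taxicab distance from candidate x to any point of the set
    return min(sum(abs(xi - pi) for xi, pi in zip(x, p)) for p in points)

print(min_manhattan_distance([5.0, 10.0, 12.5], L))  # gives 6.5 for this candidate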

Can anyone help me understand where I went wrong in rewriting the constraints to represent the sum of absolute values? I tried every way I could think of! Thanks :)


Solution

  • To form the absolute values use something like:

    dplus[i] - dmin[i] = sigma[i]-x[i]
    dplus[i] <= M*b[i]
    dmin[i] <= M*(1-b[i])
    max sum(dplus[i]+dmin[i])
    
    dplus,dmin: positive variables
    b : binary variable
    M = 999  (say)
    

    I ignored here the "forall sigma". I think what you want is:

    dplus[i,L] - dmin[i,L] = sigma[i,L]-x[i]
    dplus[i,L] <= M*b[i,L]
    dmin[i,L] <= M*(1-b[i,L])
    z <= sum(dplus[i,L]+dmin[i,L])  for all L 
    max z
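
    The reason plain bounds like -diff_plus[i] <= diff[i] <= diff_plus[i] are not enough here is that they only force diff_plus[i] >= |diff[i]|, i.e. diff_plus may overestimate the absolute value; that is fine when minimizing, but when maximizing the solver can make diff_plus (and hence z) arbitrarily large. The binary b forces at most one of dplus, dmin to be nonzero, so dplus+dmin really equals |sigma-x|. A rough PuLP sketch of this mixed-integer formulation, reusing the model/L data and search-space constraints from the question (variable names are illustrative), could look like:

    import pulp as plp

    L = [(0, 0, 0), (2, 3, 4), (3, 6, 7), (4, 5, 8), (4, 6, 9), (4, 7, 10), (7, 12, 15)]
    model = [(0, 7), (0, 5), (0, 3)]
    n, m = len(model), len(L)
    M = 999  # big-M, must exceed any attainable |sigma[i] - x[i]|

    prob = plp.LpProblem("max_min_manhattan", plp.LpMaximize)
    x = plp.LpVariable.dicts("x", range(n), cat="Continuous")
    z = plp.LpVariable("z", lowBound=0)

    # search-space constraints, as in the question
    for i in range(n):
        if i == 0:
            prob += x[i] >= model[i][0]
            prob += x[i] <= model[i][1]
        else:
            prob += x[i] >= model[i][0] + x[i-1]
            prob += x[i] <= model[i][1] + x[i-1]

    # split sigma[i]-x[i] into dplus-dmin; the binary picks which side is active
    dplus = plp.LpVariable.dicts("dplus", (range(m), range(n)), lowBound=0)
    dmin = plp.LpVariable.dicts("dmin", (range(m), range(n)), lowBound=0)
    b = plp.LpVariable.dicts("b", (range(m), range(n)), cat="Binary")

    for j in range(m):
        for i in range(n):
            prob += dplus[j][i] - dmin[j][i] == L[j][i] - x[i]
            prob += dplus[j][i] <= M * b[j][i]
            prob += dmin[j][i] <= M * (1 - b[j][i])
        # z is a lower bound on the distance to every sigma, so maximizing z maximizes the minimum
        prob += z <= plp.lpSum(dplus[j][i] + dmin[j][i] for i in range(n))

    prob += z  # objective
    prob.solve()
    print(plp.value(z), [x[i].value() for i in range(n)])

    With the example data above this should reproduce one of the solutions listed in the question (objective value 6.5).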