Tags: python, mathematical-optimization, linear-programming, pulp, absolute-value

Maximum of minimum of distances (absolute value in PuLP)


I have the following problem.

I have a finite set L of points in an n-dimensional space, and a model that defines the search space.

I want to find the point x, within that search space, that maximizes its distance d(x, L) to the set of points.

Here d(x, L) is the minimum Manhattan distance from x to any point σ in L, i.e.:

$$d(x, L) = \min_{\sigma \in L} d(x, \sigma) = \min_{\sigma \in L} \sum_{i = 1}^n |x_i - \sigma_i|$$
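
For example (with made-up numbers, just to illustrate the definition): if L = {(1, 2), (3, 4)} and x = (0, 0), then d(x, L) = min(|0 - 1| + |0 - 2|, |0 - 3| + |0 - 4|) = min(3, 7) = 3.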

Therefore, the final optimization problem will be:

$$\max_{x \in \text{SearchSpace}} \min_{\sigma \in L} \sum_{i=1}^n |x_i - \sigma_i|$$
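
Introducing an auxiliary variable z for the inner minimum (this is what z is meant to be in my code below), I believe the problem can be rewritten equivalently as:

$$\max_{x \in \text{SearchSpace}} z \quad \text{subject to} \quad z \le \sum_{i=1}^n |x_i - \sigma_i| \quad \forall \sigma \in L$$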

I am trying to solve the problem with linear programming, using the PuLP library in Python. However, I am having trouble modelling the absolute value.

I have tried several approaches, following what is described in sum of absolute values constraint in semi definite programming, but without success. This is what I have so far:

import pulp as plp

# Define the dimensionality of the problem
n = len(model)  # dimension of the space
m = len(L)      # number of points in L

# Create the LP problem

prob = plp.LpProblem("Maximize minimal manhattan distance", plp.LpMaximize)

# Define the decision variables

x = plp.LpVariable.dicts("x", range(n), lowBound=0, cat = 'Continuous')

# Define the variable to maximize
z = plp.LpVariable("z", lowBound=0, cat = 'Continuous')

print(x, z)

# Define the constraints for x, i.e. the search space constraints
for i in range(n):
    if i == 0:
        prob += x[i] >= model[i][0]
        prob += x[i] <= model[i][1]
    else:
        prob += x[i] >= model[i][0] + x[i-1]
        prob += x[i] <= model[i][1] + x[i-1]

# Define the constraints for the absolute value of the difference between x and each point in L

# first way, still not working
for j in range(m): #for every sigma in L
    
    #prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])  
    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), lowBound=0, cat = 'Continuous')
    #y = plp.LpVariable.dicts("y_var" + str(j), range(n), lowBound=0, cat = 'Continuous')
    #w = plp.LpVariable.dicts("w_var" + str(j), range(n), lowBound=0, cat = 'Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus" + str(j), range(n), lowBound=0, cat = 'Continuous')
    diff_minus = plp.LpVariable.dicts("diff_minus" + str(j), range(n), lowBound=0, cat = 'Continuous')
    
    for i in range(n):
        #print(x[i])
        prob += diff[i] == L[j][i] - x[i]

        prob += diff[i] == diff_plus[i] - diff_minus[i]
        prob += diff_plus[i] >= 0
        prob += diff_minus[i] >= 0
    
    prob += z <= plp.lpSum([diff_plus[i] + diff_minus[i] for i in range(n)])


print(prob)

# Define the objective function

prob += z

# Solve the problem

prob.solve()

# Print the results

print("Status:", plp.LpStatus[prob.status])
print("Objective value:", plp.value(prob.objective))


print("z:", plp.value(z))


optimal_point = [plp.value(x[i]) for i in range(n)]
print('Optimal Point:', optimal_point)

The problematic part is the following:

# Define the constraints for the absolute value of the difference between x and each point in L
for j in range(m): #for every sigma in L
    
    #prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])  
    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), lowBound=0, cat = 'Continuous')
    #y = plp.LpVariable.dicts("y_var" + str(j), range(n), lowBound=0, cat = 'Continuous')
    #w = plp.LpVariable.dicts("w_var" + str(j), range(n), lowBound=0, cat = 'Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus" + str(j), range(n), lowBound=0, cat = 'Continuous')
    diff_minus = plp.LpVariable.dicts("diff_minus" + str(j), range(n), lowBound=0, cat = 'Continuous')
    
    for i in range(n):
        #print(x[i])
        prob += diff[i] == L[j][i] - x[i]

        prob += diff[i] == diff_plus[i] - diff_minus[i]
        prob += diff_plus[i] >= 0
        prob += diff_minus[i] >= 0
    
    prob += z <= plp.lpSum([diff_plus[i] + diff_minus[i] for i in range(n)])

This is because, since it is a maximin problem, I first need to introduce the auxiliary variable z, and then express the absolute values as constraints, which is what I cannot seem to get right.

How should I solve this? Thanks!


Solution

  • Without an example of your input data to test your code, I can't guarantee this solution will work for your problem:

    from __future__ import annotations
    import pulp as plp
    
    
    def find_unique_name(prob: plp.LpProblem, name: str) -> str:
        """Find a unique variable name for the problem.
    
        If a variable with the given name exists, append '_<number>' to it,
        where <number> starts from 1 and increases until a unique name is found.
    
        Parameters
        ----------
        prob : pulp.LpProblem
            The problem to check for existing variable names.
        name : str
            The base name to check for uniqueness.
    
        Returns
        -------
        str
            A unique variable name.
        """
        if name not in prob.variablesDict():
            return name
    
        counter = 1
        while f"{name}_{counter}" in prob.variablesDict():
            counter += 1
        return f"{name}_{counter}"
    
    
    def add_abs(
        prob: plp.LpProblem,
        var: plp.LpVariable | plp.LpAffineExpression,
        big_m: float | int = 100_000,
        abs_var_name: str | None = None,
    ) -> plp.LpVariable:
        """
        Create an LP variable with the absolute value of a variable or expression.
    
        This function introduces an auxiliary variable to the linear programming
        problem that represents the absolute value of a provided variable or
        expression. It also adds the necessary constraints to ensure this auxiliary
        variable correctly captures the absolute value.
    
        Parameters
        ----------
        prob : pulp.LpProblem
            The optimization problem to which the absolute value variable is added.
        var : pulp.LpVariable | pulp.LpAffineExpression
            The variable or expression whose absolute value is to be represented.
        big_m : float | int, default=100000
            A large constant used in the auxiliary constraints that force the
            returned variable to equal the absolute value of ``var``. It must be
            larger than the largest absolute value that ``var`` can take.
        abs_var_name : str | None, optional
            The name for the absolute value variable. If None, a name is generated
            automatically.
    
        Returns
        -------
        pulp.LpVariable
            The auxiliary variable representing the absolute value of the provided
            variable or expression.
    
        Examples
        --------
        The following example demonstrates how :func:`add_abs` can be used to
        find the absolute value of the variable `x`, which has a range of
        :math:`-10 \\leq{} x \\leq{} 0`:
    
        >>> prob = plp.LpProblem("MyProblem", sense=plp.LpMaximize)
        >>> x = plp.LpVariable("x", lowBound=-10, upBound=0)
        >>> abs_x = add_abs(prob, x)
        >>> prob.setObjective(abs_x)

        >>> print(plp.LpStatus[prob.solve(plp.PULP_CBC_CMD(msg=False))])
        Optimal

        >>> print(f"Objective Value: {prob.objective.value():.0f}")
        Objective Value: 10

        >>> for name, lpvar in prob.variablesDict().items():
        ...     print(f"{name} = {lpvar.value():.0f}")
        abs(x) = 10
        x = -10
        binary_var(x) = 0
        """
        var_name = "UNKNOWN_NAME"
        if isinstance(var, plp.LpAffineExpression):
            var_name = var.getName()
            if var_name is None:
                var_name = var.__str__()
        elif isinstance(var, plp.LpVariable):
            var_name = var.name
    
        if abs_var_name is None:
            abs_var_name = f"abs({var_name})"
    
        # Create the absolute value variable
        abs_var_name = find_unique_name(prob, abs_var_name)
        abs_var = plp.LpVariable(abs_var_name, lowBound=0)
    
        # Binary variable to determine the sign of 'var'
        binary_var_name = find_unique_name(prob, f"binary_var({var_name})")
        binary_var = plp.LpVariable(binary_var_name, cat=plp.LpBinary)
    
        # Add constraints
        prob += abs_var >= var
        prob += abs_var >= -var
        prob += abs_var <= var + big_m * (1 - binary_var)
        prob += abs_var <= -var + big_m * binary_var
    
        return abs_var
    
    
    # Example for a 2-dimensional problem
    model = [(0, 10), (0, 10)]   # Two-dimensional search space; the constraints below apply these bounds cumulatively, as in the original code
    L = [(1, 2), (3, 4), (5, 6)] # This defines 3 points in the 2-dimensional space
    
    # Define the dimensionality of the problem
    n = len(model) # Dimension of the space 
    m = len(L)     # Number of points in L
    
    # Create the LP problem
    prob = plp.LpProblem("Maximize minimal manhattan distance", plp.LpMaximize)
    
    # Define the decision variables
    x = plp.LpVariable.dicts("x", range(n), lowBound=0, cat='Continuous')
    
    # Define the variable to maximize
    z = plp.LpVariable("z", lowBound=0, cat='Continuous')
    
    # Define the constraints for x, i.e. the search space constraints
    for i in range(n):
        if i == 0:
            prob += x[i] >= model[i][0]
            prob += x[i] <= model[i][1]
        else:
            prob += x[i] >= model[i][0] + x[i-1]
            prob += x[i] <= model[i][1] + x[i-1]
    
    # Define the constraints for the absolute value of the difference between x and each point in L
    for j in range(m): # for every sigma in L...
        diff = plp.LpVariable.dicts(f"diff_{j}", range(n), lowBound=0, cat = 'Continuous')
        
        for i in range(n):
            absdiff = add_abs(prob, L[j][i] - x[i], abs_var_name=f"abs_diff({j},{i})")
            prob += diff[i] == absdiff
        prob += z <= plp.lpSum([diff[i] for i in range(n)])
    
    
    # Define the objective function
    prob.setObjective(z)
    
    # Solve the problem
    prob.solve()
    
    # Print the results
    print("Status:", plp.LpStatus[prob.status])
    print("Objective value:", plp.value(prob.objective))
    print("z:", plp.value(z))
    
    optimal_point = [plp.value(x[i]) for i in range(n)]
    print('Optimal Point:', optimal_point)
    
    # Prints:
    #
    # Status: Optimal
    # Objective value: 19.0
    # z: 19.0
    # Optimal Point: [10.0, 20.0]
    

    Basically, I've created a function called add_abs that returns a variable representing the absolute value of a pulp.LpVariable or pulp.LpAffineExpression, and then used it to model the absolute difference L[j][i] - x[i] from your original code.
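
    One thing to be careful about is the big_m value. The constraints abs_var >= var and abs_var >= -var only make abs_var at least |var|; since z is being maximized, the solver could otherwise inflate abs_var, so the two big-M constraints together with the binary variable are what pin abs_var down to exactly |var|. That only works if big_m is genuinely larger than any value |var| can take. As a minimal sketch (not part of the original code; it assumes the same model, L and cumulative search-space constraints as above, with non-negative lower bounds, and the names x_upper, sigma_max and safe_big_m are just illustrative), a safe value could be derived from the data instead of hard-coding 100_000:

    # Sketch: derive a safe big_m from the data.
    # With x[i] <= model[i][1] + x[i-1] and non-negative lower bounds, no
    # coordinate of x can exceed the sum of the upper bounds, so every
    # |L[j][i] - x[i]| is bounded by that sum plus the largest |sigma_i|.
    x_upper = sum(ub for _, ub in model)                   # largest value any x[i] can take
    sigma_max = max(abs(c) for sigma in L for c in sigma)  # largest coordinate magnitude in L
    safe_big_m = x_upper + sigma_max + 1                   # strictly larger than any |L[j][i] - x[i]|
    # ...then pass big_m=safe_big_m in the add_abs(...) calls inside the loops.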

    I've tested the code with the "dummy" values for model and L defined at the top of the example above (model = [(0, 10), (0, 10)] and L = [(1, 2), (3, 4), (5, 6)]).
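
    As a quick sanity check (purely illustrative, using the same dummy data), the reported objective matches the Manhattan distance from the returned point to its nearest point in L:

    # Recompute d(x, L) for the reported optimal point [10.0, 20.0]
    point = [10.0, 20.0]
    L = [(1, 2), (3, 4), (5, 6)]
    d = min(sum(abs(xi - si) for xi, si in zip(point, sigma)) for sigma in L)
    print(d)  # 19.0, matching the reported z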