I am running into an error when trying to run my code for this assignment cost minimization problem in Python, and was hoping someone could point out what I have done incorrectly here.
The question:
My code:
import numpy as np
import time
n = 10 # n: number of jobs/workers
cost = np.random.rand(n,n) # generate an n-by-n random matrix
from scipy.optimize import linprog
c = cost.reshape((n**2,1)) # reshape the cost matrix to a column vector
#row constraints (sum of each row equals 1)
A_eq_rows = np.ones((n, n**2))
b_eq_rows = np.ones(n)
#column constraints (sum of each column equals 1)
A_eq_cols = np.zeros((n, n**2))
for i in range(n):
    A_eq_cols[i, i::n] = 1
b_eq_cols = np.ones(n)
# stack into A_eq and b_eq
A_eq=np.vstack((A_eq_rows, A_eq_cols))
b_eq=np.hstack((b_eq_rows, b_eq_cols))
start_time = time.time() # start timing
res = linprog(c, None, None, A_eq, b_eq)
end_time = time.time() # end timing
elapsed_LP = end_time - start_time
print("Minimum cost = ", res.fun)
print("Elapsed time = ", elapsed_LP)
I keep getting my minimum cost as None and am not entirely sure what I've done wrong. I have a decent fundamental understanding of how to solve this problem, but limited experience doing it in Python.
I know that the problem has n^2 variables and 2n-1 linearly independent constraints. Therefore, a BFS of the assignment problem has 2n-1 basic variables, but for each assignment exactly n of the n^2 variables are nonzero, so every BFS is degenerate. Any help with this would be greatly appreciated.
NOTE: I am aware I can solve this through the function scipy.optimize.linear_sum_assignment()
and have done this but wanted to compare my answer by running this problem as a linear program.
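For completeness, the baseline I am comparing against looks like this (a minimal sketch; the seed is only there to make runs reproducible):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 10
rng = np.random.default_rng(0)          # seeded RNG (illustrative choice)
cost = rng.random((n, n))

# Hungarian-method solver: returns the optimal row/column pairing
row_ind, col_ind = linear_sum_assignment(cost)
print("Minimum cost =", cost[row_ind, col_ind].sum())
```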
Consider n=3. The LP matrix should look like
x00 x01 x02 x10 x11 x12 x20 x21 x22
 1   1   1
             1   1   1
                         1   1   1
 1           1           1
     1           1           1
         1           1           1
As with all network models, the number of nonzeros in each column is 2.
It can be difficult to populate the A matrix, but in this case there is an easy structure:
import numpy as np
from scipy.optimize import linprog

n = 3                           # n: number of jobs/workers
cost = np.random.rand(n, n)     # generate an n-by-n random matrix
c = cost.reshape(n**2)          # reshape the cost matrix to a 1-D vector

# allocate LP matrix
A = np.zeros((2*n, n*n))
# sum_j x[i,j] = 1 for i=0,..,n-1
for i in range(n):
    for j in range(n):
        A[i, n*i+j] = 1.0       # same column mapping as reshape
# sum_i x[i,j] = 1 for j=0,..,n-1
for j in range(n):
    for i in range(n):
        A[n+j, n*i+j] = 1.0     # same column mapping as reshape
# rhs
b = np.ones(2*n)

res = linprog(c, None, None, A, b)
print(res.message)
print("Minimum cost = ", res.fun)
# print assignment
for i in range(n):
    for j in range(n):
        if abs(res.x[n*i+j]) > 0.0001:
            print(f'{i}->{j} with cost {cost[i,j]}')
Indeed the A matrix looks like:
array([[1., 1., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 1., 1., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 1., 1., 1.],
       [1., 0., 0., 1., 0., 0., 1., 0., 0.],
       [0., 1., 0., 0., 1., 0., 0., 1., 0.],
       [0., 0., 1., 0., 0., 1., 0., 0., 1.]])
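As an aside, the same A can be built without loops using Kronecker products; this sketch reproduces the structure above, with the same row-major column mapping as reshape:

```python
import numpy as np

n = 3
I = np.eye(n)
ones = np.ones((1, n))

# row constraints: kron(I, ones) puts a run of n ones in each row
# column constraints: kron(ones, I) repeats the identity n times
A = np.vstack([np.kron(I, ones), np.kron(ones, I)])

# every column has exactly two nonzeros, as in any network model
assert (A.sum(axis=0) == 2).all()
```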
The matrix interface is a bit primitive and can be difficult to use. A much easier way is to use a modeling tool like PuLP, which automates the mapping of decision variables to LP columns.
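For illustration, a minimal PuLP sketch of the same model might look like this (assuming PuLP and its bundled CBC solver are installed; the model and variable names are illustrative):

```python
import numpy as np
import pulp

n = 3
rng = np.random.default_rng(0)
cost = rng.random((n, n))

prob = pulp.LpProblem("assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n), range(n)), lowBound=0)

# objective: total assignment cost
prob += pulp.lpSum(cost[i, j] * x[i][j] for i in range(n) for j in range(n))
# each worker does exactly one job
for i in range(n):
    prob += pulp.lpSum(x[i][j] for j in range(n)) == 1
# each job is done by exactly one worker
for j in range(n):
    prob += pulp.lpSum(x[i][j] for i in range(n)) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Minimum cost =", pulp.value(prob.objective))
```

No explicit column indexing is needed: PuLP keeps track of which LP column each x[i][j] occupies.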