python, math, optimization, scipy, mathematical-optimization

scipy minimize with constraints


Let's suppose I have a matrix

from numpy import array

arr = array([[0.8, 0.2], [-0.1, 0.14]])

with a target function

def matr_t(t):
    return array([[t[0], 0], [t[2] + complex(0, 1) * t[3], t[1]]])

def target(t):
    arr2 = matr_t(t)
    ret = 0
    for i, v1 in enumerate(arr):
        for j, v2 in enumerate(v1):
            ret += abs(arr[i][j] - arr2[i][j])**2
    return ret

Now I want to minimize this target function under the assumption that the t[i] are real numbers, subject to a constraint like t[0] + t[1] = 1.


Solution

  • This constraint

    t[0] + t[1] = 1
    

    would be an equality (type='eq') constraint, where you make a function that must equal zero:

    def con(t):
        return t[0] + t[1] - 1
    

    Then you make a dict of your constraint (list of dicts if more than one):

    cons = {'type':'eq', 'fun': con}
    

    I've never tried it, but I believe that to keep t real, you could use:

    def con_real(t):
        return np.sum(np.iscomplex(t))
    

    And make your cons include both constraints:

    cons = [{'type':'eq', 'fun': con},
            {'type':'eq', 'fun': con_real}]
    

    Then you feed cons into minimize as:

    scipy.optimize.minimize(func, x0, constraints=cons)
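Putting the pieces together, here is a runnable sketch using the question's arr and target (the objective is rewritten with NumPy vectorization, but computes the same sum of squared deviations). Note that minimize already treats x as a real array internally, so starting from a real x0 keeps every t[i] real without an extra constraint:

```python
import numpy as np
from scipy.optimize import minimize

arr = np.array([[0.8, 0.2], [-0.1, 0.14]])

def matr_t(t):
    # t[0], t[1] are the diagonal entries; t[2] + i*t[3] is the lower-left entry
    return np.array([[t[0], 0], [t[2] + 1j * t[3], t[1]]])

def target(t):
    # sum of squared absolute deviations from arr (real scalar)
    return np.sum(np.abs(arr - matr_t(t))**2)

# Equality constraint: t[0] + t[1] - 1 must equal zero
cons = {'type': 'eq', 'fun': lambda t: t[0] + t[1] - 1}

# With an equality constraint, minimize defaults to the SLSQP method
res = minimize(target, x0=np.zeros(4), constraints=cons)

print(res.x)                # optimal parameters
print(res.x[0] + res.x[1])  # should be very close to 1
```

Since the constraint only couples t[0] and t[1], the optimum fits the off-diagonal entry exactly (t[2] ≈ -0.1, t[3] ≈ 0) and splits the remaining mismatch between the diagonal entries.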