python, numpy, svd

NumPy SVD Does Not Agree With R Implementation


I saw a question on Stack Overflow about inverting a singular matrix with NumPy, and I wanted to see if NumPy's SVD could provide an acceptable answer.

I had already demonstrated how to use SVD in R for another Stack Overflow answer, so I used that known solution to make sure my NumPy code was working correctly before applying it to the new question.

I was surprised to find that the NumPy solution did not match the R answer: when I substituted the NumPy solution back into the equation A*x = b, I did not recover the right-hand side b.

The U matrices from R and NumPy are the same shape (3x3) and the values are the same, but some of the signs differ. Here is the U matrix I got from NumPy:

[image: U matrix from NumPy]

The D matrices are identical for R and NumPy. Here is D after the large diagonal element is zeroed out:

[image: D matrix after the large diagonal element is zeroed out]

The V matrix I get from NumPy has shape 3x4, while R gives me a 4x3 matrix. The values are similar, but the signs differ, as they did for U. Here is the V matrix I got from NumPy:

[image: V matrix from NumPy]
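
One thing worth noting: sign differences between SVD implementations are expected, because flipping the sign of a column of U together with the matching row of the (transposed) V still gives a valid decomposition. A quick sanity check along these lines (a sketch I could have run, with the same A as in the script below) is to confirm that NumPy's factors still reconstruct A:

    import numpy as np

    A = np.array([[0.0, 1.0, -2.0, 3.0],
                  [5.0, -3.0, 1.0, -2.0],
                  [5.0, -2.0, -1.0, 1.0]])

    U, D, V = np.linalg.svd(A, full_matrices=False)
    # the factors reproduce A even though individual signs differ from R's factors
    print(np.allclose(U @ np.diag(D) @ V, A))  # expected: True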

The R solution vector is:

x = [2.41176,-2.28235,2.15294,-3.47059] 

When I substitute this back into the original equation A*x = b, I recover the RHS vector:

b = [-17.00000,28.00000,11.00000]

NumPy gives me this solution vector:

x = [2.55645,-2.27029,1.98412,-3.23182]

When I substitute the NumPy solution back into the original equation A*x = b, I get this result:

b = [-15.93399,28.04088,12.10690]

Close, but not correct.
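
To put a number on "close, but not correct", here is a small residual check (a sketch using the two solution vectors quoted above):

    import numpy as np

    A = np.array([[0.0, 1.0, -2.0, 3.0],
                  [5.0, -3.0, 1.0, -2.0],
                  [5.0, -2.0, -1.0, 1.0]])
    b = np.array([-17.0, 28.0, 11.0])

    x_r  = np.array([2.41176, -2.28235, 2.15294, -3.47059])  # R solution
    x_np = np.array([2.55645, -2.27029, 1.98412, -3.23182])  # NumPy solution before the fix

    print(np.linalg.norm(A @ x_r - b))   # ~4e-5, just the rounding of the five printed decimals
    print(np.linalg.norm(A @ x_np - b))  # ~1.5, so this x does not actually solve A*x = b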

I repeated the experiment using NumPy's np.linalg.pinv pseudo-inverse method, and it agrees with the R solution.

Here is my complete Python script:

# https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python


import numpy as np

def pseudo_inverse_solver(A, b):
    A_inv = np.linalg.pinv(A)
    x = np.matmul(A_inv, b)
    error = np.matmul(A, x) - b
    return x, error, A_inv

def svd_solver(A, b):
    U, D, V = np.linalg.svd(A, full_matrices=False)
    D_diag = np.diag(np.diag(np.reciprocal(D)))
    D_zero = np.array(D_diag)
    D_zero[D_zero >= 1.0e15] = 0.0
    D_zero = np.diag(D_zero)
    A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U)
    x = np.matmul(A_inv, b)
    error = np.matmul(A, x) - b
    return x, error, A_inv

if __name__ == '__main__':
    """
    Solution from my SO answer
    https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525
    Example showing how to use NumPy SVD
    https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition
    """
    np.set_printoptions(20)
    A = np.array([
        [0.0, 1.0, -2.0, 3.0],
        [5.0, -3.0, 1.0, -2.0],
        [5.0, -2.0, -1.0, 1.0]
    ])
    b = np.array([-17.0, 28.0, 11.0]).T

    x_svd, error_svd, A_inv_svd = svd_solver(A, b)
    error_svd_L2 = np.linalg.norm(error_svd)
    x_pseudo, error_pseudo, A_inv_pseudo = pseudo_inverse_solver(A, b)
    error_pseudo_L2 = np.linalg.norm(error_pseudo)

Any advice on what I've missed with NumPy SVD? Did I make a mistake at this line?

    A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U)

Update: Chrysophylaxs pointed out my error: I needed to transpose U:

    A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), np.transpose(U))

This change solves the problem. Thank you so much!
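
For anyone who hits the same issue: the root cause is that np.linalg.svd returns the third factor already transposed (the documentation calls it vh), while R's svd() returns v itself, which is also why NumPy gave me a 3x4 matrix where R gave a 4x3. With NumPy's convention the pseudo-inverse is vh.T @ diag(1/d) @ U.T, with the reciprocal of the near-zero singular value zeroed. Here is a minimal sketch (the variable names d_inv and keep are mine) that checks the formula against np.linalg.pinv:

    import numpy as np

    A = np.array([[0.0, 1.0, -2.0, 3.0],
                  [5.0, -3.0, 1.0, -2.0],
                  [5.0, -2.0, -1.0, 1.0]])

    U, d, Vh = np.linalg.svd(A, full_matrices=False)
    d_inv = np.zeros_like(d)
    keep = d > 1.0e-15 * d.max()      # drop the near-zero singular value
    d_inv[keep] = 1.0 / d[keep]
    A_inv = Vh.T @ np.diag(d_inv) @ U.T   # note the transpose on U

    print(np.allclose(A_inv, np.linalg.pinv(A)))  # expected: True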


Solution

  • Thanks to Chrysophylaxs, here is the code that is now working correctly:

    # https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python
    
    
    import numpy as np
    
    def pseudo_inverse_solver(A, b):
        A_inv = np.linalg.pinv(A)
        x = np.matmul(A_inv, b)
        error = np.matmul(A, x) - b
        return x, error, A_inv
    
    def svd_solver_so(A, b, max_diag=1.0e15):
        """
        see https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition
        see https://stackoverflow.com/questions/59292279/solving-linear-systems-of-equations-with-svd-decomposition
        :param A: Matrix in equation A*x = b
        :param b: RHS vector in equation A*x = b
        :param max_diag: max value of diagonal for setting to zero.
        :return: x solution, error vector
        """
        U, D, V = np.linalg.svd(A, full_matrices=False)
        D_diag = np.diag(np.diag(np.reciprocal(D)))
        D_zero = np.array(D_diag)
        D_zero[D_zero >= max_diag] = 0.0
        D_zero = np.diag(D_zero)
        A_inv = V.T @ D_zero @ U.T
        c = U.T @ b       # coefficients of b in the basis of left singular vectors
        w = D_zero @ c    # scale by the truncated reciprocal singular values
        x = V.T @ w       # combine the right singular vectors (V here is the transposed factor)
        error = np.matmul(A, x) - b
        return x, error, A_inv
    
    
    def svd_solver(A, b, max_diag=1.0e15):
        # np.linalg.svd returns the third factor already transposed (V^T), so V.T below recovers V
        U, D, V = np.linalg.svd(A, full_matrices=False)
        D_diag = np.diag(np.diag(np.reciprocal(D)))
        D_zero = np.array(D_diag)
        D_zero[D_zero >= max_diag] = 0.0
        D_zero = np.diag(D_zero)
        A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), np.transpose(U))
        x = np.matmul(A_inv, b)
        error = np.matmul(A, x) - b
        return x, error, A_inv
    
    if __name__ == '__main__':
        """
        Solution from my SO answer
        https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525
        Example showing how to use NumPy SVD
        https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition
        """
        np.set_printoptions(20)
        A = np.array([
            [0.0, 1.0, -2.0, 3.0],
            [5.0, -3.0, 1.0, -2.0],
            [5.0, -2.0, -1.0, 1.0]
        ])
        b = np.array([-17.0, 28.0, 11.0]).T
    
        x_svd, error_svd, A_inv_svd = svd_solver(A, b)
        error_svd_L2 = np.linalg.norm(error_svd)
        x_pseudo, error_pseudo, A_inv_pseudo = pseudo_inverse_solver(A, b)
        error_pseudo_L2 = np.linalg.norm(error_pseudo)
        x_so, error_so, A_inv_so = svd_solver_so(A, b)
        error_so_L2 = np.linalg.norm(error_so)
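
With the corrected svd_solver, both SVD variants should agree with np.linalg.pinv and reproduce the R answer. A small check that could be appended at the end of the __main__ block (my own sketch; the reference vector and tolerances come from the numbers quoted above):

        # all three solvers should agree with each other and with the R solution
        x_r = np.array([2.41176, -2.28235, 2.15294, -3.47059])
        assert np.allclose(x_svd, x_pseudo) and np.allclose(x_svd, x_so)
        assert np.allclose(x_svd, x_r, atol=1.0e-4)  # R vector is printed to five decimals
        assert max(error_svd_L2, error_pseudo_L2, error_so_L2) < 1.0e-10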