
Symmetry of autocovariance matrix by multiplying feature matrix with its transpose


There is a mathematical theorem stating that a matrix A multiplied with its transpose yields a symmetric, positive semi-definite matrix (so its eigenvalues are real and non-negative; it is positive definite only if A has full column rank). Why does the symmetry test fail here for medium-sized random matrices? It always works for small matrices (e.g., 20×20).

import numpy as np
features = np.random.random((50,70))
# Gram matrix of the feature columns; symmetric in exact arithmetic
autocovar = np.dot(np.transpose(features),features)
# element-wise exact equality test for symmetry
print((np.transpose(autocovar) == autocovar).all())

I always get False running this code. What am I doing wrong? I need the autocovariance matrix to perform a PCA, but so far I get complex eigenvalues...

Thanks!


Solution

  • This is due to rounding errors in floating-point arithmetic. np.dot does not know that the result is symmetric: the (i, j) and (j, i) entries are computed as separate dot products whose summation order may differ, so they can disagree in the last few bits. Your matrix is therefore very close to a symmetric matrix numerically, but technically nonsymmetric, and a general-purpose eigensolver such as np.linalg.eig may then return eigenvalues with tiny spurious imaginary parts.
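
    As a quick check (a diagnostic sketch that is not part of the original answer, reusing the names from the question), you can measure how far the computed matrix actually is from symmetric:

    import numpy as np
    features = np.random.random((50, 70))
    autocovar = np.dot(np.transpose(features), features)
    # exact equality fails, but the largest asymmetry is near machine precision
    print(np.abs(autocovar - autocovar.T).max())  # tiny, e.g. on the order of 1e-14
    print(np.allclose(autocovar, autocovar.T))    # True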

    One solution (sort of a hack) is to symmetrize the matrix, i.e., replace it with its symmetric part. This matrix is guaranteed to be symmetric, even in floating point arithmetic, and it will be very close to the matrix you define (near machine precision). This can be achieved via

    autocovar_sym = .5*(autocovar+autocovar.T)
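
    If you would rather avoid the hack (this suggestion goes beyond the original answer but uses only standard NumPy), np.linalg.eigh is the eigensolver intended for symmetric matrices: it reads only one triangle of its input (the lower one by default), so it always returns real eigenvalues, which is what a PCA needs:

    import numpy as np
    features = np.random.random((50, 70))
    autocovar = np.dot(np.transpose(features), features)
    # eigh treats the input as symmetric and returns real eigenvalues
    # (in ascending order) together with orthonormal eigenvectors
    eigvals, eigvecs = np.linalg.eigh(autocovar)
    print(eigvals.dtype)  # float64 -- real, no imaginary parts

    Note that with a 50×70 feature matrix the 70×70 product has rank at most 50, so roughly 20 of its eigenvalues will be (numerically) zero, consistent with the matrix being positive semi-definite rather than positive definite.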
    

    Hope this helps.