python numpy scipy linear-algebra matrix-decomposition

SciPy generalized eigenvalues: eig and eigh yield different results


Using SciPy, I want to solve a generalized eigenvalue problem (see this link).

In my case, matrix A is real and symmetric, albeit not positive definite (as far as I know it doesn't need to be). Matrix B is real, symmetric and positive definite. Thus both SciPy routines eig and eigh should be applicable, and I expected them to yield identical results. (The quick check after the trial matrices below confirms these properties.)

But this was not the case. To reproduce, consider these trial matrices:

A = [[-0.19031723,-0.40125581],[-0.40125581,-0.19031723]]
B = [[1.0,0.38703254],[0.38703254,1.0]]
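
For reference, a quick NumPy-based sanity check of the properties claimed above (symmetry and definiteness of the two trial matrices):

import numpy as np

A = np.array([[-0.19031723, -0.40125581], [-0.40125581, -0.19031723]])
B = np.array([[1.0, 0.38703254], [0.38703254, 1.0]])

# A is symmetric but has a negative eigenvalue, so it is indefinite
print(np.allclose(A, A.T), np.linalg.eigvalsh(A))  # True [-0.59157304  0.21093858]
# B is symmetric and all its eigenvalues are positive, so it is positive definite
print(np.allclose(B, B.T), np.linalg.eigvalsh(B))  # True [0.61296746 1.38703254]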

>>> scipy.linalg.eig(A,B)
# Eigenvalues:
[-0.42650264+0.j,  0.34412688+0.j]
# Eigenvectors:
[[-0.70710678, -0.70710678],[-0.70710678,  0.70710678]]

>>> scipy.linalg.eigh(A,B)
# Eigenvalues:
[-0.42650264,  0.34412688]
# Eigenvectors:
[[-0.60040137,  0.90316332],[-0.60040137, -0.90316332]]

This occurs not only on my computer but is reproducible on different machines.

I am confused: why are the eigenvectors from the two routines not identical? Do I need to be concerned?


Code to reproduce (for example at https://www.katacoda.com/courses/python/playground):

import scipy.linalg as la
A = [[-0.19031723,-0.40125581],[-0.40125581,-0.19031723]]
B = [[1.0,0.38703254],[0.38703254,1.0]]

print("Result of scipy.linalg.eig(A,B)")
print(la.eig(A,B))
print("------------------")
print("Result of scipy.linalg.eigh(A,B)")
print(la.eigh(A,B))

Solution

  • eigh only handles symmetric (Hermitian) matrices and therefore uses a different (and faster) algorithm than eig, which is why it produces different output. For any eigenvalue there are infinitely many eigenvectors (every nonzero scalar multiple of an eigenvector is again an eigenvector), and the two routines simply pick different normalizations, so I don't think you need to be concerned.

    I've never used these methods and am just going off of my linear algebra knowledge and what I found about eigh and eig online, so please correct me if I'm wrong.
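
    A quick way to see this for your example (a sketch, assuming NumPy and SciPy; the normalization conventions are the ones described in the SciPy docs): the eigenvalues agree, the corresponding eigenvector columns are pairwise parallel, and they differ only in scaling. eig returns columns with unit Euclidean norm, while for this symmetric-definite problem eigh returns columns that are orthonormal with respect to B (v.T @ B @ v = I).

    import numpy as np
    import scipy.linalg as la

    A = np.array([[-0.19031723, -0.40125581], [-0.40125581, -0.19031723]])
    B = np.array([[1.0, 0.38703254], [0.38703254, 1.0]])

    w_eig, v_eig = la.eig(A, B)     # general solver for the generalized problem
    w_eigh, v_eigh = la.eigh(A, B)  # symmetric-definite solver, eigenvalues ascending

    # eig does not guarantee any ordering, so sort its output before comparing
    order = np.argsort(w_eig.real)
    w_eig, v_eig = w_eig[order].real, v_eig[:, order]

    print(np.allclose(w_eig, w_eigh))  # True: both routines agree on the eigenvalues

    # Corresponding eigenvector columns point along the same line:
    # the absolute cosine of the angle between them is 1
    for k in range(v_eig.shape[1]):
        cos = v_eig[:, k] @ v_eigh[:, k] / (
            np.linalg.norm(v_eig[:, k]) * np.linalg.norm(v_eigh[:, k]))
        print(np.isclose(abs(cos), 1.0))  # True, True

    # Only the normalization differs:
    print(np.allclose(np.linalg.norm(v_eig, axis=0), 1.0))  # eig: unit Euclidean norm
    print(np.allclose(v_eigh.T @ B @ v_eigh, np.eye(2)))    # eigh: B-orthonormal columns

    In other words, both routines return the same eigenspaces; they just scale the basis vectors differently, so either output is correct.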