I want to use Numpy to calculate eigenvalues and eigenvectors. Here is my code:
import numpy as np
from numpy import linalg as LA
lapl = np.array(
    [[ 2, -1, -1,  0,  0,  0],
     [-1,  3,  0, -1,  0, -1],
     [-1,  0,  2, -1,  0,  0],
     [ 0, -1, -1,  3, -1,  0],
     [ 0,  0,  0, -1,  2, -1],
     [ 0, -1,  0,  0, -1,  2]])
w, v = LA.eigh(lapl)
print('Eigenvalues:', np.round(w, 0))
print('Eigenvectors:', np.round(v, 2))
Here is the result:
Eigenvalues: [ 0. 1. 2. 3. 3. 5.]
Eigenvectors: [[ 0.41 0.5 0.41 -0.46 0.34 0.29]
[ 0.41 0. 0.41 0.53 0.23 -0.58]
[ 0.41 0.5 -0.41 -0.07 -0.57 -0.29]
[ 0.41 0. -0.41 0.53 0.23 0.58]
[ 0.41 -0.5 -0.41 -0.46 0.34 -0.29]
[ 0.41 -0.5 0.41 -0.07 -0.57 0.29]]
However, when I run the same matrix through Wolfram Alpha, I get a different result: the eigenvalues are the same, but the eigenvectors are different:
v1 = ( 1, -2, -1, 2, -1, 1)
v2 = ( 0, -1, 1, -1, 0, 1)
v3 = ( 1, -1, 0, -1, 1, 0)
v4 = ( 1, 1, -1, -1, -1, 1)
v5 = (-1, 0, -1, 0, 1, 1)
v6 = ( 1, 1, 1, 1, 1, 1)
Here is the link to the Wolfram Alpha calculations: http://bit.ly/1wC9EHJ
Why am I getting a different result? What should I do in Python to get the same result as produced by Alpha?
Thank you very much for the help!
The results differ for several reasons:
You probably noticed that the numpy matrix v contains the eigenvectors as its columns, i.e. v[:, i] is the eigenvector belonging to w[i], while you're printing the Wolfram results v1 to v6 as rows.
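Continuing from your code, you can confirm the column convention (which is also what the numpy.linalg.eigh documentation describes) by checking the eigenvalue equation for each column; this is just an illustration:

# Each column of v is one eigenvector; printing v.T would give one eigenvector
# per row, which is the layout Wolfram Alpha uses.
for i in range(len(w)):
    vec = v[:, i]
    # lapl @ vec should equal w[i] * vec if vec is an eigenvector for w[i]
    print(np.round(w[i], 0), np.round(vec, 2), np.allclose(lapl @ vec, w[i] * vec))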
The scale (or length) of an eigenvector is not determined by the eigenvalue equation: any nonzero multiple of an eigenvector is again an eigenvector. numpy therefore normalizes each one to length 1, while Wolfram Alpha scales its results to small integer entries, which probably causes most of the confusion.
Note that the overall sign is part of that arbitrary scale as well: multiplying an eigenvector by -1 gives an equally valid eigenvector, so positive and negative entries may appear flipped between the two results.
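As a sketch of how to compare the two outputs despite scale and sign, you could normalize both vectors and fix a sign convention before comparing. The helper below is just illustrative; the vector (1, -2, -1, 2, -1, 1) is Wolfram's v1, which belongs to the eigenvalue 5 and therefore corresponds to numpy's last column:

def same_up_to_scale_and_sign(a, b):
    # normalize both vectors to unit length and make the first nonzero entry positive
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    a = a if a[np.flatnonzero(np.round(a, 12))[0]] > 0 else -a
    b = b if b[np.flatnonzero(np.round(b, 12))[0]] > 0 else -b
    return np.allclose(a, b)

wolfram_v1 = np.array([1, -2, -1, 2, -1, 1], dtype=float)
print(same_up_to_scale_and_sign(v[:, 5], wolfram_v1))   # True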
And last but not least: the order of the eigenvectors is only fixed by the order of their corresponding eigenvalues. So you'd need to look at Wolfram's eigenvalues as well and possibly reorder v1 to v6 accordingly. (A common convention is to sort by the size of the eigenvalues; Wolfram Alpha appears to sort in descending order, while numpy's eigh returns them in ascending order.)
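If you want numpy's output in Wolfram's layout (one eigenvector per row, eigenvalues descending), you can reorder it yourself, for example:

order = np.argsort(w)[::-1]          # indices that sort the eigenvalues in descending order
print('Eigenvalues:', np.round(w[order], 0))
print('Eigenvectors (one per row):')
print(np.round(v[:, order].T, 2))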
In case of matrices with repeated eigenvalues, the corresponding eigenvectors are only determined up to an arbitrary rotation within the shared eigenspace: any basis of that subspace is a valid answer. This does apply to your example, since the eigenvalue 3 occurs twice. The two vectors numpy returns for it and the two vectors Wolfram returns for it only need to span the same 2-dimensional subspace; they won't generally match entry by entry, no matter how you scale or reorder them.
Taking the first four points into account, the eigenvectors for the simple eigenvalues (0, 1, 2 and 5) actually match up to sign and scale; only the pair for the repeated eigenvalue 3 differs by such a rotation. The singularity of the matrix shouldn't matter, since there is only one zero eigenvalue, so the corresponding eigenvector is unique (up to sign and length).
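To convince yourself that the two answers still agree for the repeated eigenvalue, you can check that Wolfram's vectors for the eigenvalue 3 (v2 and v3 from your question, assuming descending order) lie in the plane spanned by numpy's columns 3 and 4. One possible check uses a least-squares fit:

pair = v[:, 3:5]    # numpy's basis of the eigenspace for the eigenvalue 3
for wolf in ([0, -1, 1, -1, 0, 1], [1, -1, 0, -1, 1, 0]):
    wolf = np.array(wolf, dtype=float)
    coeffs = np.linalg.lstsq(pair, wolf, rcond=None)[0]
    # if wolf lies in the span of the two columns, the fit reproduces it exactly
    print(np.allclose(pair @ coeffs, wolf))   # True for both vectors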