In Principal Component Analysis
I was wondering why the data projected onto the principal component has variance equal to the eigenvalue corresponding to the principal eigenvector?
I can't find the explanation in my textbook.
In Principal Component Analysis (PCA), you are computing a rotation of the original coordinate system such that all off-diagonal elements of the new covariance matrix become zero (i.e., the new coordinates are uncorrelated). The eigenvectors define the directions of the new coordinate axes, and the eigenvalues are the diagonal elements of the new covariance matrix, i.e., the variances along the new axes. So the eigenvalues, by construction, give the variance along the corresponding eigenvectors.
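Here is the short calculation behind that statement (using $\Sigma$ for the covariance matrix and $x$ for a mean-centered data vector, notation that isn't in the question): if $v$ is a unit eigenvector of $\Sigma$ with eigenvalue $\lambda$, the variance of the projection $v^\top x$ is
$$\operatorname{Var}(v^\top x) \;=\; v^\top \Sigma\, v \;=\; v^\top (\lambda v) \;=\; \lambda\, v^\top v \;=\; \lambda,$$
since $v^\top v = 1$.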
Note that if you were to multiply all your original data values by some constant (with value greater than one), that would have the effect of increasing the variance (and covariance) of the data. If you then perform PCA on the modified data, the eigenvectors you compute would be the same (you still need the same rotation to decorrelate your coordinates), but the eigenvalues would increase, by the square of that constant, because the variance of the data along the new coordinate axes has increased.
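If it helps, here is a minimal numerical check of both points, as a sketch in Python/NumPy on made-up correlated 2-D data (none of the names or numbers come from the question):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical correlated 2-D data, just for illustration.
    X = rng.multivariate_normal(mean=[0, 0],
                                cov=[[3.0, 1.5], [1.5, 1.0]],
                                size=10_000)

    # Eigendecomposition of the sample covariance matrix.
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # columns of eigvecs are unit eigenvectors

    # Variance of the data projected onto each eigenvector matches the eigenvalue.
    projections = X @ eigvecs
    print(np.var(projections, axis=0, ddof=1))  # approximately equal to eigvals
    print(eigvals)

    # Multiplying the data by a constant c leaves the eigenvectors (the rotation)
    # unchanged up to sign, but multiplies every eigenvalue by c**2.
    c = 3.0
    eigvals_scaled, _ = np.linalg.eigh(np.cov(c * X, rowvar=False))
    print(eigvals_scaled / eigvals)             # approximately c**2 for each component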