I'd like to port some old C code to the C++ library Eigen, but I am struggling to find the same functionality there. Could someone point me in the right direction?
Here is the C code:
#include <stdio.h>
#include <stdlib.h>

/* Prototypes for the mymathlib routines (see the links below). */
int Hessenberg_Form_Elementary(double *A, double *S, int n);
int QR_Hessenberg_Matrix(double *H, double *S, double eigen_real[], double eigen_imag[], int n, int max_iteration_count);

void print_matrix(double *X, int m, int n)
{
    int i, j;
    for (i = 0; i < m; i++)
    {
        for (j = 0; j < n; j++)
        {
            printf(" %8.2f ", *X++);
        }
        printf("\r\n");
    }
}
int main(void)
{
    double *E, *SSS, *eigen_real, *eigen_imag;
    E = (double*)malloc(6 * 6 * sizeof(double)); // 6 x 6 matrix
    E[0] = 62.0918; E[1] = 92.0738; E[2] = 141.992; E[3] = 220.976; E[4] = 182.942; E[5] = 153.563;
    E[6] = 55.3852; E[7] = 80.0744; E[8] = 124.01; E[9] = 191.613; E[10] = 157.989; E[11] = 134.38;
    E[12] = 51.1571; E[13] = 69.8631; E[14] = 84.3026; E[15] = 156.564; E[16] = 142.401; E[17] = 122.357;
    E[18] = -31.8004; E[19] = -46.4817; E[20] = -64.0063; E[21] = -108.415; E[22] = -94.0264; E[23] = -78.8136;
    E[24] = -29.362; E[25] = -41.8382; E[26] = -49.6327; E[27] = -94.0264; E[28] = -87.0707; E[29] = -72.8974;
    E[30] = -25.7936; E[31] = -35.385; E[32] = -41.3963; E[33] = -78.8136; E[34] = -72.8974; E[35] = -62.2423;
    SSS = (double*)malloc(6 * 6 * sizeof(double)); // 6 x 6 matrix
    Hessenberg_Form_Elementary(E, SSS, 6);
    print_matrix(E, 6, 6);
    print_matrix(SSS, 6, 6);
    eigen_real = (double*)malloc(6 * sizeof(double)); // 6 x 1 vector
    eigen_imag = (double*)malloc(6 * sizeof(double)); // 6 x 1 vector
    QR_Hessenberg_Matrix(E, SSS, eigen_real, eigen_imag, 6, 100);
    print_matrix(E, 6, 6);
    print_matrix(SSS, 6, 6);
    print_matrix(eigen_real, 1, 6);
    print_matrix(eigen_imag, 1, 6);
    fflush(stdout);
    // Find the biggest real eigenvalue and its index.
    // Use that index to get the corresponding column from the matrix SSS.
    free(E);
    free(SSS);
    free(eigen_real);
    free(eigen_imag);
    return 0;
}
Output:
// I kinda cheated and used std::cout << double_val; in the print_matrix() method.
// E after Hessenberg_Form_Elementary (so a matrix in Hessenberg form?):
62.0918 -72.1522 -188.995 405.323 179.095 153.563
55.3852 -61.7402 -163.226 351.093 154.964 134.38
0 -17.6131 -17.4618 -21.3177 -19.9402 -1.76394
0 0 2.78347 -13.806 -7.48317 -3.02733
0 0 0 0.0658162 -0.343362 0.145991
0 0 0 0 -6.92092e-12 2.79907e-12
// SSS after Hessenberg_Form_Elementary:
1 0 0 0 0 0
0 1 0 0 0 0
0 0.92366 1 0 0 0
0 -0.574167 -0.439983 0.851948 1 0
0 -0.530142 -0.776866 1 0 0
0 -0.465713 -0.596762 0.222196 -0.272727 1
// E after QR_Hessenberg_Matrix:
1 2.15944 2.67852 0.106495 0.809046 -0.532604
2.66454e-15 1 2.07824 0.678823 0.872037 -2.55826
2.69953e-07 1.09271e-14 1 0.397866 -0.309173 0.931392
6.11079e-23 1.67486e-29 8.31713e-22 1 -0.355554 -1.0731
0 1.25949e-17 -5.60169e-16 -3.09331e-21 -0.0196556 -1.82559
0 0 4.06744e-28 -1.19446e-31 1 0
// SSS after QR_Hessenberg_Matrix:
-0.748886 -1.71875 -2.59538 -0.0284217 -0.620797 -0.81792
-0.620564 -1.57889 -1.71274 -0.354634 -0.799514 2.61718
-0.805355 -0.993972 -0.122444 0.287721 -0.0999392 0.327147
0.469555 0.722225 1.04439 -0.413348 0.299818 -0.981442
0.522391 0.499704 0.607027 0.321252 -0.360368 -0.245386
0.430454 0.463379 0.113466 0.170249 0.820676 0.163626
test eigen_real:
-63.3442 41.345 -8.8706 -0.389763 3.68802e-13 3.68802e-13
test eigen_imag:
0 0 0 0 5.16661e-13 -5.16661e-13
The function Hessenberg_Form_Elementary can be found at: http://www.mymathlib.com/c_source/matrices/eigen/hessenberg_elementary.c (Not enough space to copy them here.)
The function QR_Hessenberg_Matrix can be found at: http://www.mymathlib.com/c_source/matrices/eigen/qr_hessenberg_matrix.c (Not enough space to copy them here.)
So far what I've done is:
Eigen::Matrix<double, 6, 6> E
{
{62.0918, 92.0738, 141.992, 220.976, 182.942, 153.563},
{55.3852, 80.0744, 124.01, 191.613, 157.989, 134.38},
{51.1571, 69.8631, 84.3026, 156.564, 142.401, 122.357},
{-31.8004, -46.4817, -64.0063, -108.415, -94.0264, -78.8136},
{-29.362, -41.8382, -49.6327, -94.0264, -87.0707, -72.8974},
{-25.7936, -35.385, -41.3963, -78.8136, -72.8974, -62.2423}
};
Eigen::Matrix<std::complex<double>, 6, 1> eigenVal = E.eigenvalues();
Eigen::Matrix<double, 6, 1> eigenReal = eigenVal.real();
std::cout << "eigen_real:\r\n" << eigenReal << "\r\n";
// Only the real values are being used.
// These values are super close to the one the C library gives.
// So this is solved.
Eigen::Index maxIndex;
double maxValue = eigenReal.maxCoeff(&maxIndex);
// TODO: missing SSS matrix to use maxIndex against to get the column.
// As for E (i.e. the Hessenberg form) and SSS, I don't understand what to do; nothing gives the same result. I tried:
Eigen::HessenbergDecomposition<Eigen::Matrix<double, 6, 6>> hd;
Eigen::Matrix<double, 6, 6> h = hd.compute(E).matrixH();
// But this gives a different result; some values are the same though, so I am close but not there:
// 62.0918 44.0678 150.14 -292.116 -11.3713 156.761
// -90.6823 -74.0006 -221.494 387.023 20.8076 -211.982
// 0 -9.94278 -6.83954 -22.6277 3.54965 10.6973
// 0 0 3.02513 -12.1272 0.392628 4.51609
// 0 0 0 -0.0512751 -0.384069 -0.0061069
// 0 0 0 0 -1.07839e-11 -1.47677e-12
For easier and faster reproduction I also tried Python with NumPy:
import numpy as np
from scipy.linalg import hessenberg
np.set_printoptions(precision=3, suppress=True)
E = np.array([
    [62.0918, 92.0738, 141.992, 220.976, 182.942, 153.563],
    [55.3852, 80.0744, 124.01, 191.613, 157.989, 134.38],
    [51.1571, 69.8631, 84.3026, 156.564, 142.401, 122.357],
    [-31.8004, -46.4817, -64.0063, -108.415, -94.0264, -78.8136],
    [-29.362, -41.8382, -49.6327, -94.0264, -87.0707, -72.8974],
    [-25.7936, -35.385, -41.3963, -78.8136, -72.8974, -62.2423]
])
H, Q = hessenberg(E, calc_q=True)
print(H)
#[[ 62.092 44.068 150.14 -292.114 -12.08 156.711]
# [ -90.682 -74. -221.494 387.021 21.765 -211.889]
# [ 0. -9.943 -6.84 -22.627 3.501 10.714]
# [ 0. 0. 3.025 -12.127 0.372 4.518]
# [ 0. 0. 0. -0.052 -0.384 -0.007]
# [ 0. 0. 0. 0. -0.002 -0. ]]
# This gives similar, but yet again a different answer for H...
Should I just stick with the C library for this operation: iterate over my matrix, copy the elements into a plain array, pass it to the C library, then copy the result back into an Eigen matrix? That seems like the quickest solution, but I would like to understand why I can't (or how I can) get the same results as the C library.
I think there's a misunderstanding here about what the Hessenberg decomposition does. It's an algorithm that changes your vector space representation so that the matrix is given by a Hessenberg matrix (i.e. an upper triangular matrix plus one extra subdiagonal), where the transformation between the two representations of the vector space is orthogonal.
This means that after the Hessenberg decomposition, the original matrix A can be represented as A == Q * H * Q.transpose(). And if you do the following in your code:
Eigen::HessenbergDecomposition<Eigen::Matrix<double, 6, 6>> hd;
hd.compute(E);
Eigen::Matrix<double, 6, 6> h = hd.matrixH();
Eigen::Matrix<double, 6, 6> Q = hd.matrixQ();
Eigen::Matrix<double, 6, 6> E_after_decomp = Q * h * Q.transpose();
you will see that the matrix E_after_decomp is identical to the original matrix E (up to numerical precision), and the matrix h in your case is definitely of Hessenberg form. This means that Eigen has correctly performed a Hessenberg decomposition in your case.
(Note also that if you read the documentation of the original C library, the SSS matrix is nothing other than the Q matrix that Eigen produces; Hessenberg_Form_Elementary is just another Hessenberg decomposition algorithm that provides both matrices. The C library overwrites the original matrix object in place, while Eigen returns the result separately.)
The issue you are running into is that there are multiple ways to perform this decomposition. While most implementations use the same basic algorithm, there are often different preconditioning steps used to avoid numerical instabilities. And different libraries will use different choices for this, and there's typically not one "right" choice -- there are upsides and downsides to each different way of doing this.
However, I think you are barking up the wrong tree here and missing the high-level intent of the code. If you look at the next step of the original code, QR_Hessenberg_Matrix performs the QR algorithm on an input Hessenberg matrix to calculate the eigenvalues and eigenvectors of that matrix. Since the Hessenberg matrix in question is only an orthogonal transformation away from the original matrix, its eigenvalues are the same as those of the original matrix, and its eigenvectors can be transformed via the Q matrix into the eigenvectors of the original matrix.
Applying the QR algorithm to the Hessenberg form of a matrix is a very standard way of computing the complex eigenvalues of a non-symmetric matrix in linear algebra.
I therefore suspect that the original code just used the Hessenberg decomposition + QR algorithm to calculate the eigenvalues, and that the Hessenberg H/Q matrices were just intermediates to achieve that, possibly because there wasn't a single routine in your original library that immediately calculated the corresponding eigenvalues.
Do you actually need the Hessenberg H and Q matrices later in the code? If so, for what? While the eigenvalues and eigenvectors of a matrix are uniquely defined, many decompositions in linear algebra are not: there is no single correct solution, and multiple different orthogonal transformations are valid. I can't think of a useful algorithm that uses such a decomposition for anything other than an intermediate step toward uniquely defined quantities (such as eigenvalues and eigenvectors). Even different versions of the same library can yield different results for such a decomposition, because improvements between versions can cause the algorithm to select a different orthogonal transformation.
I therefore think that what you really want is to use the EigenSolver class template in Eigen to get the eigenvectors and eigenvalues of the original matrix, and forget about all the intermediate steps in the original code, because that was just an artifact of how that code was written.
In case you're wondering what's going on with the SSS matrix at the end of the original code: the comment in the original routine tells you that it returns the eigenvectors of the original matrix (already pre-multiplied with the Q matrix), but in a rather odd representation, since the SSS matrix is real (plain double values) while the result of the decomposition is complex:
On output, the i-th column of S corresponds to the i-th eigenvalue if that eigenvalue is real and the i-th and (i+1)-st columns of S correspond to the i-th eigenvector with the real part in the i-th column and positive imaginary part in the (i+1)-st column if that eigenvalue is complex with positive imaginary part. The eigenvector corresponding to the eigenvalue with negative imaginary part is the complex conjugate of the eigenvector corresponding to the complex conjugate of the eigenvalue.
Eigen can give you the eigenvectors directly in complex form (EigenSolver::eigenvectors() returns a complex matrix), so you don't need to puzzle out which columns of the SSS matrix to piece together into the correct eigenvectors, as you had to with the original code.
That all said, if you still want to interface between Eigen and plain memory because you do want to use the original library anyway, you can take a look at the Map class template in Eigen. Just be careful not to accidentally map the wrong memory with it.