Does there exist an implementation of Fortune's eigensolve algorithm?
At the moment I am trying to port it to Python, but so far it does not deliver correct results. The only thing I found was the paper where he describes the algorithm here, and I don't understand where my algorithm goes wrong.
import numpy as np
from sympy import Poly, symbols

lamda = symbols('lamda')

# Creates the generalized companion matrix
def gen_companion(p, S):
    # create Lagrange coeffs
    L_i = []
    for i in range(len(S)):
        den_s = 1
        for j in range(len(S)):
            if not (i == j):
                den_s *= p(S[i] - S[j])
        p_s = p(S[i]) / den_s
        L_i.append(p_s)
    # create Lagrange matrix L
    dim = S.shape[0]
    L = np.zeros((dim, dim), np.complex64)
    for i in range(dim):
        for j in range(dim):
            L[j, i] = np.abs(L_i[i])
    B = np.zeros((dim, dim), np.complex64)
    # create matrix S
    for i in range(dim):
        B[i, i] = S[i]
    return B - L

def eigensolve(p, n_iterations):
    A = np.polynomial.polynomial.polycompanion(coeffs).astype(np.complex64)
    S = qr_algorithm(A, n_iterations)
    for _ in range(n_iterations):
        L = gen_companion(p, S)
        S = qr_algorithm(L, n_iterations)
    return S[0]

def qr_algorithm(A, n_iterations):
    for _ in range(n_iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.array([A[i, i] for i in range(A.shape[0])])

p = Poly((lamda - 10)*(lamda - 20)*(lamda - 4)*(lamda - 5), lamda)
coeffs = np.array(list(reversed(p.all_coeffs())))
n_iterations = 10
v = eigensolve(p, n_iterations)
print("")
In the second part of the algorithm I only get the same eigenvalue over and over:
(20.153511+0j)
(20.153511+0j)
(20.153511+0j)
(20.153511+0j)
(20.153511+0j)
The formula in the paper is
l[i] = P(s[i]) / Q_S'(s[i]) = P(s[i]) / prod( (s[i]-s[j]) for j != i )
This can be implemented, with P the coefficient array (highest power first), as

Q = np.poly(S)
dQ = np.polyder(Q)
l = np.polyval(P, S) / np.polyval(dQ, S)
Note that the vertical bars in the paper are misleading; at any rate they should not be read as absolute values.
As I noted in the comments, this variant of the QR algorithm is a bare, pre-alpha version. Even the first variant that Francis recommended for implementation used shifts to accelerate the convergence of the smallest eigenvalues and thus the splitting of the matrix, which is then deflated to the subspace orthogonal to the smallest eigenspace.
If one does not overly care about computation time, one can just use this primitive variant. It converges to a normal form from which the eigenvalues can be read off, with the eigenvalues in order of descending absolute value. Especially in the later stages of the root finder, the companion matrix will already be close to diagonal. If the diagonal entries are not in the correct order, the QR algorithm exchanges them using small rotations, and thus needs a large number of iterations. This overhead contributes nothing to the precision of the roots/eigenvalues and should therefore be avoided, which your implementation does automatically.
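To illustrate the shifting idea: the following is a minimal sketch of a shifted QR iteration with deflation, using a simple Rayleigh-quotient shift rather than Francis's double-shift scheme (the function name qr_shifted and the tolerances are my own choices, not from the paper). For matrices with well-separated real eigenvalues, such as the companion matrix of your test polynomial, this converges in a handful of iterations per eigenvalue:

```python
import numpy as np

def qr_shifted(A, tol=1e-10, max_iter=500):
    """QR iteration with a Rayleigh-quotient shift and deflation (sketch)."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    eigs = []
    while n > 1:
        for _ in range(max_iter):
            mu = A[n-1, n-1]                    # Rayleigh-quotient shift
            Q, R = np.linalg.qr(A - mu * np.eye(n))
            A = R @ Q + mu * np.eye(n)
            if abs(A[n-1, n-2]) < tol:          # last row decoupled: deflate
                break
        eigs.append(A[n-1, n-1])
        A = A[:n-1, :n-1]                       # continue on the smaller block
        n -= 1
    eigs.append(A[0, 0])
    return np.array(eigs)

# companion matrix of (x-10)(x-20)(x-4)(x-5) = x^4 - 39x^3 + 490x^2 - 2400x + 4000
A = np.polynomial.polynomial.polycompanion([4000., -2400., 490., -39., 1.])
print(np.sort(qr_shifted(A).real))
```

Compared with the unshifted loop, each eigenvalue is resolved and then split off, so no iterations are wasted reordering an already nearly diagonal matrix.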
I'd like to direct your attention to the np.diag function and to lists with constant elements. The companion matrix can then be constructed as

np.diag(S) - np.array(len(S)*[l])
Making these changes and always returning the full result gives, as the return value of eigensolve, the expected

[20. +0.j 10.000002+0.j 5.000003+0.j 4.000001+0.j]
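For reference, a sketch with all of these fixes applied: no absolute value, the denominator computed via the derivative of Q_S, the matrix built with np.diag, the full eigenvalue vector returned, and double precision instead of complex64 (building the coefficients with np.poly instead of sympy is my simplification):

```python
import numpy as np

def qr_algorithm(A, n_iterations):
    # plain, unshifted QR iteration; eigenvalues appear on the diagonal
    for _ in range(n_iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A).copy()

def gen_companion(P, S):
    # l[i] = P(s_i) / Q_S'(s_i) with Q_S(x) = prod_j (x - s_j); no abs()
    Q = np.poly(S)
    dQ = np.polyder(Q)
    l = np.polyval(P, S) / np.polyval(dQ, S)
    # diag(S) minus a matrix whose every row is l
    return np.diag(S) - np.array(len(S) * [l])

def eigensolve(P, n_iterations):
    # polycompanion wants the coefficients from lowest to highest power
    A = np.polynomial.polynomial.polycompanion(P[::-1]).astype(complex)
    S = qr_algorithm(A, n_iterations)
    for _ in range(n_iterations):
        S = qr_algorithm(gen_companion(P, S), n_iterations)
    return S

P = np.poly([10, 20, 4, 5])   # x^4 - 39x^3 + 490x^2 - 2400x + 4000
print(eigensolve(P, 10))
```

In double precision the residual errors are even smaller than the 1e-6 deviations visible in the complex64 output above.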