I'm trying to write a driver in C++ to calculate the eigenvalues of a non-symmetric, real-valued sparse matrix using the Fortran routines offered by ARPACK, but I am having a bit of trouble with the reverse communication approach.
Generally, I am trying to solve the standard eigenvalue problem:
A*v = lambda*v
and any interaction with the matrix A is done via a user-supplied function 'av':
av(n, workd[ipntr[0]], workd[ipntr[1]])
which multiplies the vector held in the array 'workd' beginning at location 'ipntr[0]' and inserts the result into the array 'workd' beginning at location 'ipntr[1]'. Examples of this approach are given in the manual at http://www.caam.rice.edu/software/ARPACK/ and also in the ARPACK/EXAMPLES/SIMPLE/dnsimp.f code.
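In C++ terms (this is just my reading of dnsimp.f, so treat it as an assumption), 'ipntr' holds 1-based Fortran offsets into 'workd', so the call above corresponds to something like:

    // ipntr[] contains 1-based Fortran offsets into workd, hence the "- 1"
    av(n, &workd[ipntr[0] - 1], &workd[ipntr[1] - 1]);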
What I would like to know is: how do I actually involve my matrix A? If it is never passed to the ARPACK routines, how can its action on the vectors they provide be computed?
In the example code dnsimp.f, the matrix A is applied inside the function 'av' itself, and is 'derived from the standard central difference discretisation of the 2 dimensional convection-diffusion operator'. However, I believe this is problem-specific? It also doesn't seem very useful to have to hard-code the construction of the matrix A into the function. I can't find much information on this in the manual either.
It doesn't seem to be too much of a problem, since 'av' is a user-defined function and I am able to just change its definition to include the matrix A as a parameter (see the sketch below). However, I would like to know how it is done properly, in case of any potential compatibility issues.
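For example, what I have in mind is something like the following sketch, where the CSR storage and the signature of 'av' are entirely my own (hypothetical), not anything prescribed by ARPACK:

    #include <vector>

    // Hypothetical CSR storage for my sparse matrix A (not part of ARPACK).
    struct CsrMatrix {
        int n;                        // dimension
        std::vector<int> row_ptr;     // size n + 1
        std::vector<int> col_ind;     // column index of each stored entry
        std::vector<double> val;      // value of each stored entry
    };

    // y = A * x, called where dnsimp.f calls 'av'.
    void av(const CsrMatrix& A, const double* x, double* y) {
        for (int i = 0; i < A.n; ++i) {
            double sum = 0.0;
            for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
                sum += A.val[k] * x[A.col_ind[k]];
            y[i] = sum;
        }
    }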
Thank you!
You don't have to supply the matrix to ARPACK.
All you have to do is multiply the matrix with the vectors that ARPACK hands back to you (hence, reverse communication) until the desired convergence is reached.
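Schematically, for the double-precision non-symmetric driver dnaupd, the loop looks like the sketch below. The extern "C" declaration assumes the usual trailing-underscore name mangling and omits any hidden string-length arguments your Fortran compiler may add, so check it against your toolchain; 'my_matvec' is a placeholder for whatever routine you use to form y = A*x:

    #include <vector>

    // Fortran routine from ARPACK; the exact binding depends on your
    // compiler's name mangling (trailing underscore, hidden string lengths).
    extern "C" void dnaupd_(int* ido, const char* bmat, int* n, const char* which,
                            int* nev, double* tol, double* resid, int* ncv,
                            double* v, int* ldv, int* iparam, int* ipntr,
                            double* workd, double* workl, int* lworkl, int* info);

    void my_matvec(int n, const double* x, double* y); // y = A*x, supplied by you

    void eigs(int n, int nev) {
        int ido = 0, ncv = 2 * nev + 1, ldv = n;
        int lworkl = 3 * ncv * ncv + 6 * ncv, info = 0;
        double tol = 0.0;                        // 0 => machine precision
        std::vector<double> resid(n), v(ldv * ncv), workd(3 * n), workl(lworkl);
        int iparam[11] = {0}, ipntr[14] = {0};
        iparam[0] = 1;                           // exact shifts
        iparam[2] = 300;                         // max Arnoldi iterations
        iparam[6] = 1;                           // mode 1: standard problem A*v = lambda*v

        do {
            dnaupd_(&ido, "I", &n, "LM", &nev, &tol, resid.data(), &ncv,
                    v.data(), &ldv, iparam, ipntr, workd.data(), workl.data(),
                    &lworkl, &info);
            if (ido == -1 || ido == 1)           // ARPACK asks you for y = A*x
                my_matvec(n, &workd[ipntr[0] - 1], &workd[ipntr[1] - 1]);
        } while (ido == -1 || ido == 1);

        // On success (info == 0), call dneupd_ to extract the eigenvalues
        // and, if desired, the eigenvectors.
    }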
For information on the algorithms, you should take a look at the users' guide, especially the chapter about the underlying algorithms.
Response to comment: The underlying algorithm is a form of Arnoldi iteration. The basic algorithm is shown on Wikipedia, and it makes clear that ARPACK never needs to access the matrix A itself, neither directly nor indirectly; it only needs the result of A applied to a vector, which you supply.
In particular, the algorithm starts with an arbitrary normalized vector q_1. This vector is returned to the user. The user multiplies this vector by the matrix A using their favourite routine (usually some efficient sparse matrix-vector multiplication) and returns the result to the Arnoldi iteration, which uses it to build up part of the Hessenberg matrix H (whose eigenvalues typically converge to the extreme eigenvalues of A) and the next vector q_2. This has to be iterated until your results are converged.
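To make the division of labour concrete, here is a rough sketch (my own illustration, not ARPACK's actual code) of one Arnoldi step: you supply w = A*q_k, and the iteration itself only needs inner products and vector updates to extend H and produce the next basis vector:

    #include <cmath>
    #include <vector>

    // One Arnoldi step: given the orthonormal basis q[0..k] and w = A*q[k]
    // (computed by the user), append one column to the Hessenberg matrix H
    // and produce the next basis vector q[k+1]. A itself never appears here.
    void arnoldi_step(std::vector<std::vector<double>>& q,
                      std::vector<std::vector<double>>& H,   // stored column-wise
                      std::vector<double> w, int k) {
        const int n = static_cast<int>(w.size());
        H.emplace_back(k + 2, 0.0);              // new column of H
        for (int j = 0; j <= k; ++j) {           // modified Gram-Schmidt
            double h = 0.0;
            for (int i = 0; i < n; ++i) h += q[j][i] * w[i];  // projection onto q[j]
            H[k][j] = h;
            for (int i = 0; i < n; ++i) w[i] -= h * q[j][i];
        }
        double norm = 0.0;
        for (int i = 0; i < n; ++i) norm += w[i] * w[i];
        norm = std::sqrt(norm);
        H[k][k + 1] = norm;                      // subdiagonal entry of H
        // If norm is (near) zero, an invariant subspace has been found and
        // the iteration stops; otherwise normalize to get the next vector.
        for (int i = 0; i < n; ++i) w[i] /= norm;
        q.push_back(w);                          // q[k+1]
    }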