I'm currently doing FEM calculations in Java on very large square matrices, up to 1M x 1M. These are very sparse, though, with under 10M non-zero entries. I'm using ojAlgo with the `SparseStore` matrix implementation and I'm really happy with it so far. The problem is that when I solve the linear equation system at the end using `LU.R064.make().solve(A, b)`, with `SparseStore` A and b, the implementation of this solver automatically converts the sparse matrices into dense ones, leading to huge memory costs and runtimes. Is there a more efficient way to use ojAlgo, or another library?
There are currently no sparse matrix decompositions in ojAlgo.
There are some iterative equation system solvers that work with sparse "equations". The selection is somewhat limited – Gauss-Seidel and conjugate gradient – and there are preconditions for when each can be used (conjugate gradient, for instance, needs a symmetric positive-definite system, which FEM stiffness matrices typically are).
Have a look at this interface and its implementations:

`org.ojalgo.matrix.task.iterative.IterativeSolverTask.SparseDelegate`
Maybe that `SparseStore` can be replaced by a `List<Equation>`? `Equation`s can be sparse, and you can feed that list directly to the solver.
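
Roughly like this – a minimal sketch of the idea rather than verified code; the exact names (`Equation.sparse`, `setRHS`, the `resolve` method on `ConjugateGradientSolver`, and `Primitive64Store` versus the newer `R064Store`) are from memory and should be checked against the ojAlgo version you're on:

```java
import java.util.ArrayList;
import java.util.List;

import org.ojalgo.equation.Equation;
import org.ojalgo.matrix.store.PhysicalStore;
import org.ojalgo.matrix.store.Primitive64Store;
import org.ojalgo.matrix.task.iterative.ConjugateGradientSolver;

public class SparseIterativeSketch {

    public static void main(String[] args) {

        int n = 5; // stands in for the 1M-row FEM system

        // One sparse Equation per matrix row: a tridiagonal SPD test system, RHS = 1
        List<Equation> equations = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            Equation row = Equation.sparse(i, n); // pivot/row index i, n columns (assumed factory name)
            row.set(i, 4.0);
            if (i > 0) {
                row.set(i - 1, -1.0);
            }
            if (i < n - 1) {
                row.set(i + 1, -1.0);
            }
            row.setRHS(1.0); // right-hand side entry for this row (assumed setter name)
            equations.add(row);
        }

        // Dense column vector that the solver fills with the solution
        // (Primitive64Store may be named R064Store in newer ojAlgo versions)
        PhysicalStore<Double> solution = Primitive64Store.FACTORY.make(n, 1);

        // ConjugateGradientSolver implements IterativeSolverTask.SparseDelegate,
        // so it consumes the equation list directly without densifying anything
        ConjugateGradientSolver solver = new ConjugateGradientSolver();
        solver.resolve(equations, solution); // assumed SparseDelegate signature

        System.out.println(solution);
    }
}
```

`GaussSeidelSolver` lives in the same package and should be usable the same way, when its convergence conditions are met.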
Also note that in addition to `SparseStore` there are also `RowsSupplier` and `ColumnsSupplier` that implement `MatrixStore`. Their rows/columns can be wrapped (no copying) to create `Equation`s.
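
Again only a sketch from memory rather than verified code – `makeRowsSupplier`, `addRow` and `Equation.wrap` are my best recollection of the relevant method names, so double-check them:

```java
import java.util.ArrayList;
import java.util.List;

import org.ojalgo.array.SparseArray;
import org.ojalgo.equation.Equation;
import org.ojalgo.matrix.store.Primitive64Store;
import org.ojalgo.matrix.store.RowsSupplier;

public class RowsSupplierSketch {

    public static void main(String[] args) {

        int n = 5;

        // Each row of a RowsSupplier is stored as a sparse array, and the supplier
        // itself implements MatrixStore, so it can serve as the assembled FEM matrix
        RowsSupplier<Double> matrix = Primitive64Store.FACTORY.makeRowsSupplier(n);

        List<Equation> equations = new ArrayList<>();

        for (int i = 0; i < n; i++) {
            SparseArray<Double> row = matrix.addRow(); // assumed to return the newly added sparse row
            row.set(i, 4.0);
            if (i > 0) {
                row.set(i - 1, -1.0);
            }
            if (i < n - 1) {
                row.set(i + 1, -1.0);
            }
            // Wrap the existing row (no copying) together with its RHS value
            equations.add(Equation.wrap(row, i, 1.0)); // assumed factory: wrap(body, pivot, rhs)
        }

        // 'equations' can now be fed to ConjugateGradientSolver.resolve(...) as above,
        // while 'matrix' remains usable wherever a MatrixStore is expected
    }
}
```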