I am trying to calculate the eigenspectrum of a number of large matrices using ScaLAPACK, but rather than distributing each matrix across all 32 processes, I'd rather distribute each matrix across 4 processes and calculate 8 matrices in parallel. I know how to subdivide an MPI grid using MPI_Comm_split, but it seems ScaLAPACK doesn't take custom communicators. Instead it appears to use a BLACS grid rooted in PVM.
How can I implement this subdivision in Scalapack?
This is done with BLACS and its grid setup. The relevant grid-creation routines are BLACS_GRIDINIT and BLACS_GRIDMAP.
The documentation for these routines states:
These routines take the available processes, and assign, or map, them into a BLACS process grid.
Each BLACS grid is contained in a context (its own message passing universe), so that it does not interfere with distributed operations which occur within other grids/contexts.
These grid creation routines may be called repeatedly in order to define additional contexts/grids.
This means that you can create 8 different grids and pass each ICONTXT to the ScaLAPACK routines for its matrix.
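As a sketch of that idea (assuming exactly 32 MPI processes, 2x2 grids, and consecutive ranks per group; the group layout is my choice, not prescribed by BLACS), each process can compute which of the 8 groups it belongs to and build its private grid with BLACS_GRIDMAP:

```fortran
      PROGRAM SPLITGRIDS
*     Sketch: split 32 processes into 8 disjoint 2x2 BLACS grids.
      INTEGER IAM, NPROCS, ICONTXT
      INTEGER USERMAP(2,2), GROUP, I, J
      CALL BLACS_PINFO( IAM, NPROCS )
*     Obtain the default system context.
      CALL BLACS_GET( 0, 0, ICONTXT )
*     Group 0 owns ranks 0-3, group 1 owns ranks 4-7, and so on.
      GROUP = IAM / 4
      DO 10 J = 1, 2
         DO 10 I = 1, 2
            USERMAP(I,J) = GROUP*4 + (J-1)*2 + (I-1)
   10 CONTINUE
*     Each group of 4 calls BLACS_GRIDMAP with its own rank map;
*     on exit ICONTXT identifies this process's private 2x2 grid.
      CALL BLACS_GRIDMAP( ICONTXT, USERMAP, 2, 2, 2 )
*     Pass ICONTXT to DESCINIT and the ScaLAPACK eigensolver
*     for this group's matrix, then release the grid.
      CALL BLACS_GRIDEXIT( ICONTXT )
      CALL BLACS_EXIT( 0 )
      END
```

Because each context is its own message-passing universe, the 8 eigensolves proceed independently and concurrently.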
Both routines take ICONTXT as an input/output argument:
ICONTXT
(input/output) INTEGER
On input, an integer handle indicating the system context to be used in creating the BLACS context. The user may obtain a default system context via a call to BLACS_GET. On output, the integer handle to the created BLACS context.
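For example, the usual pattern with BLACS_GRIDINIT (a sketch; the 'Row' ordering and the 4x8 shape are placeholders):

```fortran
*     ICONTXT is IN/OUT: on entry it holds the system context
*     obtained from BLACS_GET, on exit the handle of the new grid.
      CALL BLACS_GET( 0, 0, ICONTXT )
      CALL BLACS_GRIDINIT( ICONTXT, 'Row', 4, 8 )
```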
You can create and use such contexts in the same manner, even recursively.