Tags: c++, parallel-processing, mpi, message-passing

Which sections of MPI code are copied and which are shared?


Consider the following piece of code:

#include <mpi.h>

// Section 1

int main()
{
        // Section 2

        MPI_Init(NULL, NULL);
        int world_size = -1; 
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
        int rank = -1; 
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Section 3

        MPI_Finalize();

        // Section 4

        return 0;
}

When I compile and run this piece of code, which of the four marked sections would be shared across all workers and which sections would be copied? Moreover, at which line would the workers be spawned?

I want to know this because I have a data structure that takes different values on each execution, and I want to give all workers read access to it. If a certain section of this code is copied and the workers are spawned after it, then I can declare the structure in that section. I know that I could also use message-passing calls for this, but the elements of the structure are objects of a custom class, so I cannot send them with the standard MPI datatypes. What would be the best way to implement this?


Solution

  • You're thinking about it the wrong way. MPI uses processes, so there are no workers to spawn partway through your program: if you run 20-way parallel, then 20 processes are started and each of them executes your whole program from the first line to the last. In other words, every one of the four marked sections runs in every process, and none of them is shared.

    Also: "I want to give all workers read access to this structure" That is not possible. Because they are processes, the "workers" can only have a complete copy of the data structure. But there is no shared memory that you "give access" to.

    In fact, the right way to write MPI is to give each process a unique subset of the data; that is why it is called "distributed memory". If every rank really does need the whole structure, build it on one rank and broadcast a copy to all the others, as sketched below.
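
    Here is a minimal sketch of that broadcast, assuming a hypothetical, trivially copyable class Item (plain data members, no pointers) so its raw bytes can be sent directly. If your class is not trivially copyable, you would first have to serialize it into a flat buffer yourself, describe its layout with MPI_Type_create_struct, or use a library such as Boost.MPI that handles serialization for you:

    #include <mpi.h>
    #include <vector>

    // Hypothetical element type; assumed trivially copyable so its raw bytes can be broadcast.
    struct Item {
        int    id;
        double value;
    };

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank = -1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        std::vector<Item> items;
        if (rank == 0) {
            // Only the root rank builds the structure; in your program this is
            // where the values that differ between executions would be computed.
            items = { {1, 3.14}, {2, 2.72} };
        }

        // Step 1: tell every rank how many elements to expect.
        int count = static_cast<int>(items.size());
        MPI_Bcast(&count, 1, MPI_INT, 0, MPI_COMM_WORLD);

        // Step 2: broadcast the raw bytes of the elements.
        items.resize(count);
        MPI_Bcast(items.data(), count * static_cast<int>(sizeof(Item)), MPI_BYTE,
                  0, MPI_COMM_WORLD);

        // Every rank now holds its own private, identical copy of 'items'.

        MPI_Finalize();
        return 0;
    }

    If you do not actually need the whole structure on every rank, replace the second MPI_Bcast with MPI_Scatterv so that each process only receives the subset it will work on.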