Tags: c++, openmp

OpenMP behavior of for outside of parallel


I have a function that uses an OpenMP-parallelized for loop in its implementation. It should be possible to turn the parallelization on and off at runtime. Currently it looks like this:

void iterate(bool parallelize) {
    #pragma omp parallel for if(parallelize)
    for(int i = 0; i < 1000; ++i) f(i);
}

From testing, it seems that putting the for outside of a parallel construct also disables the parallelization:

void iterate() {
    #pragma omp for
    for(int i = 0; i < 1000; ++i) f(i);
}

int main() {
    #pragma omp parallel
    {
        iterate(); // parallel
    }

    iterate(); // not parallel
}

Is this correct usage of OpenMP, or could it cause problems on other compilers/environments?

Edit: The question is about whether omp for can legally be placed outside of omp parallel, as an alternative to conditional parallelization via the OpenMP if() clause.


Solution

  • Because of the implicit parallel region surrounding the whole program, it is fine to call iterate from main, i.e., outside any explicit OpenMP region; the orphaned for binds to that implicit, inactive region and simply runs on its single thread. However, it would be an error to call the function from a task region, because a worksharing region may not be closely nested inside an explicit task region:

    void iterate() {
        #pragma omp for
        for(int i = 0; i < 1000; ++i) f(i);
    }
    
    void foo() {
        #pragma omp task
        iterate(); // error: worksharing closely nested inside a task region
    }
    

    To solve the issue, we can use the metadirective directive introduced with OpenMP 5.0, which allows you to adapt the directive to the calling context (the otherwise clause spelling below is from OpenMP 5.1; 5.0 called it default):

    void iterate() {
        #pragma omp metadirective when(construct={parallel}: for) otherwise(nothing)
        for(int i = 0; i < 1000; ++i) f(i);
    }
    
    void foo() {
        #pragma omp task
        iterate();
    }
    

    The metadirective can also be used with conditions that you would otherwise put into if clauses, but with the benefit of completely eliminating the region's overhead in the false case: https://github.com/OpenMP/Examples/blob/main/program_control/sources/metadirective.5.cpp