Basically I have a parallel loop with a couple of inherently serial variables that I pre-calculate. However, I want to be able to kick off a given iteration of the for loop as soon as its serial variables are calculated. Essentially I want something like this:
int num_completed = 0;
int serial_values[100];

#pragma omp parallel
{
    // Thread 0 computes the serial values while the other threads work.
    if (omp_get_thread_num() == 0) {
        for (int i = 0; i < 100; i++) {
            serial_values[i] = i;   // stand-in for the real serial computation
            num_completed++;
        }
    }

    #pragma omp for
    for (int j = 0; j < 100; j++) {
        // Spin until the serial value for iteration j is ready.
        while (true) {
            if (j < num_completed)
                break;
        }
        int serial = serial_values[j];
        // ... do the parallel loop work for iteration j ...
    }
}
This does work, but the omp for loop still assigns iterations to thread 0, even though it's tied up doing the serial calculations. Mostly that means it's slower, because thread 0 has to compute its share of the parallel loop on top of the serial variables.
Also, I know the spin lock isn't great, but I couldn't think of anything better off the top of my head; if you have suggestions, I'd welcome those too.
I've already tried using #pragma omp single nowait, and it behaves the same way. I've also tried #pragma omp sections, but each section is meant to be executed by a single thread, not in parallel.
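For reference, the single nowait attempt was presumably along these lines, with the explicit thread check replaced by a single construct (a reconstruction, so details may differ):

int num_completed = 0;
int serial_values[100];

#pragma omp parallel
{
    // One thread fills in the serial values; `nowait` lets the rest of the
    // team skip the implicit barrier and go straight to the loop below.
    #pragma omp single nowait
    {
        for (int i = 0; i < 100; i++) {
            serial_values[i] = i;
            num_completed++;
        }
    }

    #pragma omp for
    for (int j = 0; j < 100; j++) {
        while (j >= num_completed) { /* spin until value j is ready */ }
        int serial = serial_values[j];
        // ... do the parallel loop work for iteration j ...
    }
}

The omp for still divides its iterations among every thread in the team, including the one inside the single, which is why it behaves just like the explicit thread-0 version.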
You are already using a spin lock to make the threads in the parallel loop wait for the serial computation to proceed far enough. One way to move forward would be to leverage that, plus OpenMP scheduling parameters, to make the threads in the parallel loop perform the serial computation. That might look something like this:
int num_completed = 0;
int serial_values[100];

#pragma omp parallel for schedule(monotonic:static, 1)
for (int j = 0; j < 100; j++) {
    int serial;

    // Spin until it is iteration j's turn, then perform its serial step.
    while (1) {
        if (j == num_completed) {
            // serial computation:
            serial = j;
            serial_values[j] = serial;
            num_completed += 1;
            break;
        }
    }

    // ... do the parallel loop work for iteration j, using `serial` ...
}
The schedule(monotonic:static, 1) is relatively important here, because it ensures that the iterations of the parallel loop are split among the threads in single-iteration chunks, and that each thread executes its assigned chunks in logical iteration order. You could also use schedule(monotonic:dynamic) or, equivalently, schedule(monotonic:dynamic, 1). With chunks larger than one iteration, you could have threads delayed unnecessarily long (and that may be an issue for your original code). With chunks executed out of order, you would likely get deadlocks on those spin locks.
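For illustration, here is a minimal sketch of that dynamic-schedule variant. The #pragma omp atomic seq_cst reads and writes are an addition in this sketch rather than something the code above relies on: strictly speaking, the plain accesses to num_completed in these spin loops are a data race, and the atomics make the hand-off from one iteration's serial step to the next well defined.

int num_completed = 0;
int serial_values[100];

#pragma omp parallel for schedule(monotonic:dynamic, 1)
for (int j = 0; j < 100; j++) {
    // Spin until it is iteration j's turn to run its serial step.
    for (;;) {
        int done;
        #pragma omp atomic read seq_cst
        done = num_completed;          // race-free read of the progress counter
        if (done == j)
            break;
    }

    // Serial computation for iteration j (stand-in for the real thing).
    int serial = j;
    serial_values[j] = serial;

    // Publish progress so the thread holding iteration j + 1 can proceed.
    #pragma omp atomic write seq_cst
    num_completed = j + 1;

    // ... do the parallel loop work for iteration j, using `serial` ...
}

The same atomics could be dropped into the static-schedule version above; seq_cst is the simplest ordering that guarantees everything done by iteration j's serial step is visible before iteration j + 1 starts.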
"I've also tried #pragma omp sections, but each section is meant to be executed by a single thread, not in parallel."
Yes, but you can put a nested parallel region inside one of the sections. I think this would probably be inferior to the approach described above, but it could yield a structure more similar to your original code. Something like this, for example:
int num_completed = 0;
int serial_values[100];

#pragma omp parallel num_threads(2)
{
    #pragma omp sections
    {
        // One section produces the serial values...
        #pragma omp section
        {
            for (int i = 0; i < 100; i++) {
                serial_values[i] = i;
                num_completed++;
            }
        }

        // ...while the other runs the parallel loop in a nested inner region.
        #pragma omp section
        {
            #pragma omp parallel for schedule(monotonic:static, 1)
            for (int j = 0; j < 100; j++) {
                // Spin until the serial value for iteration j is ready.
                while (true) {
                    if (j < num_completed) break;
                }
                int serial = serial_values[j];
                // ... do the parallel loop work for iteration j ...
            }
        }
    }
}
You can put a num_threads() clause on that parallel for loop too, if you like.
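One caveat with the nested version: the inner parallel for only helps if nested parallelism is enabled, and many runtimes serialize nested regions by default. A sketch of how the num_threads() clause and that setting might fit together (the omp_set_max_active_levels() call and the particular thread counts are illustrative choices, not requirements):

#include <omp.h>

void run_nested(void)   // hypothetical wrapper, just to keep the fragment self-contained
{
    int num_completed = 0;
    int serial_values[100];

    // Allow two active levels of parallelism so the inner region is not
    // serialized (equivalently, set OMP_MAX_ACTIVE_LEVELS=2 in the environment).
    omp_set_max_active_levels(2);

    #pragma omp parallel num_threads(2)
    {
        #pragma omp sections
        {
            #pragma omp section
            {
                // Serial producer, as in the code above.
                for (int i = 0; i < 100; i++) {
                    serial_values[i] = i;
                    num_completed++;
                }
            }

            #pragma omp section
            {
                // Illustrative choice: 7 consumer threads alongside the 1 producer.
                #pragma omp parallel for num_threads(7) schedule(monotonic:static, 1)
                for (int j = 0; j < 100; j++) {
                    while (j >= num_completed) { /* spin until value j is ready */ }
                    int serial = serial_values[j];
                    // ... do the parallel loop work for iteration j, using `serial` ...
                }
            }
        }
    }
}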