I have a quick theoretical question. Can Intel oneTBB (Threading Building Blocks) co-exist with OpenMP (Open Multi-Processing) in one code base? Both are parallel runtimes for shared-memory architectures. I have a parallel code that uses oneTBB, and I introduced OpenMP parallel for pragmas into it. The code compiles and runs, but I see no performance benefit from using OpenMP, so I'm starting to think these runtimes are mutually exclusive.
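To make that concrete, here is roughly the shape of the mixing I mean (a stripped-down illustration, not my actual code; the function and variable names are placeholders):

#include <vector>
#include <oneapi/tbb/parallel_for.h>

void scale_with_tbb(std::vector<double>& v) {
    // Existing oneTBB parallelism in the code base.
    oneapi::tbb::parallel_for(std::size_t(0), v.size(), [&](std::size_t i) {
        v[i] *= 2.0;
    });
}

void scale_with_omp(std::vector<double>& v) {
    // Newly added OpenMP parallelism in the same binary.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(v.size()); ++i) {
        v[i] *= 2.0;
    }
}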
Update: Please note, I don't want to choose one of the runtimes, as suggested in the question C++ Parallelization Libraries: OpenMP vs. Thread Building Blocks. I want to use both of them simultaneously. To do that, I would like to find out whether this is possible by design at all, or whether these libraries are mutually exclusive.
If the TBB and OpenMP phases alternate, then a pattern like the following might turn out to be the solution (from the OpenMP side):
#include <omp.h>

void omp_parallel() {
    // do work with OpenMP directives
}

void tbb_parallel() {
    // do work with TBB templates
}

void alternating_omp_and_tbb() {
    for (int i = 1; i < 10; i++) {
        omp_parallel();
        // Soft-pause the OpenMP runtime so its worker threads stop
        // spinning and do not compete with TBB's workers.
        omp_pause_resource_all(omp_pause_soft);
        tbb_parallel();
    }
}
Please have a look at omp_pause_resource and omp_pause_resource_all for what's possible from the OpenMP side of things.
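For completeness, omp_pause_resource releases resources for a single device rather than all of them; a minimal sketch (the soft/hard choice and the device are just for illustration):

#include <omp.h>

void pause_host_runtime() {
    // Release OpenMP resources on the initial (host) device only.
    omp_pause_resource(omp_pause_soft, omp_get_initial_device());
}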
Note that if you do this, the first OpenMP directive encountered after each pause will incur overhead, as the OpenMP runtime has to spin up and initialize its worker threads again.
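If you want to see that overhead, a rough sketch along these lines (array size and iteration counts are arbitrary) compares the first parallel region after a pause with the next one using omp_get_wtime:

#include <omp.h>
#include <cstdio>

int main() {
    double a[1000];
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++) a[i] = i;       // warm up the runtime

    omp_pause_resource_all(omp_pause_soft);        // release OpenMP threads

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++) a[i] += 1.0;    // pays thread re-creation cost
    double t1 = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++) a[i] += 1.0;    // threads are already up again
    double t2 = omp_get_wtime();

    std::printf("first region after pause: %g s, next region: %g s (a[0]=%g)\n",
                t1 - t0, t2 - t1, a[0]);
    return 0;
}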
I'm not a big TBB user anymore, but oneTBB also seems to offer a similar shutdown mechanism. See https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Initializing_and_Terminating_the_Library.html.
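Judging from that page, the shutdown path goes through tbb::task_scheduler_handle and tbb::finalize. A rough sketch follows, but the construction syntax changed between oneTBB releases (older versions used task_scheduler_handle::get()), so treat the details as something to verify against your version:

#include <oneapi/tbb/global_control.h>
#include <oneapi/tbb/task_arena.h>
#include <oneapi/tbb/parallel_for.h>

void tbb_then_shutdown() {
    // Handle that allows waiting for TBB worker threads to terminate.
    oneapi::tbb::task_scheduler_handle handle{oneapi::tbb::attach{}};

    oneapi::tbb::parallel_for(0, 1000, [](int) { /* do work with TBB */ });

    // Blocking termination: wait until TBB's worker threads have shut down.
    oneapi::tbb::finalize(handle);
}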