Tags: mutex, vulkan

How to realize random access to the same GPU-based resource in Vulkan (basically: how to create a "mutex")?


I am playing with calculating pictures of fractals on the GPU using Vulkan, "utilizing the power of compute shaders". There I ran into one problem and could not really find a satisfying answer: the program calculates fractals (all data for that is kept on the GPU) and displays them. In essence there are two functions:

  • Calculate(): performs the (stepwise) fractal calculation on the GPU
  • Draw(): displays whatever has been calculated so far in the output window

Both methods are triggered on the CPU but then run to completion on the GPU (as one would probably expect anyway). To "stay compliant with standard Windows behaviour" I also tied Draw() to the actual paint handler of the output window, so that whatever has been calculated so far is also shown on window resizes, moves, when the window is obscured, etc.

Both Calculate() and Draw() require access to the same data. I want to allow Calculate() and Draw() to occur in random order (which is even somewhat required here, as the window redraw can kick in "at any time"), so I need to avoid data races between Calculate() and Draw() when they access that data.

For clarification: Calculate() only performs a single step of the calculation, not the whole thing (as that would take too long). An arbitrary sequence of Calculate() and Draw() calls therefore has to work, so that intermediate results can be displayed.

One solution would be to do that synchronisation with fences on the CPU side, roughly as in the sketch below, but that felt unnatural to me, as it would introduce unneeded stalls into the flow:
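
A minimal sketch of what I mean (assuming a vk::Device named Device, a vk::Queue named Queue and an already recorded command buffer C; error handling omitted):

vk::Fence Fence = Device.createFence({});

vk::SubmitInfo SubmitInfo;
SubmitInfo.setCommandBuffers(C);
Queue.submit(SubmitInfo, Fence);

// The host stalls here until the GPU has finished the submitted work;
// only then may the next Calculate()/Draw() be submitted.
auto Result = Device.waitForFences(Fence, VK_TRUE, UINT64_MAX);
(void)Result;   // error handling omitted
Device.resetFences(Fence);

So I tried to solve it "on the GPU side" via semaphores instead.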

But that turned out to be (at least for me) surprisingly difficult: wading through the Internet, I found no good way to utilise binary or timeline semaphores for this use case.

The core issue: everything I found assumed a predefined order of submissions, like one Calculate(), then one Draw(), and so on. But with the requirement that both calls can come in random order, I found no clear way to easily and successfully prevent the data race when accessing that data from both calls.

Eventually I figured out that what works is to use the same binary semaphore both as a wait and as a signal semaphore, basically making the submitted work wait for the semaphore to be signalled before accessing the data (unsignalling it by this) and then signalling it again once finished:

vk::PipelineStageFlags W = vk::PipelineStageFlagBits::eTopOfPipe;
vk::SubmitInfo SubmitInfo;
SubmitInfo
   .setWaitDstStageMask(W)
   .setWaitSemaphores(*Fractal.Output.Mutex)     // wait on the "mutex" semaphore (this unsignals it)
   .setSignalSemaphores(*Fractal.Output.Mutex)   // and signal the very same semaphore once finished
   .setCommandBuffers(C);

vk().Queue.submit(SubmitInfo, {});

This has the somewhat unnatural consequence that I must bring the used semaphore into a signalled state before using it for the first Calculate() or Draw().
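
One way to do that is an "empty" submission that does nothing but signal the semaphore (again only a sketch, using the same queue and semaphore objects as above):

vk::SubmitInfo InitSubmit;
InitSubmit.setSignalSemaphores(*Fractal.Output.Mutex);   // no waits, no command buffers

vk().Queue.submit(InitSubmit, {});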

Overall I feel like I am using it "exactly the other way around than usual".

Also, I found nothing saying that it is forbidden to use the same semaphore both as the wait and as the signal semaphore of a submission, but I also found nowhere an explicit statement that it is allowed.

As far as I could test, this solution seems to work. But since I would think that a use case like mine is not too uncommon, I still have a quite awkward feeling: my solution looks "overly complicated" and more like a hack, which might not even work everywhere.

Does anyone know a better way to do this?


Solution

  • I think this can be solved quite well with a single timeline semaphore (a sketch of the idea follows below):

    * You didn't say anything about synchronizing Calculate() commands with each other, but you can use the same timeline value for that - instead of having them wait for the value of C, just have them wait for the value of A, ensuring that all previous commands are done (and/or batch them in a single submission and use pipeline barriers if possible).
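
    As a minimal sketch of the general idea (my own simplified variant, not the exact A/B/C value scheme; it assumes a vk::Device named Device, a vk::Queue named Queue, a recorded command buffer C, and that the Vulkan 1.2 timelineSemaphore feature is enabled): every submission, whether it comes from Calculate() or Draw(), waits for the current timeline value and signals the next one, so the submissions serialise in whatever order they arrive.

    // Create the timeline semaphore once, with initial value 0. The very first
    // submission waits for value 0, which is already reached, so no
    // pre-signalling is needed.
    vk::SemaphoreTypeCreateInfo TimelineType{vk::SemaphoreType::eTimeline, 0};
    vk::SemaphoreCreateInfo SemaphoreInfo;
    SemaphoreInfo.setPNext(&TimelineType);
    vk::Semaphore Mutex = Device.createSemaphore(SemaphoreInfo);

    uint64_t Counter = 0;   // host-side: value the timeline reaches once all submitted work is done

    // For every Calculate() or Draw() submission:
    uint64_t WaitValue   = Counter;     // wait until all previously submitted work has finished
    uint64_t SignalValue = ++Counter;   // and advance the timeline when this submission finishes

    vk::TimelineSemaphoreSubmitInfo TimelineInfo;
    TimelineInfo
       .setWaitSemaphoreValues(WaitValue)
       .setSignalSemaphoreValues(SignalValue);

    vk::PipelineStageFlags W = vk::PipelineStageFlagBits::eTopOfPipe;
    vk::SubmitInfo SubmitInfo;
    SubmitInfo
       .setPNext(&TimelineInfo)
       .setWaitDstStageMask(W)
       .setWaitSemaphores(Mutex)      // same timeline semaphore as wait...
       .setSignalSemaphores(Mutex)    // ...and as signal, just with different values
       .setCommandBuffers(C);

    Queue.submit(SubmitInfo, {});

    The host can also wait for or query a specific counter value (vkWaitSemaphores / vkGetSemaphoreCounterValue) if it ever needs to know that a particular step has completed.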