I want to implement functionality like what is described in Introducing Low-Level GPU Virtual Memory Management (NVIDIA Developer blog) in OpenCL. Specifically, I want to be able to reserve a large virtual address range and map only small portions of it at a time. I looked into the OpenCL SVM capabilities and am not sure whether they can replicate this functionality.
I have existing code in CUDA that works, but I believe I need to port it to OpenCL in order to use MGPUSim for accurate multi-device simulation.
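For concreteness, the pattern I'm trying to reproduce looks roughly like this, following the driver API calls from that blog post (a condensed sketch, not my actual code; device 0, the 64-chunk range, and the omission of error checking are just for illustration):

```c
#include <cuda.h>

int main(void) {
    cuInit(0);
    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* Describe physical allocations residing on device 0. */
    CUmemAllocationProp prop = {0};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;

    size_t chunk;
    cuMemGetAllocationGranularity(&chunk, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);

    /* Reserve a large virtual address range (64 chunks here)... */
    size_t va_size = 64 * chunk;
    CUdeviceptr va;
    cuMemAddressReserve(&va, va_size, 0, 0, 0);

    /* ...but back only the first chunk with physical memory. */
    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, chunk, &prop, 0);
    cuMemMap(va, chunk, 0, handle, 0);

    /* Make the mapped sub-range readable/writable from device 0. */
    CUmemAccessDesc acc = {0};
    acc.location = prop.location;
    acc.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(va, chunk, &acc, 1);

    /* ... use [va, va + chunk) like any device pointer; map more chunks later ... */

    cuMemUnmap(va, chunk);
    cuMemRelease(handle);
    cuMemAddressFree(va, va_size);
    cuCtxDestroy(ctx);
    return 0;
}
```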
tl;dr: No, there is not.
If you look at either the OpenCL API specification or the registered OpenCL extensions, you will not find any such functionality, AFAICT. In OpenCL, interaction with memory is via buffers, not pointers/addresses - except that the kernel code eventually gets a pointer. To the OpenCL APIs on the host, buffers are opaque and somewhat complex: a single buffer may actually involve one copy on the host and one on the device.
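To make this concrete, here is a minimal sketch of the two allocation styles OpenCL does offer - opaque buffers, and (on OpenCL 2.0+, optional in 3.0) coarse-grained SVM. In both cases, allocation and physical backing happen in a single step; there is no way to reserve an address range now and commit memory to it later. (Error checking is omitted, and the device selection is just for illustration.)

```c
#define CL_TARGET_OPENCL_VERSION 300
#include <CL/cl.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    /* A buffer is an opaque cl_mem handle: the host never sees a device
       address, so there is no address range to reserve or remap. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 1 << 20, NULL, NULL);

    /* Coarse-grained SVM does yield a pointer, but allocation and
       physical backing are one step - no reserve-now, commit-later split. */
    void *svm = clSVMAlloc(ctx, CL_MEM_READ_WRITE, 1 << 20, 0);

    clSVMFree(ctx, svm);
    clReleaseMemObject(buf);
    clReleaseContext(ctx);
    return 0;
}
```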
Many of the capabilities CUDA has accumulated over the years have never made it into OpenCL's buffer standardization. Moreover, NVIDIA has never chosen to ensure CUDA-OpenCL interoperability: you can't "wrap" a mapped/allocated CUDA address range with an OpenCL buffer, nor "unwrap" an OpenCL buffer to get an address you can work with using CUDA. NVIDIA has also not chosen to implement OpenCL extensions - as it well could - to expose a fuller set of its GPU features through OpenCL.
This state of affairs is quite problematic. Personally, I find that NVIDIA - with this behavior and with other actions and failures to act - has been hostile and unfaithful to OpenCL as a project, and to the Khronos consortium of which it is a founding member. And while other consortium members have not exhibited full fidelity either, NVIDIA has invested the most in its GPGPU ecosystem, so it bears extra responsibility for the lack of CUDA-OpenCL interoperability.
Final note: This answer regards OpenCL 3.0 and earlier. Future OpenCL revisions could always, in theory, alter/expand the memory model; or NVIDIA could implement some extensions or CUDA-to-OpenCL interop API. So, if you're reading this after 2023, better double-check.