opencl, gpgpu, multiple-gpu

Any new ideas on using OpenCL with multiple GPUs?


My question is:

Has there been any new advancement (or perhaps a tool/library) for using OpenCL with multiple GPUs? I understand that someone who wants to write OpenCL code targeting multiple GPUs can do so, but I have been told that the way the communication between the devices has to be arranged is a little "primitive". What I want to know is whether there is something out there that puts a layer of abstraction between the programmer and all that arrangement of communication between the GPUs.

I am working on stochastic simulations with pretty big lattices, and I would like to be able to split them across different GPUs, each of which does part of the computation and communicates with the others when necessary. Writing this efficiently is difficult enough, so if I could avoid the low-level work of doing it the standard way in OpenCL, it would be a big help.
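For concreteness, this is roughly the kind of hand-written device management I mean: one context, one command queue per GPU, and explicit buffer copies for the halo exchange every step. The kernel name `update_lattice`, the lattice size, the toy update rule, and the one-cell halo are just placeholders, and error checking and cleanup are omitted.

```c
/* Sketch: splitting a 1-D lattice across two GPUs by hand (OpenCL 1.x). */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

#define N    (1 << 20)   /* total lattice sites (illustrative)      */
#define HALO 1           /* ghost cells exchanged per step          */
#define HALF (N / 2)     /* sites owned by each device              */

int main(void)
{
    cl_platform_id platform;
    cl_device_id   dev[2];
    cl_uint        ndev;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, dev, &ndev);
    if (ndev < 2) { fprintf(stderr, "need two GPUs\n"); return 1; }

    /* One context shared by both devices, but one queue per device. */
    cl_context ctx = clCreateContext(NULL, 2, dev, NULL, NULL, NULL);
    cl_command_queue q[2];
    q[0] = clCreateCommandQueue(ctx, dev[0], 0, NULL);
    q[1] = clCreateCommandQueue(ctx, dev[1], 0, NULL);

    /* Each device owns half the lattice plus one ghost cell at the end. */
    float *init = calloc(HALF + HALO, sizeof(float));
    cl_mem half0 = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                  (HALF + HALO) * sizeof(float), init, NULL);
    cl_mem half1 = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                  (HALF + HALO) * sizeof(float), init, NULL);

    /* Toy update kernel: each site averages itself with its right neighbour. */
    const char *src =
        "__kernel void update_lattice(__global float *cells) {"
        "    size_t i = get_global_id(0);"
        "    cells[i] = 0.5f * (cells[i] + cells[i + 1]);"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 2, dev, "", NULL, NULL);
    cl_kernel k0 = clCreateKernel(prog, "update_lattice", NULL);
    cl_kernel k1 = clCreateKernel(prog, "update_lattice", NULL);
    clSetKernelArg(k0, 0, sizeof(cl_mem), &half0);
    clSetKernelArg(k1, 0, sizeof(cl_mem), &half1);

    size_t gsize = HALF;
    for (int step = 0; step < 100; ++step) {
        /* 1. Each GPU updates its own half of the lattice. */
        clEnqueueNDRangeKernel(q[0], k0, 1, NULL, &gsize, NULL, 0, NULL, NULL);
        clEnqueueNDRangeKernel(q[1], k1, 1, NULL, &gsize, NULL, 0, NULL, NULL);
        clFinish(q[0]);
        clFinish(q[1]);

        /* 2. Halo exchange done by hand: each device's ghost cell is the
         *    first site of the other half (periodic wrap for the toy rule).
         *    The programmer schedules the copies and the synchronisation. */
        clEnqueueCopyBuffer(q[0], half1, half0, 0, HALF * sizeof(float),
                            HALO * sizeof(float), 0, NULL, NULL);
        clEnqueueCopyBuffer(q[1], half0, half1, 0, HALF * sizeof(float),
                            HALO * sizeof(float), 0, NULL, NULL);
        clFinish(q[0]);
        clFinish(q[1]);
    }
    /* resource releases omitted */
    return 0;
}
```

This is exactly the bookkeeping (per-device queues, manual domain decomposition, explicit halo copies and synchronisation) that I would like a library to hide.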

Thanks!


Solution

  • On the academic side, there is this paper from Seoul National University in South Korea:

    "Achieving a single compute device image in OpenCL for multiple GPUs", http://dl.acm.org/citation.cfm?id=1941591

    The authors propose an automatic mechanism for dividing a single kernel across multiple GPUs, so the devices appear as one compute device to the programmer. Unfortunately, their framework has not been released yet.