Tags: opengl, raycasting, vertex-array

Is it ok to render Raycasting using vertex array?


I'm making a simple FPS game using raycasting, because I thought it's a very fast and light method. My raycasting function saves data into a vertex array, which is then rendered by OpenGL. But because this array contains a vertex for every pixel on the screen, every frame I'm rendering 2,073,600 vertices (1920x1080), which in my opinion isn't exactly great. So at this point I think it would be a better idea to ditch the whole raycasting thing and just make it look like a raycaster but with real 3D, which would mean rendering only about 20 vertices per frame.

So, what should I do? Should I keep raycasting but with a different rendering method? Should I switch to real 3D? Or are 2,073,600 vertices per frame just fine?


Solution

  • Fastest would be to ray cast directly on the GPU. You can use compute shaders for this (I do not have any experience with those); however, it's also possible to do it in the standard shader rendering pipeline. See my GLSL version:

    Of course, you need to add a BVH or octree to get reasonable speeds for complex scenes...
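    To illustrate the pipeline-based approach, here is a minimal sketch of my own (not the linked GLSL version): one ray per fragment, tested against a single hard-coded sphere. The uniforms `cam_pos`, `sphere_c`, `sphere_r` and the interpolated `ray_dir` input are assumptions; a real scene would loop over primitives through a BVH/octree instead.

    ```glsl
    #version 330 core
    uniform vec3 cam_pos;    // camera origin in world space (assumed uniform)
    uniform vec3 sphere_c;   // hypothetical test sphere center
    uniform float sphere_r;  // hypothetical test sphere radius
    in vec3 ray_dir;         // interpolated from a fullscreen quad's corner rays
    out vec4 frag_color;

    void main() {
        vec3 d = normalize(ray_dir);
        vec3 oc = cam_pos - sphere_c;
        // ray-sphere intersection: t^2 + 2bt + c = 0
        float b = dot(oc, d);
        float c = dot(oc, oc) - sphere_r * sphere_r;
        float disc = b * b - c;
        if (disc < 0.0) discard;        // ray misses the sphere
        float t = -b - sqrt(disc);      // nearest hit distance
        vec3 n = normalize(cam_pos + t * d - sphere_c);
        frag_color = vec4(0.5 + 0.5 * n, 1.0); // shade by surface normal
    }
    ```

    The same intersection math works in a compute shader; the fragment-shader route just reuses the rasterizer to launch one thread per pixel.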

    If you insist on ray casting on the CPU side, then it would be much better to store your output in two textures: one holding depth and the other RGB color. If you have access to 4-component textures with enough precision, you can use an RGBD format and a single texture. Then to render, you just draw a single quad and the fragment shader does the rest...
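    The CPU side of the RGBD approach can be sketched like this (my own minimal example; the buffer layout and `store_ray` helper are assumptions). Each ray writes one RGBA32F texel, so the whole frame goes to the GPU as a single `glTexImage2D` upload instead of two million vertices.

    ```c
    #include <stdio.h>

    /* Small demo resolution; a real raycaster would use the screen size. */
    #define W 4
    #define H 3

    /* One RGBA32F texel per pixel: R,G,B = ray color, A = hit depth.
       This array is what you would upload with glTexImage2D(..., GL_RGBA32F, ...). */
    static float rgbd[W * H * 4];

    /* Hypothetical helper: called once per cast ray with its result. */
    void store_ray(int x, int y, float r, float g, float b, float depth) {
        float *t = &rgbd[(y * W + x) * 4];
        t[0] = r;
        t[1] = g;
        t[2] = b;
        t[3] = depth; /* fragment shader can write this to gl_FragDepth */
    }

    int main(void) {
        /* Pretend the ray for pixel (1,2) hit an orange wall 7.5 units away. */
        store_ray(1, 2, 0.5f, 0.25f, 0.0f, 7.5f);
        printf("%.2f %.2f\n", rgbd[(2 * W + 1) * 4], rgbd[(2 * W + 1) * 4 + 3]);
        return 0;
    }
    ```

    After uploading the buffer, the fragment shader on the fullscreen quad simply samples the texture for color and depth per pixel.
    
    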