I'm currently working on a ray tracing project using Python and have encountered performance issues with rendering the entire scene each time. I want to implement a more efficient rendering approach similar to how Unreal Engine handles it.
Specifically, I'm looking for guidance on implementing the following optimizations:
Frustum Culling: I want to avoid rendering objects that are outside the camera's frustum. What is the best way to implement frustum culling in my ray tracing code?
Dynamic Resolution Scaling: I'm interested in rendering each object at a specific resolution based on its distance from the camera. How can I implement dynamic resolution scaling to optimize rendering performance?
I've found ray tracing code on GitHub, and while it provides a solid foundation, I'm struggling to integrate these optimization techniques into my existing code. Could someone provide guidance or code snippets for achieving these optimizations in a ray tracing context?
Regarding 1:
Frustum culling does not make sense in ray tracing; it makes sense for rasterization.
Rasterization is a top-down approach. For instance, you want to render a cube. In rasterization you say, in a top-down manner: to render a cube I just need to render its faces. To render a face (quad) of the cube you just need to render 2 triangles. To render a triangle you project its vertices via a projection matrix and then do some clipping. After projection and clipping, to render the resulting 2D triangle you draw its fragments. To draw a fragment you need a Z-buffer and a Z-buffer test, maybe some alpha blending. In rasterization you render from the top (a cube) down to the fragment/pixel level. When it comes to frustum culling, you simply say: if the cube (all 8 of its vertices) is not within the view frustum, I can skip the whole top-down process of projecting individual triangles, clipping, etc. Usually you use bounding boxes for more complicated objects to quickly reject objects outside of the view frustum.
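A minimal sketch of such a culling test in Python, assuming the six frustum planes are given as (normal, d) pairs with inward-facing normals, i.e. a point p is inside a plane when dot(normal, p) + d >= 0; the function and parameter names here are made up for illustration:

```python
import numpy as np

def aabb_outside_frustum(aabb_min, aabb_max, frustum_planes):
    """Return True if the axis-aligned box lies completely outside the frustum."""
    for normal, d in frustum_planes:
        # Pick the box corner furthest along the plane normal (the "p-vertex").
        p_vertex = np.where(normal >= 0.0, aabb_max, aabb_min)
        # If even that corner is behind the plane, the whole box is outside.
        if np.dot(normal, p_vertex) + d < 0.0:
            return True
    return False
```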
Ray tracing is bottom-up. You start at the pixel level: for each pixel you ask where its color contribution comes from. You trace a ray from the pixel into the scene. Usually you have a bounding volume hierarchy (BVH). You find out that the ray hits the bounding box of your scene, go down the hierarchy, and find out that the ray intersects a triangle. You find out that the triangle belongs to an object (e.g. a cube) which has some specific BRDF. You sample the BRDF and get the color contribution. Frustum culling does not make sense here, since rays can bounce around the whole scene (hit a reflective cube whose reflected ray picks up an object outside of your frustum).
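The test that drives this traversal is a ray/AABB intersection. Here is a minimal sketch of the standard slab test, assuming numpy vectors and, for brevity, a ray direction with no zero components; all names are placeholders:

```python
import numpy as np

def ray_hits_aabb(ray_origin, ray_dir, aabb_min, aabb_max):
    """Slab test: return True if the ray intersects the box."""
    inv_dir = 1.0 / ray_dir  # assumes no zero components in ray_dir
    t1 = (aabb_min - ray_origin) * inv_dir
    t2 = (aabb_max - ray_origin) * inv_dir
    # Entry and exit distances of the ray through the three slabs.
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)
```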
I think what you are looking for is not frustum culling but an acceleration structure such as a BVH. There are different ways to implement BVHs. For instance, you can use octrees: https://book.vertexwahn.de/docs/rendering/octree/
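To give an idea of the structure, here is a rough sketch of a median-split BVH build over triangle bounds; a real builder would use something like the surface area heuristic, and the `Tri` type with `bounds` and `centroid` attributes is hypothetical:

```python
import numpy as np

class BVHNode:
    def __init__(self, tris):
        # Node bounds enclose the bounds of all triangles below it.
        self.bounds_min = np.min([t.bounds[0] for t in tris], axis=0)
        self.bounds_max = np.max([t.bounds[1] for t in tris], axis=0)
        self.tris, self.left, self.right = tris, None, None
        if len(tris) > 4:  # arbitrary leaf-size threshold
            # Split along the longest axis at the median centroid.
            axis = int(np.argmax(self.bounds_max - self.bounds_min))
            tris.sort(key=lambda t: t.centroid[axis])
            mid = len(tris) // 2
            self.left, self.right = BVHNode(tris[:mid]), BVHNode(tris[mid:])
            self.tris = None  # interior nodes store no triangles
```

Traversal then descends into a child only when a test like `ray_hits_aabb` above succeeds on that child's bounds, which is what replaces frustum culling as the "skip most of the scene" mechanism.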
Regarding 2:
This also comes from the rasterization domain. A translation to ray tracing would be to spend fewer samples on objects that are far away than on objects close to the viewer.
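One simple way to express that idea is to scale the per-pixel sample budget by the distance of the primary hit. The falloff constants below are arbitrary assumptions, not a recommendation:

```python
def samples_for_distance(hit_distance, base_samples=64, falloff=10.0, min_samples=4):
    """Scale the per-pixel sample count down with primary-hit distance."""
    scale = falloff / (falloff + hit_distance)
    return max(min_samples, int(base_samples * scale))

# Example: a hit 5 units away gets more samples than one 100 units away.
print(samples_for_distance(5.0), samples_for_distance(100.0))
```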