Just for learning purposes, I have been looking at raytracing techniques. I recently discovered Global Illumination and its variants after reading this amazing work by Kevin Beason, and I tried to port smallpt to another language (Lua).
I got something working, but it renders the scene too slowly for my taste. Digging around the internet, I've seen in lots of technical papers covering the subject that this is the main drawback of global illumination techniques, especially path tracing.
It seems there are methods to speed up the process, such as space-filling curves (Hilbert curves, for instance). Basically, they all divide the viewport into buckets (or tiles), and then tell the renderer to path-trace each bucket in a specific order. I haven't looked at how to implement Hilbert curves yet, but I would like to understand, in practical terms, how this can make the whole process faster.
My first assumption was that, taking each bucket separately, the renderer was called only on specific pixels, with the others sampled using interpolation tricks; but actually, it seems the renderer is called on every single pixel of the bucket.
So, as a result, the renderer processes every pixel of the image, which amounts to the same work as two nested for loops (over rows and columns), in my humble opinion. I know I am obviously missing something, so I am looking forward to a clear explanation.
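To make my point concrete, here is a small Python sketch of how I picture it (the image and tile sizes are made up, just for illustration): a bucketed traversal visits exactly the same pixels as two nested loops, so the total amount of work looks identical to me.

```python
# Sketch of my understanding: iterating bucket-by-bucket still touches
# every pixel exactly once, the same total work as two nested loops.
# (Image and tile sizes here are arbitrary, just for illustration.)

WIDTH, HEIGHT, TILE = 8, 6, 4

def scanline_order(w, h):
    """Classic traversal: two nested loops over rows and columns."""
    return [(x, y) for y in range(h) for x in range(w)]

def bucket_order(w, h, tile):
    """Bucketed traversal: the image is cut into tiles, rendered tile by tile."""
    pixels = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    pixels.append((x, y))
    return pixels

a = scanline_order(WIDTH, HEIGHT)
b = bucket_order(WIDTH, HEIGHT, TILE)
print(len(a) == len(b) and set(a) == set(b))  # True: same pixels, same count
```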
Thanks in advance.
Smallpt is designed to demonstrate global illumination with path-tracing in the fewest lines of code possible. As such, it does not feature any performance optimisations.
For anything more complex than simple scenes, most of the rendering time is spent calculating which object a ray hits next along its path, as this has to be done many times for each ray.
A naive implementation has to, at every ray bounce, compute the intersection of the ray with every object in the scene in order to determine which object is hit next.
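That naive closest-hit search can be sketched like this (a hypothetical Python scene of spheres, not smallpt's actual code; note the inner loop over the whole scene at every bounce):

```python
import math

# Hypothetical minimal scene: spheres stored as (center, radius) tuples.
# The naive closest-hit search tests the ray against *every* object.

def intersect_sphere(orig, dirn, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    oc = tuple(o - c for o, c in zip(orig, center))
    b = 2.0 * sum(d * o for d, o in zip(dirn, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction assumed normalised, so a == 1
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:
            return t
    return None

def closest_hit(orig, dirn, scene):
    """O(n) per bounce: every object is tested, the nearest hit wins."""
    best_t, best_obj = math.inf, None
    for center, radius in scene:
        t = intersect_sphere(orig, dirn, center, radius)
        if t is not None and t < best_t:
            best_t, best_obj = t, (center, radius)
    return best_t, best_obj

scene = [((0, 0, 5), 1.0), ((0, 0, 10), 1.0)]
t, obj = closest_hit((0, 0, 0), (0, 0, 1), scene)
print(t)  # 4.0: the nearer sphere's front surface
```

With many bounces per ray and many rays per pixel, this linear scan over the scene is exactly the cost that acceleration structures attack.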
To improve on this, acceleration structures are used to reduce the number of ray/object intersection tests that have to be performed, typically by partitioning the volume enclosing the scene so that unnecessary tests can be skipped.
There are two main approaches:

- Spatial partitioning (e.g. KD-trees), which recursively subdivides the space containing the scene into disjoint regions.
- Object partitioning (e.g. Bounding Volume Hierarchies), which recursively groups the scene's objects into enclosing volumes.

Performance-wise both work roughly equally well; however, I believe BVHs are simpler to implement.