I am learning the programmable rendering pipeline by implementing a tiny software renderer. I am trying to implement it in a 'hardware' style. However, I am not familiar with the GPU pipeline and have run into some problems with homogeneous clipping.
According to this thread, suppose we have two points e0, e1 in 3D eye coordinates, which are projected to h0(-70, -70, 118, 120), h1(-32, -99, -13, -11) in the 4D homogeneous clip space. We then interpolate in the 4D homogeneous space: the segment h0-h1 is clipped by the plane w = -x (x = -1 in NDC) at the 4D point h(t) = (1-t)*h0 + t*h1, with t = 0.99. Without loss of generality, suppose we feed the h0-h(0.99) part (which is viewable) to the rasterization stage. So we need to generate the corresponding vertex properties of h(0.99) (in the same format as the output of the vertex shader). My question is: how do I generate these new vertices' properties?
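For reference, here is a minimal sketch (in Python, with a made-up vertex/attribute layout) of how a hardware-style clipper might compute t against one clip plane and reuse the same t for the vertex properties:

```python
# Sketch of one-plane homogeneous clipping (hypothetical attribute layout).
# A vertex is (clip_pos, attrs): clip_pos = [x, y, z, w], attrs = e.g. [r, g, b].

def lerp(a, b, t):
    return [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]

def clip_against_plane(v0, v1, dist):
    """dist(pos) >= 0 means 'inside'.  For the x = -w plane use
    dist = lambda p: p[0] + p[3]  (i.e. x + w >= 0)."""
    d0, d1 = dist(v0[0]), dist(v1[0])
    if d0 >= 0 and d1 >= 0:
        return v0, v1                      # segment fully inside
    if d0 < 0 and d1 < 0:
        return None                        # segment fully outside
    t = d0 / (d0 - d1)                     # where the signed distance crosses zero
    # The same t interpolates BOTH the 4D position and every vertex attribute:
    v = (lerp(v0[0], v1[0], t), lerp(v0[1], v1[1], t))
    return (v0, v) if d0 >= 0 else (v, v1)
```

A full clipper would run this once per clip plane, but the per-plane step above is the part the question is about: the new vertex's properties come from interpolating the original vertex-shader outputs with the same t used for the 4D position.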
Update: I tried using t as the interpolation variable to get the vertex properties of h(t), and got a reasonable result. I am wondering why the t from 4D space gives a good interpolation result for the 3D vertex properties.
I am wondering why the t from 4D space can get good interpolation result in 3D vertex properties?
Because that's how math works. Or more to the point, that's how linear math works.
Without getting too deep into the mathematics, a linear transformation is a transformation between two spaces that preserves the linear nature of the original space. For example, two lines that are parallel to one another will remain parallel after a linear transformation. If you perform a 2x scale in the Y direction, the new lines will be longer and farther from the origin. But they will still be parallel.
Let's say we have a line AB, and you define the point C, which is the midpoint between A and B. If you perform the same linear transformation on A, B, and C, the new point C1 will still be on the line A1B1. Not only that, C1 will still be the midpoint of the new line.
We can even generalize this: C could be any point that fits the equation C = (B-A)t + A, for any t. A linear transformation of A, B, and C will not change t in this equation.
In fact, that is what a linear transformation really means: it is a transformation that preserves t in that equation, for all points A, B, and C in the original space.
The fact that you have 4 dimensions in your space is ultimately irrelevant to the vector equation above. Linear transformations in any space will preserve t. A matrix transform represents a linear transformation from one space to another (usually).
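As a quick sanity check (a throwaway numeric sketch, not part of any renderer), you can verify that an arbitrary 4x4 matrix transform preserves t: transforming the interpolated point gives the same result as interpolating the transformed endpoints with the same t.

```python
# Numeric check: a 4x4 matrix transform (a linear map) preserves the parameter t
# in C = (B - A)t + A.  The matrix below is just an arbitrary example.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def lerp4(a, b, t):
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

M = [[2, 0, 1, 0],
     [0, 3, 0, 1],
     [1, 0, 1, 0],
     [0, 0, 2, 1]]

A = [1.0, 2.0, 3.0, 1.0]
B = [4.0, -1.0, 0.0, 1.0]
t = 0.25
C = lerp4(A, B, t)

# M(C) equals the lerp of M(A) and M(B) at the same t -- t survives the transform.
lhs = mat_vec(M, C)
rhs = lerp4(mat_vec(M, A), mat_vec(M, B), t)
```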
Also, your original 3D positions were really 4D positions, with the W assumed to be 1.0.
Do be aware however that the transformation from clip-space (4D homogeneous) to normalized-device-coordinate space (3D non-homogeneous) is non-linear. The division-by-W is not a linear transformation. That's one reason why you do clipping in 4D homogeneous clip-space, where we still preserve a linear relationship between the original positions and clip-space.
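To see that non-linearity concretely, here is a small sketch: take the midpoint of two clip-space points with different w values, apply the divide-by-W to everything, and note that it no longer lands at the midpoint in NDC.

```python
# The perspective divide (x/w, y/w, z/w) is not linear: the clip-space midpoint
# does not map to the NDC midpoint when the endpoints have different w.

def ndc(p):
    x, y, z, w = p
    return [x / w, y / w, z / w]

def lerp(a, b, t):
    return [(1 - t) * u + t * v for u, v in zip(a, b)]

p0 = [0.0, 0.0, 0.0, 1.0]   # w = 1
p1 = [8.0, 0.0, 0.0, 4.0]   # w = 4

mid_clip = lerp(p0, p1, 0.5)        # midpoint in clip space: [4, 0, 0, 2.5]
a = ndc(mid_clip)                   # x = 4 / 2.5 = 1.6
b = lerp(ndc(p0), ndc(p1), 0.5)     # x = (0 + 2) / 2 = 1.0 -- a different point
```

The same t (here 0.5) picks out different points in the two spaces, which is exactly why t is only trustworthy before the divide.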
This is also why perspective-correct interpolation of per-vertex outputs is important: the space you're doing your rasterization in (window space) is not a linear transformation of the space output by the vertex shader (clip space). This means that t is not preserved. When interpolating, you usually need to compensate for that in order to maintain the linear relationships of your per-vertex values.
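A common way to do that compensation (standard perspective-correct interpolation, sketched here for a single hypothetical scalar attribute) is to interpolate attr/w and 1/w linearly in window space, since those quantities *are* linear there, and then divide:

```python
# Perspective-correct interpolation of a scalar attribute between two vertices.
# s is the linear parameter in WINDOW space (what the rasterizer steps in);
# w0, w1 are the clip-space w values of the two endpoints.

def perspective_correct(attr0, w0, attr1, w1, s):
    # attr/w and 1/w vary linearly across the primitive in window space,
    # so interpolate those, then recover the attribute with a divide.
    num = (1 - s) * (attr0 / w0) + s * (attr1 / w1)
    den = (1 - s) * (1 / w0) + s * (1 / w1)
    return num / den
```

At s = 0 and s = 1 this reproduces attr0 and attr1 exactly; in between, it skews the result toward the endpoint with the smaller w (the one closer to the camera), which is the correction the answer is describing.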