Tags: c++, graphics, rendering, homogenous-transformation

3D to 2D projection using view frustum has issues with translation


I need to create a software rasterizer that, given the projection (P), view (V) and model (M) matrices, can render a 2D image of a point cloud (pc) from the given point of view in bitmap format (a monochrome bitmap).

I've got the math down (and things seem to be working for the most part):

  1. Transform the point cloud's points: pc' = (P x V x M) x pc (note that the point cloud is already in homogeneous coordinates).
  2. For each point, divide all components by its w (while being careful to discard points whose w is close to zero).
  3. Discard points that fall outside the view frustum (by extracting the frustum planes from P using the method described here).
  4. Transform the x and y coordinates of each point to screen coordinates using (x + 1) * imageWidth / 2 and (-y + 1) * imageHeight / 2 (so that y is flipped into image space).
  5. Map the resulting x and y coordinates to a linear bitmap index using (int)y * imageWidth + (int)x (with bounds checking).
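
For reference, here is a rough sketch of that pipeline in code. It is not my actual rasterizer: Vec4, Mat4, mvp and rasterizePoints are stand-in names, the matrices are assumed column-major (like OpenGL's) and folded into a single mvp = P x V x M, and the frustum-plane extraction of step 3 is replaced with an equivalent test in normalized device coordinates.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Stand-in math types (column-major, like OpenGL).
    struct Vec4 { float x, y, z, w; };
    struct Mat4 {
        float m[4][4]; // m[column][row]
        Vec4 operator*(const Vec4& v) const {
            return { m[0][0]*v.x + m[1][0]*v.y + m[2][0]*v.z + m[3][0]*v.w,
                     m[0][1]*v.x + m[1][1]*v.y + m[2][1]*v.z + m[3][1]*v.w,
                     m[0][2]*v.x + m[1][2]*v.y + m[2][2]*v.z + m[3][2]*v.w,
                     m[0][3]*v.x + m[1][3]*v.y + m[2][3]*v.z + m[3][3]*v.w };
        }
    };

    // Rasterize a point cloud into a monochrome bitmap (one byte per pixel).
    // mvp is assumed to be P * V * M already multiplied together.
    void rasterizePoints(const std::vector<Vec4>& pc, const Mat4& mvp,
                         int imageWidth, int imageHeight,
                         std::vector<std::uint8_t>& bitmap)
    {
        bitmap.assign(static_cast<std::size_t>(imageWidth) * imageHeight, 0);

        for (const Vec4& p : pc) {
            // Step 1: transform into clip space.
            Vec4 c = mvp * p;

            // Step 2: perspective divide; discard points whose w is close to
            // zero or negative (behind the camera for a standard projection).
            if (c.w < 1e-6f) continue;
            float x = c.x / c.w;
            float y = c.y / c.w;
            float z = c.z / c.w;

            // Step 3: frustum test in NDC; for a standard projection matrix
            // this is equivalent to testing against the extracted planes.
            if (std::fabs(x) > 1.0f || std::fabs(y) > 1.0f || std::fabs(z) > 1.0f)
                continue;

            // Step 4: NDC -> screen coordinates (y flipped).
            int sx = static_cast<int>((x + 1.0f) * imageWidth / 2.0f);
            int sy = static_cast<int>((1.0f - y) * imageHeight / 2.0f);

            // Step 5: linear bitmap index, with bounds checking.
            if (sx < 0 || sx >= imageWidth || sy < 0 || sy >= imageHeight)
                continue;
            bitmap[static_cast<std::size_t>(sy) * imageWidth + sx] = 255;
        }
    }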

It seems that everything works fine: I get exactly the same bitmap as if I were rendering with OpenGL, and rotating the point cloud by an arbitrary quaternion still gives valid results.

Things are good until I have a translation component in matrix M! As soon as I add the slightest amount of translation, the image breaks: the point cloud gets heavily distorted, as if a non-affine transform had been applied to it. It doesn't matter along which direction the translation is applied; ANY translation messes everything up to the point that the point cloud is no longer recognizable. At first I thought my model matrix was transposed (which would result in a non-affine transformation), but that doesn't appear to be the case.

I could post more of the code if needed, but given the above overview, am I missing anything? Is there any special consideration that may be needed?


Solution

  • The problem was so silly that I'm ashamed of wasting this much time.

    It turned out that some of the points in my point cloud had wrong w components. I wasn't running into any issues on the OpenGL side because the shader was manually setting every w to 1. On the rasterizer side, the wrong w values caused points farther from the camera to be projected to wrong perspective locations. That also explains why the problem only showed up once M contained a translation: the translation column of an affine matrix is multiplied by the point's w, so points with a wrong w get translated by the wrong amount.

    The test spheres that I used didn't have any problems because they had the right w components.
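
    On the rasterizer side, the equivalent defensive fix is to mirror what the shader was doing and force every w to 1 before transforming. A minimal sketch, reusing the Vec4 type from the code above (normalizePointW is just a hypothetical helper name):

        // Mirror the vertex shader: treat every input point as a position
        // and force w = 1 before applying P * V * M.
        void normalizePointW(std::vector<Vec4>& pointCloud)
        {
            for (Vec4& p : pointCloud)
                p.w = 1.0f;
        }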

    Edit:
    Just thought I'd also mention this: there's no need to extract the view frustum planes to check whether projected points fall inside the view frustum. One can simply check whether the x', y' and z' components of a transformed point (x', y', z', w') (i.e. after multiplying by P x V x M) all fall in the range [-w', w']. If all three components are in that range, the point is visible; otherwise it lies outside the view frustum.
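
    A minimal sketch of that check, using the same Vec4 type as in the code above for the clip-space point (insideFrustum is a hypothetical helper name):

        // Clip-space visibility test; no frustum-plane extraction needed.
        // The point is inside the frustum iff every component lies in [-w, w].
        // (For w < 0 that range is empty, so points behind the camera are
        // rejected as well.)
        bool insideFrustum(const Vec4& c)
        {
            return -c.w <= c.x && c.x <= c.w &&
                   -c.w <= c.y && c.y <= c.w &&
                   -c.w <= c.z && c.z <= c.w;
        }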