Tags: opencv, point-clouds, color-depth

Generating a 3D point cloud with predicted depths


I am trying to generate a 3D point cloud (PC) from an image using predicted depths. The camera intrinsics and the ground-truth (GT) depth images are given. First, I generate a PC from the GT depth using the camera intrinsics, and it looks like this: PC with GT Depths

But when I generate the PC for the same image using the predicted depths, the result looks weird. Here is the PC with the predicted depths: PC with Predicted Depths

I am using the same camera intrinsics, the same code, and the same procedure for both PC generations. I expected the two PCs to be close, but what I am getting is very different. What am I doing wrong?

My code for generating the point cloud is as follows:

int rows = RGB.size[0];
int cols = RGB.size[1];
for (int v = 0; v < rows; v++) {
    for (int u = 0; u < cols; u++) {
        // The depth map stores depth scaled by 5000; divide by 5000.0
        // (floating-point), since integer division would truncate z.
        auto z = depth.at<ushort>(v, u) / 5000.0;
        auto x = (u - intrinsics.cx) * z / intrinsics.fx;
        auto y = (v - intrinsics.cy) * z / intrinsics.fy;

        // std::cout<<"x = "<< x << " y = " << y <<std::endl;
        point3d << x, y, z;
        pc.vertices.push_back(point3d);
        pc.colors.push_back(RGB.at<cv::Vec3b>(v, u));

    }
}

The GT depth image: GT Depth Image The predicted depth image: Predicted Depth Image

Edit: I found the mistake. The depth values are scaled by 5000. I had missed that and was not dividing the value of z while constructing the point cloud. After dividing by 5000, the problem was resolved.


Solution

  • The depth value should be divided by 5000 while constructing the 3D scene, because the depth values are stored scaled by 5000 in the original depth images.

    For details, see the camera intrinsics and the guide on how to construct the 3D point cloud.