Tags: android, camera, projection, google-project-tango

Tango: Why Intrinsic data from RGB camera and not from depth camera for point cloud projection?


I was reading the forum and this post caught my attention, since I had to perform the same operation. However, the accepted answer uses the intrinsic data of the RGB camera, which I do not understand. Why not use the intrinsic data of the depth camera for the projection onto the image plane, since the point cloud is constructed with the depth camera?

(I am posting this as a new question because I do not have enough reputation to write it as a comment.)


Solution

  • The answer depends on what you want as the depth image. The accepted answer assumes the desired depth map is viewed through the color camera.

    The easy way to think about it is that a point cloud is just a bunch of points in 3D space; how it projects onto an image plane depends on the viewing camera. If you want the viewing camera to be the depth camera, you could use the depth camera's intrinsics. More commonly, though, people want to use the color camera, because that makes color lookup (and similar per-pixel operations) straightforward.
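To make the idea above concrete, here is a minimal sketch of pinhole projection with NumPy. The function name, the intrinsic values, and the rotation/translation placeholders are all illustrative assumptions, not part of the Tango API; in a real Tango app you would first transform the points from the depth camera frame into the color camera frame using the device extrinsics, then project with the color camera's `fx`, `fy`, `cx`, `cy`.

```python
import numpy as np

def project_to_image(points_cam, fx, fy, cx, cy):
    """Project 3D points, already expressed in the viewing camera's
    frame, onto that camera's image plane with a pinhole model."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# Hypothetical depth-to-color extrinsics (identity here for brevity):
# in practice R and t come from the device calibration.
R = np.eye(3)
t = np.zeros(3)

points_depth = np.array([[0.0, 0.0, 2.0]])     # one point, 2 m ahead
points_color = points_depth @ R.T + t          # move into color frame

# Illustrative color-camera intrinsics (not real Tango values).
uv = project_to_image(points_color, fx=1042.0, fy=1042.0,
                      cx=637.0, cy=354.0)
print(uv)  # a point on the optical axis lands at (cx, cy)
```

Note that a point on the optical axis always projects to the principal point `(cx, cy)` regardless of its depth, which is a handy sanity check for the intrinsics you plug in.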