computer-vision · object-detection · disparity-mapping

Is it possible to create/use the V-disparity map with data from a Time-of-Flight sensor (instead of the stereo-vision approach normally used)?


I am doing my master's thesis on floor-based obstacle detection with a Time-of-Flight (ToF) camera.

I found out there are a lot of applications that use the V- and U-disparity maps to detect and track objects and the ground plane with a stereo-vision approach. They calculate the disparity from the two captured images and then build a histogram of the values, so that in the V-disparity map the ground plane appears as a slanted line and obstacles stand out from it. So my question is whether it is possible to generate the disparity map from the data of a Time-of-Flight camera. As far as I know, those sensors give me back a point cloud (x, y, z coordinates for each pixel) and an amplitude image of the scene.

In stereo vision, the depth behind the disparity is calculated like this:

depth = (baseline * focal_length) / disparity
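For concreteness, the formula above can be checked with a quick sketch (the baseline, focal length, and disparity values here are made up for illustration):

```python
# Quick numeric check of the stereo depth formula.
# Hypothetical values: 0.1 m baseline, 700 px focal length, 35 px disparity.
baseline_m = 0.1
focal_length_px = 700.0
disparity_px = 35.0

depth_m = (baseline_m * focal_length_px) / disparity_px
print(depth_m)  # 2.0 -> the point is 2 m from the camera
```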

A ToF camera has a lens and therefore uses the pinhole camera model to compute the correct depth. So is there any possibility to obtain a disparity map with a ToF camera?

Thanks in advance!


Solution

  • TL;DR: No, you can't generate a disparity map from a time-of-flight camera.

    I have not used many time-of-flight cameras, but the ones I have used have given me uint16 matrices. The shape of the matrices was X by Y, with the uint16 values corresponding to the distance from the camera in millimeters. This is not a point cloud; it is a depth map.

    Since there is only one camera, there is no disparity and thus no disparity map, but I think you know that.

    To create a disparity map from the depth map, I assume you could just make up some fake distance between the cameras (baseline) and rearrange your equation from there. So it would be disparity = (fake_baseline * focal_length) / depth. From there you could calculate your U- and V-disparity maps as usual.
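    A minimal sketch of that idea, assuming a uint16 depth map in millimeters as described above; `fake_baseline_mm` and `focal_length_px` are made-up values, not parameters of any real camera:

    ```python
    import numpy as np

    def v_disparity(depth_mm, fake_baseline_mm=100.0, focal_length_px=500.0,
                    n_bins=64):
        """Convert a ToF depth map into a fake-baseline disparity map and
        build the V-disparity image (one disparity histogram per row)."""
        depth = depth_mm.astype(np.float64)
        disparity = np.zeros_like(depth)
        valid = depth > 0  # treat 0 as "no ToF return"
        disparity[valid] = (fake_baseline_mm * focal_length_px) / depth[valid]

        # Quantize the disparities into integer bins.
        d_max = disparity.max() if disparity.max() > 0 else 1.0
        bins = np.minimum((disparity / d_max * (n_bins - 1)).astype(int),
                          n_bins - 1)

        # V-disparity: for each image row, count how many pixels fall
        # into each disparity bin.
        rows = depth.shape[0]
        v_disp = np.zeros((rows, n_bins), dtype=np.int32)
        for r in range(rows):
            counts = np.bincount(bins[r][valid[r]], minlength=n_bins)
            v_disp[r] = counts[:n_bins]
        return disparity, v_disp
    ```

    On a flat floor seen by a forward-tilted camera, the per-row depth grows with distance, so the occupied bins drift across columns from row to row, which is exactly the slanted line the stereo literature fits with e.g. a Hough transform.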

    Since your baseline will be a made-up number, I have a hunch that this wouldn't be useful for ground-plane detection or obstacle avoidance. Thus, I think you would be better off just using the depth map from the time-of-flight camera as-is.

    I have not tried this before, nor ever used u-disparity maps before, so take my answer with a grain of salt (I could be wrong).