computer-vision · 3d-reconstruction · slam

How to merge point clouds after triangulation


I am working on Structure from Motion. I have done the following steps so far:

  1. Feature Matching
  2. Fundamental Matrix
  3. Essential Matrix
  4. Camera Matrix P
  5. From triangulation, I got Point3d values for all the matched features, which I stored in a pointcloud variable.
  6. Bundle adjustment, to optimize the poses and the point cloud.
  7. Adding more views to the reconstruction.
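For context, steps 4–5 for a single feature can be sketched as a two-view DLT triangulation (a minimal numpy stand-in for OpenCV's `triangulatePoints`; the projection matrices and pixel coordinates below are assumptions for illustration):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views by linear DLT.

    P1, P2 : 3x4 projection matrices (K @ [R|t]).
    x1, x2 : 2D pixel coordinates of the same feature in each image.
    Returns the dehomogenized 3-vector.
    """
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```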

The problem occurs at step 7. Suppose I have three images 1, 2, 3, and point_1 corresponds to point_2, and point_2 corresponds to point_3, where point_1, point_2, point_3 lie in image_1, image_2, image_3 respectively.

After triangulation, point_1 and point_2 give worldPoint_1, and point_2 and point_3 give worldPoint_2.

worldPoint_1 and worldPoint_2 should be the same, because point_1, point_2, and point_3 are observations of the same real-world point. But because of noise, worldPoint_1 and worldPoint_2 are not equal.

So my question is: how do I merge the point clouds after adding a new image to the reconstruction and triangulating?
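To make the duplication concrete, here is a hypothetical sketch (the feature indices and coordinates are invented for illustration) of how pairwise matches chain into one track, and the naive merge-by-averaging that the situation suggests:

```python
import numpy as np

# Hypothetical match storage: feature index in one image -> feature index
# in the next. Chaining them links point_1 -> point_2 -> point_3 into a track.
matches_12 = {0: 5}   # feature 0 in image_1 matches feature 5 in image_2
matches_23 = {5: 9}   # feature 5 in image_2 matches feature 9 in image_3

def link_tracks(matches_12, matches_23):
    """Chain pairwise matches into tracks spanning all three images."""
    tracks = []
    for f1, f2 in matches_12.items():
        if f2 in matches_23:
            tracks.append((f1, f2, matches_23[f2]))
    return tracks

# Naive merge: average the duplicate triangulations of the same track.
worldPoint_1 = np.array([1.00, 2.00, 5.00])  # triangulated from images 1+2
worldPoint_2 = np.array([1.02, 1.98, 5.05])  # triangulated from images 2+3
merged = (worldPoint_1 + worldPoint_2) / 2
```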


Solution

  • Triangulating separately and then merging is not a good idea: if one of the triangulations is wrong, how can you tell? Instead, you should triangulate from the three points simultaneously. I am assuming you have defined some sort of least-squares problem for each pairwise triangulation, like

    `argmin_{depth} D(ray_1) + D(ray_2) // for image_1 and image_2`
    

    where ray_i is the backprojection of point_i, i.e. inverse(calibration_matrix)*point_i, and where D(.) gives you the distance of the 3D point to a ray.
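    These two ingredients can be sketched directly in numpy (the calibration matrix below is an assumption; the ray is normalized to unit length so the distance formula stays simple):

    ```python
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])  # example calibration matrix

    def backproject(K, point):
        """ray_i = inverse(calibration_matrix) * point_i, as a unit direction."""
        p_h = np.array([point[0], point[1], 1.0])
        d = np.linalg.inv(K) @ p_h
        return d / np.linalg.norm(d)

    def dist_point_to_ray(X, origin, direction):
        """D(.): distance of 3D point X to the ray origin + s * direction."""
        v = X - origin
        # Component of v orthogonal to the (unit) ray direction.
        return np.linalg.norm(v - (v @ direction) * direction)
    ```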

    I think you should try

    `argmin_{depth} sum_j D(ray_j) // for all your views image_1, ..., image_N`

    With this formulation you can also add an M-estimator to down-weight bad measurements.