image · point-clouds · 3d-reconstruction

How can I build a 3D point cloud of a space from a sequence of images with known camera positions and orientations?


I have around 15,000 images of a closed space with known camera positions and orientations, as well as the intrinsic camera parameters. Using these images I want to reconstruct a 3D model of this space. All the papers and algorithms I found while searching the web try to estimate the position and orientation parameters as well. Before using any of those algorithms I decided to ask here, since I have exact camera parameters for all the images and I want to use this data while reconstructing the 3D space.

Edit: Structure from Motion algorithms always assume that the motion data is unknown, but I have it at hand. So the problem is different here, but I cannot find its name.


Solution

  • Yes, these algorithms usually estimate the structure and the camera poses jointly. (Reconstruction with known camera poses is usually called multi-view stereo, or MVS.) However, if you already have camera pose estimates in which you are highly confident, you can either:

    1. Use them as an initialization for those algorithms and associate them with a low covariance (or simply add prior factors on some of them). You can do this each time you add a new frame to the backend.

    2. Exploit the fact that the backend (or optimizer) those algorithms use is usually flexible enough to let you set some parameter blocks constant. For example, in the open-source optimizer Ceres Solver, you can simply call

      void Problem::SetParameterBlockConstant(double *values)

      Even if your backend doesn't provide such a function, it is just a matter of setting the gradient with respect to those parameters to zero and eliminating the corresponding blocks from the Hessian matrix when you solve the system.
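To see what "structure-only" estimation with known poses looks like at its core: each 3D point can be recovered by triangulating its image observations against the known projection matrices. Below is a minimal sketch of linear (DLT) triangulation in NumPy; the camera intrinsics, poses, and the observed point are all synthetic values chosen for illustration, not data from the question:

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] from known intrinsics and pose."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate_dlt(Ps, uvs):
    """Linear (DLT) triangulation of one 3D point.

    Ps:  list of 3x4 projection matrices (known camera poses).
    uvs: list of (u, v) pixel observations of the same point.
    """
    A = []
    for P, (u, v) in zip(Ps, uvs):
        A.append(u * P[2] - P[0])   # u * p3^T - p1^T
        A.append(v * P[2] - P[1])   # v * p3^T - p2^T
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null-space vector = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup: two cameras with identity rotation, offset along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
P0 = projection_matrix(K, R, np.array([0.0, 0.0, 0.0]))
P1 = projection_matrix(K, R, np.array([-1.0, 0.0, 0.0]))

X_true = np.array([0.5, -0.2, 4.0])
uvs = [project(P0, X_true), project(P1, X_true)]
X_rec = triangulate_dlt([P0, P1], uvs)
print(X_rec)  # recovers X_true up to numerical precision
```

With noisy observations you would refine this linear estimate by minimizing reprojection error over the point only, while the pose blocks stay fixed.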
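The block-elimination idea from option 2 can be demonstrated on a toy least-squares problem: hold the "pose" parameter constant by dropping its rows and columns from the Gauss-Newton normal equations, so only the "structure" parameter is updated. This is a hand-rolled sketch of what a call like `SetParameterBlockConstant` achieves inside an optimizer, not Ceres code; the Jacobian and residuals are made-up numbers:

```python
import numpy as np

# Toy linear least-squares: residual r = J @ x - b, parameters x = [pose, point].
J = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, 1.0, 2.5])
x = np.array([0.0, 0.0])        # x[0] is the "pose" block we want to keep fixed

free = [1]                       # indices of free ("structure") parameters
r = J @ x - b
H = J.T @ J                      # Gauss-Newton Hessian
g = J.T @ r                      # gradient
Hf = H[np.ix_(free, free)]       # eliminate the fixed block's rows and columns
gf = g[free]
x[free] -= np.linalg.solve(Hf, gf)   # one step solves the linear problem exactly

print(x)   # x[0] stays at its initial value; only x[1] was updated
```

Equivalently, you can think of it as zeroing the gradient with respect to the fixed parameters: they receive no update, and the remaining system stays well-conditioned because their rows and columns are removed rather than left as zeros.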