3dgpsphotogrammetry

photogrammetry with gps to generate 3d textured cloud


As 3D reconstruction from 2D images is an extremely hard topic, and writing your own application for it is not only a challenge but (from what I am reading) also a waste of time, I would like to ask: what about having images with GPS data?

Imagine a drone flying around an object, taking photos for 3D reconstruction; the goal is to build a 3D point cloud.

Will that help at all? Knowing the position and heading of the 2D images, will that make it easier to code an application that converts this information, together with the RGB data, into a 3D model/point cloud?


Solution

  • As usual, it all depends on your goal. If you just want to have fun, you can easily get results with cm/m accuracy without much effort; if you aim for accurate results, the amount of information you need to process and implement grows rapidly with your expectations.

    Most people here don't have any experience in photogrammetry, which means you have to treat their answers more as personal opinions than as something to rely on.

    At this point, distinguish between photogrammetry and computer vision.

    If you do computer vision, it is quite easy to convert 2D images into a 3D point cloud. All the necessary algorithms are already implemented in libraries like OpenCV. If you want to start from scratch it will take you more time, but you will more or less end up replicating what is in OpenCV.
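To make the "2D images into a 3D point cloud" step concrete, here is a minimal NumPy sketch of the core triangulation primitive (linear DLT triangulation from two views), the same building block behind OpenCV's `cv2.triangulatePoints`. The camera matrices and the 3D point are synthetic assumptions, not from the original post:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # solution = null-space vector of A
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenise

# Synthetic setup: one camera at the origin, a second one translated along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.3, -0.2, 5.0])   # a 3D point in front of both cameras

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))
```

With noise-free observations the DLT recovers the point exactly; with real feature matches you would feed many such points through RANSAC-filtered correspondences, which is exactly what the OpenCV pipeline packages up for you.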

    The routines in OpenCV are fast but inaccurate: you'll probably achieve no better than mm/m to cm/m real-world accuracy. They are essentially mathematical optimization, which means: "Fit something somewhere. If the in-sample error is okay, everything is fine." That is fine for fun applications, but they are never used as-is in the professional field. So never try to sell OpenCV results as real-world accuracy; you would be committing fraud.

    Writing good photogrammetry applications is quite hard, because all of a sudden you have to think about temperature gradients and external accuracy, which has nothing to do with the back-projection error. You also need to design your targets according to the task, since SIFT-style features are too inaccurate to serve as photogrammetric targets. The lens has to be described with physical parameters, and the whole optimization process needs to be carried out in multiple steps to avoid certain systematic errors.

    So if you don't need to be accurate, go for CV algorithms and use existing libraries like OpenCV, which should be quite easy if you have a solid programming background. For photogrammetric tasks aiming at real-world accuracies < 50 µm/m, you need to invest much more time.

    So can GPS help? If you want your 3D model in a particular reference frame like ETRS89 and there is no way to find existing coordinates for certain points, then yes.

    Another use: the displacement between images can serve as a control value to catch gross errors, or as initial values for the Taylor expansion of the collinearity equations; for both, GPS tags might help.
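A sketch of the gross-error check, assuming each image carries a WGS84 lat/lon/height tag (the coordinates and the planned spacing below are hypothetical): convert the tags to Earth-centred Earth-fixed (ECEF) coordinates and compare the inter-image baseline against the planned flight spacing.

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """WGS84 geodetic coordinates -> ECEF coordinates in metres."""
    a, f = 6378137.0, 1 / 298.257223563        # WGS84 semi-major axis, flattening
    e2 = f * (2 - f)                           # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    return ((N + h) * math.cos(lat) * math.cos(lon),
            (N + h) * math.cos(lat) * math.sin(lon),
            (N * (1 - e2) + h) * math.sin(lat))

def baseline(p, q):
    """Straight-line distance between two tagged exposure stations."""
    return math.dist(geodetic_to_ecef(*p), geodetic_to_ecef(*q))

# Hypothetical exposure stations roughly 50 m apart along the flight line.
img1 = (52.0000, 13.0000, 120.0)
img2 = (52.0000, 13.0007, 121.0)
b = baseline(img1, img2)

planned, tolerance = 48.0, 30.0      # coarse: GPS noise makes tight checks pointless
assert abs(b - planned) < tolerance  # gross-error check only, not a calibration
```

Given the receiver accuracy discussed below, such a check can only flag blunders (a mis-tagged image, a dropped GPS fix), not refine the geometry.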

    On the other hand, standard non-differential GPS receivers light enough for UAVs have a pretty bad 3D positional accuracy of around 15 m, so you have to fly high (i.e. use large baselines and object distances) for the relative error to be small enough to be useful.
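To see why flying height matters, a back-of-the-envelope check: what counts is the GPS error relative to the baseline/object distance. The distances below are illustrative, not from the original post:

```python
gps_error_m = 15.0                    # typical non-differential receiver, 3D position
for distance_m in (30, 150, 1500):    # hypothetical baseline / object distances
    rel = gps_error_m / distance_m    # positional error per metre of baseline
    print(f"{distance_m:5d} m -> {rel:.3f} m/m ({rel * 100:.1f} cm/m)")
```

At a 30 m baseline the 15 m error swamps the geometry (0.5 m/m); only at kilometre-scale distances does it drop toward the cm/m regime the answer mentions for hobby-grade results.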