My intention is to build a prototype app for a FEMA contracting company that gives the public and workers augmented-reality mapping and navigation at temporary camp installations.
Some Tango apps are already available for the data-capture portion. Phi.3d has an app I plan to use for the capture itself (a photo-realistic image overlay on the point-cloud data).
So I need a developer tool that lets me take the captured scan data, possibly run a meshing post-process for modeling, and then index it to a map so it can feed the AR solution in the end-user app.
How do I index the data to a map? What developer tool could I build this with?
I will answer questions for clarification if helpful.
The question is what kind of augmentations you need. Unity3D is a good choice for gaming and for graphically impressive 3D augmentations. If the augmentations are more functional, or just routes, you can also use the Tango Java API with a 3D rendering engine like Rajawali (roughly along the lines of the sketch below). The real problem will be mapping the augmentations to the real world; for that you need some kind of digital representation of the surroundings.
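As a rough illustration, here is a minimal sketch of the Tango Java API side using plain motion tracking. The class name TangoPoseSource and the renderer hookup are placeholders of mine, not part of the SDK, and the exact constants may differ slightly between SDK releases:

    import android.content.Context;

    import com.google.atap.tangoservice.Tango;
    import com.google.atap.tangoservice.TangoConfig;
    import com.google.atap.tangoservice.TangoCoordinateFramePair;
    import com.google.atap.tangoservice.TangoPoseData;

    public class TangoPoseSource {

        private Tango mTango;

        // Device pose relative to where the Tango service started.
        private final TangoCoordinateFramePair mFramePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);

        public TangoPoseSource(Context context) {
            // The Runnable fires once the Tango service is bound and ready.
            mTango = new Tango(context, new Runnable() {
                @Override
                public void run() {
                    TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                    config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
                    mTango.connect(config);
                }
            });
        }

        // Called from the render loop; timestamp 0.0 means "the most recent pose".
        public TangoPoseData getCurrentPose() {
            return mTango.getPoseAtTime(0.0, mFramePair);
        }

        public void disconnect() {
            mTango.disconnect();
        }
    }

The renderer (Rajawali or whatever else draws the overlay) would call getCurrentPose() every frame and apply pose.translation (x, y, z) and pose.rotation (quaternion) to its camera, so the drawn routes stay registered to the device's motion.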
To use Tango's area learning feature, somebody would have to "learn" the environment first and create the mapping between the resulting area description and the augmentations. Also keep in mind that sunlight / infrared light can block Tango's depth perception, so it will not work well outdoors.
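For the area learning part, the flow is a learning walk-through followed by localization against the saved area description file (ADF). A hedged sketch of the two configs, assuming the standard Tango Java API; the helper class AreaLearningSetup is just an illustration of mine:

    import com.google.atap.tangoservice.Tango;
    import com.google.atap.tangoservice.TangoConfig;
    import com.google.atap.tangoservice.TangoCoordinateFramePair;
    import com.google.atap.tangoservice.TangoPoseData;

    import java.util.ArrayList;

    public class AreaLearningSetup {

        // Pass 1: walk the camp with learning mode on, then call
        // tango.saveAreaDescription() and note the returned ADF UUID.
        public static TangoConfig learningConfig(Tango tango) {
            TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
            config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
            config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
            return config;
        }

        // Pass 2: load a previously saved ADF so the device can localize in it.
        public static TangoConfig localizationConfig(Tango tango) {
            TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
            config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
            ArrayList<String> adfUuids = tango.listAreaDescriptions();
            if (!adfUuids.isEmpty()) {
                // Load the most recently saved area description.
                config.putString(TangoConfig.KEY_STRING_AREADESCRIPTION,
                        adfUuids.get(adfUuids.size() - 1));
            }
            return config;
        }

        // Poses of the device relative to the learned area description.
        // Once a pose with statusCode == TangoPoseData.POSE_VALID arrives for
        // this pair, the device has relocalized in the ADF.
        public static TangoCoordinateFramePair adfFramePair() {
            return new TangoCoordinateFramePair(
                    TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
                    TangoPoseData.COORDINATE_FRAME_DEVICE);
        }
    }

The idea is that your augmentations (routes, labels, meshes) get stored in ADF coordinates, so once a device relocalizes against the same ADF they show up in the right place for every user.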
We did a proof of concept for indoor navigation that can quickly be set up with Tango. Check out our barcamp talk at Droidcon London (you need a Skills Matter account to watch it, though).
http://uk.droidcon.com/skillscasts/9311-indoor-navigation-with-google-tango