I'm going to run a project to reconstruct my room with a Kinect.
In what format will the 3D reconstructed view be saved?
Could it be saved as or converted to 3DM, 3DS, MAX, OBJ, etc.?
Thanks!
You can easily save 3d coordinates to the PLY format. Here's a basic example using ofxKinect:
void exportPlyCloud(string filename, ofMesh& cloud) {
    ofFile ply;
    // open in binary mode, since we write raw vertex bytes below
    if (ply.open(filename, ofFile::WriteOnly, true)) {
        vector<ofVec3f>& surface = cloud.getVertices();
        // count the valid points first so the header matches the data
        size_t vertexCount = 0;
        for (size_t i = 0; i < surface.size(); i++) {
            if (surface[i].z != 0) vertexCount++;
        }
        // write the header
        ply << "ply" << endl;
        ply << "format binary_little_endian 1.0" << endl;
        ply << "element vertex " << vertexCount << endl;
        ply << "property float x" << endl;
        ply << "property float y" << endl;
        ply << "property float z" << endl;
        ply << "end_header" << endl;
        // write the vertices as raw bytes, skipping invalid (z == 0) points
        for (size_t i = 0; i < surface.size(); i++) {
            if (surface[i].z != 0) {
                ply.write((char*) &surface[i], sizeof(ofVec3f));
            }
        }
    }
}
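If you're not using openFrameworks, the same export is easy in plain C++. Here's a minimal, self-contained sketch of the same idea (the `Point3f` struct and `writeBinaryPly` name are just for illustration; it also assumes a little-endian host, matching the declared PLY format):

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Minimal stand-in for ofVec3f: three tightly packed floats.
struct Point3f { float x, y, z; };

// Write a binary little-endian PLY file, skipping invalid (z == 0) points.
// The vertex count in the header is computed from the points actually
// written, so the header and body always agree.
void writeBinaryPly(const std::string& filename,
                    const std::vector<Point3f>& pts) {
    std::vector<Point3f> valid;
    for (const Point3f& p : pts)
        if (p.z != 0) valid.push_back(p);

    std::ofstream ply(filename, std::ios::binary);
    ply << "ply\n"
        << "format binary_little_endian 1.0\n"
        << "element vertex " << valid.size() << "\n"
        << "property float x\n"
        << "property float y\n"
        << "property float z\n"
        << "end_header\n";
    // raw float bytes; assumes the host is little-endian
    for (const Point3f& p : valid)
        ply.write(reinterpret_cast<const char*>(&p), sizeof(Point3f));
}
```

MeshLab and most other tools will open a file written this way directly.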
You can then use MeshLab to process/stitch PLY files and export them to another format such as OBJ. Related to openFrameworks, you can find a few handy examples, including the PLY export above, in this workshop.
Saving to PLY solves only part of the problem: you'd still need to stitch the clouds manually, which can be time-consuming. You would need something like SLAM (Simultaneous Localization and Mapping) or other reconstruction algorithms to help stitch things together. You can find a nice collection of algorithms on OpenSLAM.
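To give a feel for what "stitching" means at its very simplest, here's a toy sketch (the `Point3f` struct and `alignByCentroid` name are made up for illustration): it aligns two clouds by matching their centroids, which only works if the two scans differ by a pure translation. Real registration (ICP, SLAM) also estimates rotation and copes with partial overlap, which is why you want a library for it:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point3f { float x, y, z; };

// Mean position of a cloud.
Point3f centroid(const std::vector<Point3f>& c) {
    Point3f s{0, 0, 0};
    for (const Point3f& p : c) { s.x += p.x; s.y += p.y; s.z += p.z; }
    float n = static_cast<float>(c.size());
    return {s.x / n, s.y / n, s.z / n};
}

// Toy "stitch": estimate the translation mapping cloud b onto cloud a
// (assuming the same scene, shifted, with no rotation) and apply it.
std::vector<Point3f> alignByCentroid(const std::vector<Point3f>& a,
                                     std::vector<Point3f> b) {
    Point3f ca = centroid(a), cb = centroid(b);
    float dx = ca.x - cb.x, dy = ca.y - cb.y, dz = ca.z - cb.z;
    for (Point3f& p : b) { p.x += dx; p.y += dy; p.z += dz; }
    return b;
}
```

The SLAM and reconstruction packages below do this estimation robustly, frame after frame, with full 6-DoF poses instead of a single translation.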
Now, depending on your level of comfort with coding, there are a few options to help with that. I also recommend having a look at the RGBDemo prebuilt software, which has a reconstruction feature. It requires no coding, unless you want to dig in (it's open source).
With a bit of coding you can also do reconstruction using the Point Cloud Library (PCL). It also includes an implementation of KinectFusion.
If you're using the Microsoft Kinect SDK, Kinect Fusion was integrated into Kinect SDK 1.7.
You might also find this post interesting: Kinect Fusion inside AutoCAD.