I am working on a project where I need to give a small humanoid robot (a Nao bot) depth perception. I am planning on wiring a Kinect into the bot's forehead and integrating it with the robot's current operating and guidance system (the default system, called OpenNAO), which runs on Linux and relays to the bot over Wi-Fi.
Right now I am fumbling over which software to use. I have looked at the Point Cloud Library (PCL), which I see is for processing the actual depth data; OpenNI, which is described as an API framework that helps applications access natural-interaction devices such as the Kinect; and then there is the official Kinect SDK. I'm just not sure how they all fit together.
Which of these libraries/frameworks do I need to integrate Kinect into the robot's operating system?
I would suggest you go with OpenNI + PCL.
You are right that PCL is a data-processing library. It is generally very well documented, and it already has an interface to OpenNI: http://pointclouds.org/documentation/tutorials/openni_grabber.php
OpenNI is the device-driver layer; that is, it pulls data from the Kinect, and PCL reads from it through that interface. OpenNI actually comes in two parts: the OpenNI framework itself, and the driver for the particular sensor you use, in your case the Kinect (this is called the PrimeSense sensor module). Both need to be installed separately from PCL. Some Linux distributions ship them prepackaged; if yours does not, you can install from source: http://openni.org/Downloads.aspx
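To give a concrete idea of how the two pieces fit together, here is a minimal sketch along the lines of the PCL tutorial linked above: PCL's OpenNIGrabber opens the Kinect through the OpenNI driver and hands each frame to your callback as a ready-made point cloud. It assumes PCL (with OpenNI support) is installed and a Kinect is plugged in; the callback name is just illustrative.

```cpp
#include <pcl/io/openni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <boost/bind.hpp>
#include <boost/function.hpp>

// Called by PCL every time OpenNI delivers a new Kinect frame,
// already converted into a point cloud (one XYZ point per depth pixel).
void cloud_cb (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
{
  // Process the cloud here, e.g. obstacle detection for the Nao.
}

int main ()
{
  // OpenNIGrabber wraps the OpenNI driver; the default constructor
  // opens the first connected OpenNI device (here, the Kinect).
  pcl::Grabber* grabber = new pcl::OpenNIGrabber ();

  boost::function<void (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&)> f =
    boost::bind (&cloud_cb, _1);

  grabber->registerCallback (f);
  grabber->start ();   // frames now stream to cloud_cb in the background
  // ... run until done ...
  grabber->stop ();
  delete grabber;
  return 0;
}
```

So your own code never talks to OpenNI directly: you install the OpenNI framework plus the PrimeSense module so the device is visible, and then everything you write sits on the PCL side.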
In theory the Kinect SDK could do the same job as OpenNI, but PCL's grabber is built on OpenNI, and the official Kinect SDK only runs on Windows, so it is not an option on the Nao's Linux-based system anyway.
I hope this is helpful. Someone more familiar with the Nao might be able to shed more light.
Best wishes
Damien
EDIT: