I know this question has been asked before, but mostly back when both OpenNI and libfreenect were still under active development. My questions are:
1. What state are they in now?
2. What are the differences between the two (pros, cons, and anything else)?
3. Specifically for skeleton tracking, which is better and gives more data about the skeleton? (For example, the Microsoft SDK provides data for 20 joints; is it the same in these two, more, or fewer?)
libfreenect is mainly a driver which exposes the Kinect device's features:
- depth stream
- IR stream
- color (RGB) stream
- motor control
- LED control
- accelerometer
It does not provide any advanced processing features like scene segmentation, skeleton tracking, etc.
On the other hand, OpenNI allows generic access to the Kinect's features (mainly the image streams), but also provides rich processing features such as:
- scene segmentation
- skeleton tracking
- hand detection and tracking
- gesture recognition
- user interface elements
etc. However, it offers no low-level control of device features like the motor, LED, or accelerometer.
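Regarding your third question: if I recall correctly, the NITE middleware typically used with OpenNI tracks 15 skeleton joints, versus the 20 exposed by the Microsoft Kinect SDK (and libfreenect tracks none, as noted above). A rough sketch of the difference, assuming the standard joint sets of NITE 1.x and Kinect SDK 1.x (the joint names here are informal labels, not the libraries' actual identifiers):

```python
# Informal comparison of skeleton joint sets (assumption: NITE 1.x vs Kinect SDK 1.x).
# NITE (OpenNI middleware) tracks 15 joints:
NITE_JOINTS = [
    "head", "neck", "torso",
    "left_shoulder", "left_elbow", "left_hand",
    "right_shoulder", "right_elbow", "right_hand",
    "left_hip", "left_knee", "left_foot",
    "right_hip", "right_knee", "right_foot",
]

# Kinect SDK 1.x tracks 20 joints; it adds a spine and shoulder/hip center,
# and splits the extremities into wrist/hand and ankle/foot:
KINECT_SDK_JOINTS = [
    "hip_center", "spine", "shoulder_center", "head",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
    "shoulder_right", "elbow_right", "wrist_right", "hand_right",
    "hip_left", "knee_left", "ankle_left", "foot_left",
    "hip_right", "knee_right", "ankle_right", "foot_right",
]

print(len(NITE_JOINTS), len(KINECT_SDK_JOINTS))  # 15 20
```

In practice the extra Kinect SDK joints mostly add finer detail at the wrists, ankles, and spine; the overall limb topology is the same.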
As opposed to libfreenect, which AFAIK works only with the Kinect sensor, OpenNI works with the Kinect but also with other sensors such as the Asus Xtion Pro, Carmine, etc.
You've mentioned the Kinect SDK. It's good to bear in mind that there are multiple Kinect sensors:
- Kinect for Xbox
- Kinect for Windows
The Kinect for Windows sensor, for example, supports Near Mode and has a longer range. I don't know how the skeleton tracking differs. Also, there is an MS Kinect-OpenNI bridge project, and OpenNI2 plays nicely with the Kinect.