python, kinect, pykinect

Access Kinect Depth Data with PyKinect


I'm currently working on a project where I need to access and process depth data using the PyKinect library.

What I want to do is define a depth threshold at which I'll do some image segmentation, but since I'm new to PyKinect and still don't quite know where to look for resources, I don't know how to access that data and get the values.

I've also tried the freenect library, but I can't get it to work.

Can anyone tell me how to do that or redirect me to some kind of documentation?


Solution

  • I have just created a snippet on my Bitbucket account to visualize a depth image with PyKinect and Pygame. Here is the code:

    import thread
    import pygame
    from pykinect import nui
    
    DEPTH_WINSIZE = 320,240
    
    screen_lock = thread.allocate()
    screen = None
    
    tmp_s = pygame.Surface(DEPTH_WINSIZE, 0, 16)
    
    
    def depth_frame_ready(frame):
        with screen_lock:
            # Copy the raw 16-bit depth frame into the temporary surface
            frame.image.copy_bits(tmp_s._pixels_address)
            # Reduce the 16-bit depth values to 8 bits so they map onto the grayscale palette
            arr2d = (pygame.surfarray.pixels2d(tmp_s) >> 7) & 255
            pygame.surfarray.blit_array(screen, arr2d)
    
            pygame.display.update()
    
    
    def main():
        """Initialize and run the game."""
        pygame.init()
    
        # Set up the Pygame display: an 8-bit window with a grayscale palette
        global screen
        screen = pygame.display.set_mode(DEPTH_WINSIZE, 0, 8)
        screen.set_palette(tuple([(i, i, i) for i in range(256)]))
        pygame.display.set_caption('PyKinect Depth Map Example')
    
        with nui.Runtime() as kinect:
            kinect.depth_frame_ready += depth_frame_ready   
            kinect.depth_stream.open(nui.ImageStreamType.Depth, 2, nui.ImageResolution.Resolution320x240, nui.ImageType.Depth)
    
            # Main game loop
            while True:
                event = pygame.event.wait()
    
                if event.type == pygame.QUIT:
                    break
    
    if __name__ == '__main__':
        main()
    

    EDIT: The above code shows how to convert the depth data to an 8-bit representation (so that it can easily be drawn as a grayscale image). But if you want to work with the actual depth values, you need to know how they are structured.

    Using the Microsoft Kinect SDK (on which PyKinect is based), a single depth pixel is composed of 16 bits. The 3 least significant bits represent the player index, while I have not fully understood the meaning of the most significant one... But let's say that we need to remove the last 3 bits and the first one. For instance, this is what you need to do for each pixel (taken from this question):

    0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 0 - 16 bits number
    0 1 1 0 0 0 1 0 0 0 1 1 1       - 13 bits number
      1 1 0 0 0 1 0 0 0 1 1 1       - 12 bits number
    
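    For example, here is the same manipulation applied to the single 16-bit value from the diagram, just as a quick sanity check (plain Python, no Kinect needed; the variable names are mine):

    raw = 0b0110001000111000         # the 16-bit pixel from the diagram
    depth_mm = (raw >> 3) & 0xFFF    # drop the 3 player-index bits, keep the 12 depth bits
    # depth_mm == 0b110001000111 == 3143, i.e. roughly 3.1 m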

    The same operation (removing the last 3 bits and the first one) can be applied to the whole arr2d array with two bitwise operations. Because it is a NumPy array, you can proceed as follows:

    def depth_frame_ready(frame):
        frame.image.copy_bits(tmp_s._pixels_address)
    
        arr2d = (pygame.surfarray.pixels2d(tmp_s) >> 3) & 4095
        # arr2d[x,y] is the actual depth measured in mm at (x,y)
    
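    The 3 bits discarded above are the player index mentioned earlier. If you also need it, the stream has to deliver player-index data in the first place (as far as I know, that means opening it with nui.ImageType.DepthAndPlayerIndex instead of nui.ImageType.Depth and having skeleton tracking enabled); extracting it is then just another mask:

    player_arr2d = pygame.surfarray.pixels2d(tmp_s) & 0b111
    # player_arr2d[x,y] is 0 where no player is detected at (x,y),
    # otherwise the index of the detected player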

    Then, to display this data you will probably need an 8-bit representation. To get it:

    arr2d >>= 4
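
    Finally, for the depth threshold / segmentation mentioned in the question: once arr2d holds depths in millimetres, thresholding is just a NumPy comparison. A minimal sketch (the 800 mm and 2000 mm limits are placeholder values to tune for your scene, and segment_by_depth is a name I made up):

    import numpy as np

    def segment_by_depth(depth_mm, near=800, far=2000):
        # Boolean mask: True where the depth lies inside the [near, far] band (in mm)
        return (depth_mm >= near) & (depth_mm <= far)

    # Quick check with fake depth values (in mm)
    fake_depth = np.array([[500, 900], [1500, 3000]], dtype=np.uint16)
    mask = segment_by_depth(fake_depth)
    # mask == [[False, True], [True, False]]

    # Inside depth_frame_ready you would apply it to arr2d before the 8-bit conversion, e.g.
    # arr2d[~segment_by_depth(arr2d)] = 0   # zero out everything outside the band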