unity-game-engine google-project-tango

Sync depth and color


I am generating a depth map from the point cloud points, and to fill in the missing data I want to look at the closest matching color frame. I am able to generate a depth image (I generate it when OnDepthAvailable is called), and I see that the TangoAR unity example gets the color frames when OnExperimentalTangoImageAvailable is called.

This is from the TangoAR unity example:

    /// <summary>
    /// This will be called when a new frame is available from the camera.
    ///
    /// The first scan-line of the color image is reserved for metadata instead of image pixels.
    /// </summary>
    /// <param name="cameraId">Camera identifier.</param>
    public void OnExperimentalTangoImageAvailable(TangoEnums.TangoCameraId cameraId)
    {
        if (cameraId == TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR)
        {
            m_screenUpdateTime = VideoOverlayProvider.RenderLatestFrame(TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR);

            // Rendering the latest frame changes a bunch of OpenGL state.  Ensure Unity knows the current OpenGL state.
            GL.InvalidateState();
        }
    }

However, I want the frame right after the depth frame, not the latest available frame.

How can I sync the two as closely as possible? Looking at the C RGB-depth sync example didn't help me. I understand that depth and color use the same camera, and that it can't do both at the same time (1 depth frame for every 4 color frames).


Solution

  • There are multiple solutions to this problem. One of them is, as bashbug suggested, to buffer the frames and look up the texture with the closest timestamp (a sketch of this appears after the list below).

    What I would suggest does not require buffering, and it is Unity-specific:

    1. Turn on the mesh renderer on the PointCloud prefab, and set the PointCloud object to a dedicated layer.

    2. Render the PointCloud with the shader suggested in this doc, but make sure you turn the color mask off so the point cloud doesn't render to the color buffer.

    3. Create a RenderTexture with the depth format (as the previous doc suggested); steps 3-6 are sketched in the second code block after this list.

    4. Create a camera with the same intrinsics (projection matrix) as the TangoARCamera, and have it render only the PointCloud's layer. This camera will be our depth camera (it renders only the PointCloud).

    5. Set the depth camera as a child object of the TangoARCamera object with no offset. This ensures the camera has exactly the same motion (position and orientation) as the ARCamera.

    6. Create another RenderTexture with the RGBA format, and assign it to the TangoARCamera to get the color image.

    7. Now you have two textures: depth and color. They are guaranteed to be synced, because in the Unity SDK the PointCloud prefab actually projects points into world space, and the TangoARCamera's transform is updated based on the color camera timestamp.
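
    Here is a minimal sketch of the buffering approach, assuming the Tango Unity SDK's ITangoVideoOverlay and ITangoDepth callbacks and their timestamp fields (TangoUnityImageData.timestamp and TangoUnityDepth.m_timestamp); verify those names against your SDK version, as the class itself is only illustrative:

        using System.Collections.Generic;
        using System.Linq;
        using Tango;
        using UnityEngine;

        public class DepthColorMatcher : MonoBehaviour, ITangoVideoOverlay, ITangoDepth
        {
            // Roughly two depth periods' worth of color frames (1 depth : 4 color).
            private const int BUFFER_SIZE = 8;
            private readonly Queue<TangoUnityImageData> m_colorFrames = new Queue<TangoUnityImageData>();

            public void Start()
            {
                // Register this component for the color and depth callbacks.
                FindObjectOfType<TangoApplication>().Register(this);
            }

            public void OnTangoImageAvailableEventHandler(TangoEnums.TangoCameraId cameraId,
                                                          TangoUnityImageData imageBuffer)
            {
                if (cameraId != TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR)
                {
                    return;
                }

                // NOTE: the SDK may reuse imageBuffer, so a real implementation
                // should deep-copy the pixel data before queuing it.
                m_colorFrames.Enqueue(imageBuffer);
                while (m_colorFrames.Count > BUFFER_SIZE)
                {
                    m_colorFrames.Dequeue();
                }
            }

            public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
            {
                if (m_colorFrames.Count == 0)
                {
                    return;
                }

                // Pick the buffered color frame closest in time to this depth frame.
                TangoUnityImageData closest = m_colorFrames
                    .OrderBy(f => System.Math.Abs(f.timestamp - tangoDepth.m_timestamp))
                    .First();

                // ... use 'closest' to fill in the missing depth data ...
            }
        }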
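
    And a sketch of steps 3-6, assuming a layer named "PointCloud" already exists and the script is attached to the TangoARCamera object (the texture sizes here are placeholders; match them to the color camera resolution in practice):

        using UnityEngine;

        public class DepthCameraSetup : MonoBehaviour
        {
            void Start()
            {
                Camera arCamera = GetComponent<Camera>();

                // Step 3: a depth-format RenderTexture for the point cloud pass.
                RenderTexture depthTexture = new RenderTexture(
                    Screen.width, Screen.height, 24, RenderTextureFormat.Depth);

                // Step 6: an RGBA RenderTexture assigned to the TangoARCamera
                // so it renders the color image off-screen.
                RenderTexture colorTexture = new RenderTexture(
                    Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
                arCamera.targetTexture = colorTexture;

                // Steps 4-5: a child camera with the same intrinsics that renders
                // only the PointCloud layer into the depth texture. Parenting it
                // with no offset keeps its motion identical to the ARCamera's.
                GameObject depthCameraObject = new GameObject("DepthCamera");
                depthCameraObject.transform.SetParent(arCamera.transform, false);
                Camera depthCamera = depthCameraObject.AddComponent<Camera>();
                depthCamera.CopyFrom(arCamera);
                depthCamera.projectionMatrix = arCamera.projectionMatrix;
                depthCamera.cullingMask = 1 << LayerMask.NameToLayer("PointCloud");
                depthCamera.targetTexture = depthTexture;
            }
        }

    Because both cameras share one transform and the same projection matrix, each rendered depth texture lines up with the color texture from the same frame.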