Is there an example somewhere that shows how to visualise the RGBD information coming out of the RGBD sensor? Also, is there a way of getting a more detailed rendering of the scene than meshcat? I am happy using meshcat to debug and analyse kinematics and dynamics; however, if I want to use the RGBD sensor information to train a model, I was hoping for something more realistic. Any pointers?
First, some good news: a tutorial is currently in development to help guide you in producing better images.

The better news is that, even without the tutorial, the functionality is already in place for producing better images. You'll want to do two things to make pretty pictures:

1. Use `RenderEngineVtk`.
2. Configure it appropriately.
Finally, to see the images you have two primary means:

1. Set `backend = GLX` in your `RenderEngineVtkParams` and set your `CameraConfig` to have `show_rgb = true`. This will cause a window to pop up which displays the rendering frame buffer.
2. Add an `RgbdSensor` to your diagram and save its output images; route depth and label images through a `ColorizeDepthImage` or `ColorizeLabelImage` system, respectively, before writing to disk.