I am using the Intel RealSense L515. I want to align the depth and color images into two different numpy arrays so that their resolutions match.
Here is my code
import pyrealsense2 as rs
import numpy as np
pc = rs.pointcloud()
pipe = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
pipe.start(config)
try:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # These are the two different frames to align
    depth_image = np.asanyarray(depth.get_data())
    color_image = np.asanyarray(color.get_data())
    # The new color image
    # color_image_with_same_resolution = ?
finally:
    pipe.stop()
The color image has a greater resolution than the depth image. What would be the most efficient way to resize the color image so that it has the same dimensions as the depth image, with each color pixel mapped to the correct corresponding depth pixel? Speed and efficiency are critical in this scenario.
Essentially, I want to save two separate arrays that can be used in another program. The arrays must have matching dimensions, like this: (Z - Array 640x480x1) and (RGB - Array 640x480x3).
The solution is very simple: the pyrealsense2 library has a built-in rs.align processing block, and it is also very fast!
import pyrealsense2 as rs
import numpy as np

# Align all other streams to the depth stream's 640x480 viewport
align = rs.align(rs.stream.depth)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 960, 540, rs.format.rgb8, 30)
profile = pipeline.start(config)
try:
    frameset = pipeline.wait_for_frames()
    frameset = align.process(frameset)
    # The color frame is now reprojected to the depth frame's resolution
    aligned_color_frame = frameset.get_color_frame()
    color_image = np.asanyarray(aligned_color_frame.get_data())        # (480, 640, 3)
    depth_image = np.asanyarray(frameset.get_depth_frame().get_data()) # (480, 640)
finally:
    pipeline.stop()
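Once the frames are aligned, saving the two arrays for the other program is plain numpy. A minimal sketch, using synthetic stand-in arrays of the shapes you get after alignment (note that numpy orders dimensions as rows x columns, so a 640x480 image becomes shape (480, 640)); the filename frames.npz is just an example:

```python
import numpy as np

# Synthetic stand-ins for the aligned frames: depth is uint16
# (z16 format), color is uint8 RGB at the same pixel grid
depth = np.zeros((480, 640), dtype=np.uint16)
color = np.zeros((480, 640, 3), dtype=np.uint8)

# Sanity check: the pixel grids must match exactly
assert depth.shape == color.shape[:2]

# Save both arrays into one compressed file
np.savez_compressed("frames.npz", depth=depth, color=color)

# The other program loads them back by key
data = np.load("frames.npz")
print(data["depth"].shape, data["color"].shape)
```

savez_compressed keeps both arrays together and avoids juggling two files; if load speed matters more than disk space, np.savez (uncompressed) is the faster option.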