python opencv ros point-cloud-library lidar

Generate an image from an unorganized scan-pattern LiDAR's point cloud data


I hope you guys are doing well. I have a LiDAR, a Livox Mid-70, which has a scan pattern like this: scan_pattern. The pattern depends on time, and the whole scene is only built up over the course of a scan.

I used ROS to fetch the data from a particular topic and create a NumPy array:

import numpy as np
import ros_numpy as rNp

def callback(data):

    # convert the PointCloud2 message into a structured NumPy array
    pc = rNp.numpify(data)
    points = np.zeros((pc.shape[0], 4))
    points[:,0]=pc['x']
    points[:,1]=pc['y']
    points[:,2]=pc['z']
    points[:,3]=pc['intensity']
    po = np.array(points, dtype=np.float32)
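For reference, the callback is subscribed to the LiDAR topic roughly like this (the topic name is taken from the rosrun command below; the node name is just an example):

import rospy
from sensor_msgs.msg import PointCloud2

rospy.init_node('livox_to_numpy')                        # example node name
rospy.Subscriber('/livox/lidar', PointCloud2, callback)  # same topic as in the rosrun command below
rospy.spin()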

Then I create an (x, y) array containing the X and Y coordinates of that point cloud data and try to scale it like this:

p = (arr/np.max(arr)*255).astype(np.uint8) #arr = (x, y) numpy array

But unfortunately it doesn't give me any understandable picture.

Then I tried the ROS command:

rosrun pcl_ros convert_pointcloud_to_image input:=/livox/lidar output:=/img

but the error message is:

[ERROR] [1651119689.192807544]: Input point cloud is not organized, ignoring!

I saw a technique in MATLAB, i.e. pcorganize, but to use it I need to give it some parameters, such as:

params = lidarParameters(sensorName,horizontalResolution)
params = lidarParameters(verticalResolution,verticalFoV,horizontalResolution)
params = lidarParameters(verticalBeamAngles,horizontalResolution)
params = lidarParameters(___,HorizontalFoV=horizontalFoV)

But this LiDAR doesn't have a fixed horizontal or vertical resolution or fixed beam angles, so maybe I can't use this function to organize this point cloud data.

My questions:

  1. How do I organize this unorganized point cloud data and create an image from it?
  2. Is it possible to view this image with cv2.imshow()?

Solution

  • Answers:

    1. Buffer the PointCloud2 messages from the LiDAR in a node you write yourself (preferably Python), if it's needed (only when the scan pattern needs time to cover a whole scene, for example 2 s for a complete scan). Take a look at Velodyne LiDARs: they don't need it, because they are plane-scanning LiDARs, i.e. they give a complete, detailed image from the first second of running. As I mentioned, buffering is only necessary when your LiDAR has a Lissajous scan pattern (https://www.nature.com/articles/s41598-017-13634-3) or another pattern which needs time to do a complete scene scan.

      Case 1: the scan pattern needs time: use a custom buffering period (1 s, 2 s or 4 s, ..., chosen to match your attached scan pattern). After that you have a whole scan.

      Case 2: the scan pattern doesn't need any buffering time: you get a rich, detailed scan from the beginning, so buffering normally isn't necessary. (A minimal Python sketch of such a buffering node is shown after this answer list.)

      In the next step you should use this node: https://github.com/mjshiggins/ros-examples/blob/master/src/lidar/src/lidar_node.cpp

      This node takes your PointCloud2 message and generates an image from a bird's-eye view. I used this node in my thesis and changed a few things in it for easier understanding and a better solution. I can share this file with you, or you can explore the node yourself and ask for help from time to time, for example with changing the LiDAR position, the cell_resolution (the zoom level of the top view in your image), the image colors and so on. I also added coding of the image color channels with the data from the LiDAR: intensity, density and height (the original code only codes height). A rough Python sketch of this bird's-eye-view projection is also given after this list.

    2. With OpenCV you can look at this image in an opened window, or you can publish it as an image topic and visualize it in RViz (see the last sketch below).
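Below are minimal Python sketches of the three steps. They are not the linked C++ node, just illustrations of the idea; the topic name /livox/lidar comes from the question, and everything else (node names, buffer period, image ranges) is an assumption you will need to adapt.

First, buffering: collect the points from several PointCloud2 messages until the scan pattern has had enough time to cover the scene, then hand the accumulated array on for processing:

import numpy as np
import ros_numpy
import rospy
from sensor_msgs.msg import PointCloud2

BUFFER_SECONDS = 2.0   # assumed buffering period; tune it to your scan pattern (1 s, 2 s, 4 s, ...)

buffered = []          # list of (N, 4) arrays collected during the current window
window_start = None    # timestamp of the first message in the current window

def callback(msg):
    global window_start
    if window_start is None:
        window_start = msg.header.stamp

    pc = ros_numpy.numpify(msg)
    pts = np.zeros((pc.shape[0], 4), dtype=np.float32)
    pts[:, 0] = pc['x']
    pts[:, 1] = pc['y']
    pts[:, 2] = pc['z']
    pts[:, 3] = pc['intensity']
    buffered.append(pts)

    if (msg.header.stamp - window_start).to_sec() >= BUFFER_SECONDS:
        full_scan = np.concatenate(buffered)   # one whole scene worth of points
        process_scan(full_scan)                # e.g. build the bird's-eye-view image below
        del buffered[:]                        # reset the buffer for the next window
        window_start = None

def process_scan(points):
    rospy.loginfo('buffered %d points', points.shape[0])

rospy.init_node('livox_buffer')   # example node name
rospy.Subscriber('/livox/lidar', PointCloud2, callback)
rospy.spin()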
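Second, the bird's-eye-view projection. The linked node is written in C++; the sketch below only shows the same idea in Python: drop the points into a top-view grid with a chosen cell_resolution and code the color channels with height, intensity and point density. The ranges and resolution are placeholders:

import numpy as np

def birds_eye_image(points, cell_resolution=0.05, x_range=(0.0, 20.0), y_range=(-10.0, 10.0)):
    # points: (N, 4) array of x, y, z, intensity; returns an 8-bit BGR image
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # keep only the points inside the chosen top-view window
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]

    height = int((x_range[1] - x_range[0]) / cell_resolution)   # image rows (x = forward)
    width = int((y_range[1] - y_range[0]) / cell_resolution)    # image columns (y = left/right)

    # map metric coordinates to pixel indices (flip rows so "forward" points up in the image)
    row = height - 1 - ((x - x_range[0]) / cell_resolution).astype(np.int32)
    col = ((y - y_range[0]) / cell_resolution).astype(np.int32)

    img = np.zeros((height, width, 3), dtype=np.float32)
    counts = np.zeros((height, width), dtype=np.float32)

    # channel coding: blue = height, green = intensity, red = point density
    np.maximum.at(img[:, :, 0], (row, col), z)          # highest point per cell (simplified: assumes z >= 0)
    np.maximum.at(img[:, :, 1], (row, col), intensity)  # strongest return per cell
    np.add.at(counts, (row, col), 1.0)
    img[:, :, 2] = counts

    # normalise every channel to 0..255 independently
    for c in range(3):
        if img[:, :, c].max() > 0:
            img[:, :, c] *= 255.0 / img[:, :, c].max()
    return img.astype(np.uint8)

process_scan from the buffering sketch above could simply call birds_eye_image(points) and pass the result on to the display/publish step below.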
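Third, viewing the result. cv2.imshow opens a normal OpenCV window; with cv_bridge the same image can also be published as a sensor_msgs/Image topic and viewed in RViz (the topic name /img mirrors the rosrun command in the question):

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
image_pub = rospy.Publisher('/img', Image, queue_size=1)  # assumes a node has already been initialised

def show_and_publish(bev_image):
    # show the image in an OpenCV window...
    cv2.imshow("bird's eye view", bev_image)
    cv2.waitKey(1)
    # ...and publish it so it can be displayed in RViz as well
    image_pub.publish(bridge.cv2_to_imgmsg(bev_image, encoding='bgr8'))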