I'm working on a project where I have to detect objects (small cars on a railroad) from two cameras with non-overlapping bird's-eye views (see the two images below).
As you can see, the cars are detected and their centroid coordinates are returned.
However, I'm trying to transform the two views onto one image that represents only the railroad on which the cars drive.
Therefore, for the simulation, I created the destination image, which contains only the railroad (the black trajectory shown in the image above), as follows:
After doing some research I came across an OpenCV function, cv2.findHomography(), which finds the homography matrix between two planes.
The two images that come from the two cameras each have a resolution of 1280x720. The destination image has a resolution of 1440x480.
My code is as follows:
import numpy as np
import cv2

def Perspective_transf(src_point, h):
    # Build the homogeneous column vector [x, y, 1]^T from the 2D point
    a = np.array([src_point]).transpose()
    a = np.vstack((a, np.array(1)))
    # Apply the homography
    a_transformed_homo = np.dot(h, a)
    # Normalize by the scale coordinate to get Euclidean coordinates
    scale_factor = a_transformed_homo[2][0]
    a_transformed_euk = np.divide(a_transformed_homo, scale_factor)
    return a_transformed_euk
# Source points: from the camera image; the same for both cameras since they share the same resolution
pts_src=np.array([[0,0],[1280,0],[720,1280],[0,720],[640,360]])
#destination correspondences of the pts_src on the destination image (for camera 1)
pts_dst1=np.array([[0,0],[720,0],[720,480],[0,480],[360,240]])
#destination correspondences of the pts_src on the destination image (for camera2)
pts_dst2=np.array( [[720,0],[1440,0],[1440,480],[720,480],[1080,240]])
#homography between the first camera image plane and the destination image
h1, status1 = cv2.findHomography(pts_src, pts_dst1)
#homography between the second camera image plane and the destination image
h2, status1 = cv2.findHomography(pts_src, pts_dst2)
Now that I have estimated the homographies, I can project (transform) every detected centroid onto the destination image using them.
When I run my code, I get the following result:
As you can see, the trajectory created by transforming the detected centroids of the car (driving from one camera's field of view into the other's) is not aligned with the defined trajectory that simulates the railroad, and is rotated relative to the image.
So what am I doing wrong, and why do my results look like this?
Thanks in advance,
Khaled Jbaili
I solved it just by adding more pairs of points. This way the back-projection error is minimized.