I have a setup of 6 cameras (60° horizontal and vertical FoV) sharing the same center and looking outwards, with a horizontal rotation of 30 degrees. I project the images onto a sphere centered on the cameras; see the sketch below.
I am stitching the images together roughly following this diagram from OpenCV, but because I know the orientation of the cameras I skip the "registration" part and (for now) omit the blending of the images.
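Since the orientations are known, each image is warped onto the common sphere with a fixed rotation per camera. A minimal sketch of that step, assuming OpenCV's cv::detail::SphericalWarper, a placeholder CV_32F intrinsic matrix K and pure-yaw rotations (not my actual calibration or warping code):

#include <cmath>
#include <opencv2/core.hpp>
#include <opencv2/stitching/detail/warpers.hpp>

// Rotation about the vertical (y) axis by yaw_rad.
cv::Mat yawRotation(float yaw_rad)
{
    float c = std::cos(yaw_rad), s = std::sin(yaw_rad);
    cv::Mat R = (cv::Mat_<float>(3, 3) <<
                  c, 0, s,
                  0, 1, 0,
                 -s, 0, c);
    return R;
}

// Warp one camera image (and a full mask) onto the sphere.
// The returned cv::Point is the top-left corner of the warped image in the
// panorama, i.e. what goes into the `corners` vector further down.
cv::Point warpToSphere(const cv::Mat& img, const cv::Mat& K, const cv::Mat& R,
                       float scale, cv::Mat& warped_img, cv::Mat& warped_mask)
{
    cv::detail::SphericalWarper warper(scale); // scale ~ focal length in pixels
    cv::Point corner = warper.warp(img, K, R, cv::INTER_LINEAR,
                                   cv::BORDER_CONSTANT, warped_img);
    cv::Mat mask(img.size(), CV_8U, cv::Scalar(255));
    warper.warp(mask, K, R, cv::INTER_NEAREST, cv::BORDER_CONSTANT, warped_mask);
    return corner;
}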
My problem is that the masks returned by the SeamFinder are nonsense. I think the pipeline itself should work, because with NoSeamFinder the results in the simulation look good, but I have tried every other SeamFinder OpenCV provides and the results look as shown below:
I used the stitching_detailed.cpp example from OpenCV to set up the pipeline, but I have no idea what the issue might be. I thought that maybe, because I am working within a simulation, the gradients in the images are too sharp to work with, so I applied a Gaussian blur, which did not change anything.
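For completeness, the blur was nothing more elaborate than this sketch (the 5x5 kernel is an arbitrary choice here):

#include <vector>
#include <opencv2/imgproc.hpp>

// Blur the warped images in place before seam finding to soften the
// hard, synthetic gradients. The kernel size is an arbitrary choice.
void blurWarpedImages(std::vector<cv::Mat>& warped_images_mat)
{
    for (cv::Mat& img : warped_images_mat)
        cv::GaussianBlur(img, img, cv::Size(5, 5), 0);
}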
I have some code snippets below and can post more on demand, but I want to avoid just dumping a whole load here.
I work with cv::Mat in the code and convert them to cv::UMat for the SeamFinder.
std::vector<cv::Mat> warped_images_mat; // the individual images, warped onto the sphere
std::vector<cv::UMat> warped_images_umat;
std::vector<cv::Mat> warped_masks_mat; // Masks of the warped images within the sphere image
std::vector<cv::UMat> warped_masks_umat;
std::vector<cv::Mat> warped_masks_fixed_mat; // These masks are calculated once and used to refresh the masks for the SeamFinder
std::vector<cv::UMat> warped_masks_fixed_umat;
std::vector<cv::Point> corners; // position of the imgs and masks in the sphere image
cv::Ptr<cv::detail::SeamFinder> seam_finder = cv::makePtr<cv::detail::GraphCutSeamFinder>(cv::detail::GraphCutSeamFinderBase::COST_COLOR_GRAD);
// within the ROS2 callback:
this->warped_masks_umat.clear(); // drop the masks modified by the previous seam search
for (size_t cam_idx = 0; cam_idx < 6; cam_idx++) {
    int cols = this->warped_images_mat.at(cam_idx).cols;
    int rows = this->warped_images_mat.at(cam_idx).rows;
    // std::cout << "rows: " << rows << "; cols: " << cols << std::endl;
    for (int col = 0; col < cols; col++)
        for (int row = 0; row < rows; row++)
            if (this->warped_masks_fixed_mat.at(cam_idx).at<uchar>(row, col) != 0) {
                // code to refresh warped image
            }
    // the SeamFinder works on floating-point images
    this->warped_images_mat.at(cam_idx).getUMat(cv::ACCESS_RW).convertTo(this->warped_images_umat.at(cam_idx), CV_32F);
    // restore the full mask before the next seam search
    this->warped_masks_umat.push_back(this->warped_masks_fixed_umat.at(cam_idx).clone());
}
seam_finder->find(this->warped_images_umat, this->corners, this->warped_masks_umat);
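For reference, a stripped-down, self-contained GraphCutSeamFinder call with the same interface; the two solid-colour images, their sizes and the 100 px overlap are made up purely for illustration:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/stitching/detail/seam_finders.hpp>

int main()
{
    std::vector<cv::UMat> images(2), masks(2);
    std::vector<cv::Point> corners(2);

    // Two synthetic images; GraphCutSeamFinder expects floating-point input.
    cv::UMat img0(400, 600, CV_8UC3, cv::Scalar(200, 50, 50));
    cv::UMat img1(400, 600, CV_8UC3, cv::Scalar(50, 200, 50));
    img0.convertTo(images[0], CV_32F);
    img1.convertTo(images[1], CV_32F);

    // Full masks, refined in place by find().
    masks[0] = cv::UMat(400, 600, CV_8U, cv::Scalar(255));
    masks[1] = cv::UMat(400, 600, CV_8U, cv::Scalar(255));

    // Top-left positions in the panorama: x = column, y = row.
    corners[0] = cv::Point(0, 0);
    corners[1] = cv::Point(500, 0); // 100 px horizontal overlap

    cv::Ptr<cv::detail::SeamFinder> sf =
        cv::makePtr<cv::detail::GraphCutSeamFinder>(
            cv::detail::GraphCutSeamFinderBase::COST_COLOR_GRAD);
    sf->find(images, corners, masks);
    return 0;
}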
I mixed up the conventions from OpenCV: the .x and .y of cv::Point were swapped. That worked for refreshing the incoming images, but it was the wrong way round for the SeamFinder.
I noticed this when I tried to reproduce an example from the comment I got and saw a distinctive shape in the masks. It now also works with the full masks, without cropping anything.
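In case someone else runs into this: cv::Point is (x, y) = (column, row), and the corners passed to the SeamFinder are the top-left positions of the warped images in the panorama. A sketch with made-up offsets:

#include <opencv2/core.hpp>

int col_offset = 120; // horizontal position of the warped image in the panorama
int row_offset = 40;  // vertical position of the warped image in the panorama

cv::Point corner_wrong(row_offset, col_offset); // what I had: x and y swapped
cv::Point corner_right(col_offset, row_offset); // what the SeamFinder expects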