What image processing code will align two images taken from different cameras at different scales?
In the attached figure, you can see a pair of confocal and widefield fluorescence images of the same neuronal sample. The sample was imaged with two different microscopy setups and therefore two different lenses: the confocal image was acquired at 10X magnification, while the widefield fluorescence image was acquired at 20X.
I’d like to use an automatic image processing method to align the images so I can accurately match corresponding neurons between them. The alignment doesn’t need to be perfect, but it should be reliable enough to establish neuron correspondence. We will validate the alignment initially by eye, i.e., human judgment.
I know this could be quite challenging, but what image processing algorithms will align the two images?
For example, an algorithm could estimate an affine transform from one image to the other based on matched image features.
EDIT: I am asking for a way to perform fully automatic alignment, with no manual input. In the previous version I added boxes that showed corresponding neurons just to illustrate that the images are of the same tissue.
Matching features pointed out:
In my testing with the two sample images, it seems like AKAZE may be sufficient for at least a first-order fit. I already had a project with OpenCV integrated, so I used that.
The three images below are a screenshot from a custom app. The top two images are
the confocal image, with colored circles drawn by my software and centered on (x, y) coordinates that I found matched your color annotation reasonably well
the fluorescence image, with colored circles on hand-picked (x, y) centers matching yours
The third image shows the points from the confocal image drawn as circles, along with those same points transformed from the confocal image to the fluorescence image and drawn as squares.
I used the (x, y) coordinates for the confocal and fluorescence images JUST to draw the graphics; I didn't use those coordinates to find the transform from the confocal image to the fluorescence image. That transform was found automatically using AKAZE features and OpenCV's findHomography(..) function.
https://docs.opencv.org/4.x/db/d70/tutorial_akaze_matching.html
The process is roughly the following:
Have OpenCV (or a similar library) that supports AKAZE and homography estimation loaded in your project.
Load confocal image as image 1
Load fluorescence image as image 2 -- it's okay that the image size is different
For testing, identify matching features in image 1 and image 2 -- these manually selected point sets are used solely for drawing graphics to check processing. You can have an arbitrary number of points.
Run AKAZE to find features in each image, match features between image 1 and image 2, and find the homography between image 1 and image 2 (see the sketch after this list)
Load an additional copy of the fluorescence image (image 2)
Draw circles (or whatever) for your image 2 manually selected points.
Draw rectangles (or whatever) for the image 1 manually selected points transformed via the homography from image 1 to image 2
Compare the manually selected points for the fluorescence image and the locations predicted via the homography mapping the confocal image to the fluorescence image
(Preferred) Given the initial, reasonable transform from image 1 to image 2, run localized normalized cross-correlation or some other local matching method within (say) each 100x100 pixel subregion to yield more accurate fits (see the refinement sketch below).
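Since my code is in Swift and tangled up with other projects, here's a minimal Python/OpenCV sketch of the same flow. The file names, ratio-test threshold, and example point coordinates are placeholders I've assumed, not values from my app; only the AKAZE + findHomography flow matches the steps above.

```python
import cv2
import numpy as np

# Placeholder file names -- substitute your own images.
img1 = cv2.imread("confocal.png", cv2.IMREAD_GRAYSCALE)      # image 1
img2 = cv2.imread("fluorescence.png", cv2.IMREAD_GRAYSCALE)  # image 2 (size may differ)

# Detect AKAZE keypoints and descriptors in each image.
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Match the binary descriptors with Hamming distance, then keep matches
# that pass a ratio test (0.8 is an arbitrary starting point).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.8 * n.distance]

# Estimate the homography from image 1 to image 2, using RANSAC to reject outliers.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Map hand-picked confocal (x, y) points into fluorescence coordinates for
# drawing and inspection -- these points play no part in estimating H.
confocal_points = np.float32([[120, 340], [410, 95]]).reshape(-1, 1, 2)  # example values
predicted = cv2.perspectiveTransform(confocal_points, H)
```

Drawing the circles and squares is then just cv2.circle and cv2.rectangle at the hand-picked and predicted coordinates on a copy of the fluorescence image.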
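For the preferred refinement step, one option (an assumption on my part, not something from my app) is OpenCV's matchTemplate with normalized cross-correlation: warp image 1 into image 2's frame, then for each predicted point take a small patch from the warped image and search a slightly larger window in image 2. The patch and search sizes below are placeholders.

```python
def refine_point(img1_warped, img2, pt, half_patch=50, search=30):
    """Refine a homography-predicted point with normalized cross-correlation.

    img1_warped -- confocal image already warped into fluorescence coordinates
    img2        -- fluorescence image
    pt          -- (x, y) predicted by the homography
    """
    x, y = int(round(pt[0])), int(round(pt[1]))
    template = img1_warped[y - half_patch:y + half_patch, x - half_patch:x + half_patch]
    window = img2[y - half_patch - search:y + half_patch + search,
                  x - half_patch - search:x + half_patch + search]
    if (template.size == 0 or window.shape[0] <= template.shape[0]
            or window.shape[1] <= template.shape[1]):
        return pt  # too close to the border; keep the homography's prediction
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(score)
    # best is the top-left of the best match within the window; if the patch
    # were already perfectly placed, best would equal (search, search).
    return (pt[0] + best[0] - search, pt[1] + best[1] - search)
```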
It's been a LOOONG time since I've used ImageJ, but there may be an AKAZE (or SIFT, ORB, or similar) feature matching plugin that'll give you a homography simply by passing in your two images.
My code is written in Swift rather than Python, and relies on lots of custom code I've used for other purposes, but I hope these steps are enough to give you a sense of how to proceed.
A few additional considerations:
The process outlined above worked okay as a quick implementation, but you'd have to consider how accurate your results need to be.
The screenshot doesn't show the confocal image transformed to overlap the fluorescence image so that you could check how neatly the features overlap all across the image. That'd take a chunk more time, but that's what I'd implement next as a debug tool for quick eyeballing of results (see the overlay sketch after these notes).
There was just one unlabeled confocal image and one unlabeled fluorescence image. Preferably, even the earliest stage of your testing of an automated solution would include dozens if not hundreds of image pairs.
Automated testing would give you a sense of the quality of match for each image pair. That's a whole 'nother topic, complicated in part by the difference in sharpness of features in confocal and fluorescence images.
If there's value in minimizing whatever error you can for registration (alignment) of the confocal image to the fluorescence image across many, many image pairs, then you'd want to step back and examine sources of error, starting with image capture, lens selection, etc. Even if you're using pricey confocal microscopes and such, it's possible you could find a few additional ways to eliminate error specific to your application. (That aside, confocal microscopes of recent years are much nicer, with much better software, than I remember from many years earlier.)
Even if you create a solution that's fully automated, spending time on the GUI to minimize the need for training could make the whole process easier for users to bear. But that's also a whole 'nother subject.
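For the overlap check mentioned above, a minimal sketch (again assuming the img1, img2, and H names from the earlier Python snippet) could be:

```python
# Warp the confocal image into the fluorescence image's coordinate frame,
# then blend the two 50/50 for a quick visual check of the registration.
h, w = img2.shape[:2]
img1_warped = cv2.warpPerspective(img1, H, (w, h))
overlay = cv2.addWeighted(img1_warped, 0.5, img2, 0.5, 0)
cv2.imwrite("overlay_check.png", overlay)
```

Viewing the blend (or flickering between the warped and original images) makes gross misalignments obvious even before any quantitative testing.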