Tags: algorithm, image-processing, computer-vision, affine-transform

Aligning confocal and widefield fluorescence images


What image processing code will align two images taken from different cameras at different scales?

In the attached figure, you can see a pair of confocal and widefield fluorescence images of the same neuronal sample. The sample was imaged with two different microscopy setups, and therefore two different lenses: the confocal image was acquired at 10X, while the widefield fluorescence image used a zoom of 20X.

I’d like to use an automatic image processing method to align the images so I can accurately match corresponding neurons between them. The alignment doesn’t need to be perfect, but it should be reliable enough to establish neuron correspondence. We will validate the alignment initially by eye, i.e., human judgment.

I know this could be quite challenging, but what image processing algorithms will align the two images?

An algorithm could find the affine transform from one image to the other image based on image features.

EDIT: I am asking for a way to perform fully automatic alignment, with no manual input. In the previous version I added boxes that showed corresponding neurons just to illustrate that the images are of the same tissue.

Confocal imaging

Widefield fluorescence image

Matching features pointed out:


Solution

  • In my testing with the two sample images, it seems AKAZE may be sufficient for at least a first-order fit. I already had a project with OpenCV integrated, so I used that.

    The three images below are a screenshot from a custom app, showing:

    1. the confocal image, with color circles drawn by my software, centered on (x, y) coordinates that I found matched your color annotations reasonably well

    2. the fluorescence image, with color circles on hand-picked (x, y) centers matching yours

    3. the points from the confocal image drawn as circles, and those same points transformed from the confocal image to the fluorescence image and drawn as squares

      2 raw images and processed image

    I used the (x, y) coordinates for the confocal and fluorescence images JUST to draw the graphics; I didn't use those coordinates to find the transform from the confocal image to the fluorescence image. That transform was found automatically using AKAZE feature detectors and OpenCV's findHomography(..) function.

    https://docs.opencv.org/4.x/db/d70/tutorial_akaze_matching.html

    The process is roughly the following:

    1. Have OpenCV (or a similar library) with AKAZE and homography estimation available in your project.

    2. Load confocal image as image 1

    3. Load fluorescence image as image 2 -- it's okay that the image size is different

    4. For testing, identify matching features in image 1 and image 2 -- these manually selected point sets are used solely for drawing graphics to check processing. You can have an arbitrary number of points.

    5. Run AKAZE to find features in each image, match the features between image 1 and image 2, and find the homography from image 1 to image 2

    6. Load an additional copy of the fluorescence image (image 2)

    7. Draw circles (or whatever) for your image 2 manually selected points.

    8. Draw rectangles (or whatever) for the image 1 manually selected points transformed via the homography from image 1 to image 2

    9. Compare the manually selected points for the fluorescence image and the locations predicted via the homography mapping the confocal image to the fluorescence image

    10. (Preferred) Given the initial reasonable transform from image 1 to image 2, run localized normalized correlation (or some other matching metric) within, say, each 100x100-pixel subregion to yield more accurate fits.

      It's been a LOOONG time since I've used ImageJ, but there may be an AKAZE (or SIFT, ORB, or similar) feature-matching plugin that'll give you a homography simply by passing in your two images.

    My code is written in Swift rather than Python, and relies on lots of custom code I've used for other purposes, but I hope these steps are enough to give you a sense of how to proceed.

    A few additional considerations: