I have a noisy image with several bright spots that I need to identify and analyze. I've tried using simple thresholding techniques, but the noise level is too high, and it's difficult to isolate the spots accurately.
Here is an example of my image with the spots I want to identify:
I am using OpenCV with C++ and would appreciate guidance on the best approach to identify these spots; in particular, I need help isolating them reliably despite the noise.
This is what I have tried so far:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
    if (image.empty()) {
        return -1;
    }

    // Thresholding to create a binary image
    cv::Mat binaryImage;
    cv::threshold(image, binaryImage, 200, 255, cv::THRESH_BINARY);

    // Finding contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryImage, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Drawing contours
    cv::Mat outputImage = cv::Mat::zeros(image.size(), CV_8UC3);
    for (size_t i = 0; i < contours.size(); i++) {
        cv::drawContours(outputImage, contours, (int)i, cv::Scalar(0, 0, 255), 2);
    }

    cv::imwrite("output.png", outputImage);
    return 0;
}
I published a good dot detector in this patent. The patent is explicitly limited to applications involving stained tissue sections, so the method is freely usable by a lot of people.
Here is Python code using DIPlib that implements this dot detector. It should be fairly straightforward to translate this to C++ using DIPlib, and it should be feasible to implement it with a different library, since none of the filters used are uncommon. [Disclaimer: I'm an author of DIPlib.]
import diplib as dip
import numpy as np
import matplotlib.pyplot as plt
img = dip.ImageRead('/Users/cris/Downloads/3Kw9Q1sl.jpg')
img = dip.MeanTensorElement(img) # the image posted here is RGB, this turns it into a grayscale image
dot_size = 5 # approximate size of Gaussian that matches the size of dots to detect
smoothing_sigma = 5 # lots of noise here, we need a large sigma
### DOTNESS MEASURE -- https://patents.google.com/patent/US10839512B2/en
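# The idea: at the center of a bright dot the second derivative is negative
# along both axes, while the first derivative is positive on one flank and
# negative on the opposite flank, roughly shift_distance away from the center.
# Each factor below is clipped to zero wherever its condition does not hold,
# so the product of all six factors is non-zero only near dot centers.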
shift_distance = int(round((dot_size**2 + smoothing_sigma**2)**0.5))
# First for x axis
dxx = dip.ClipLow(-dip.Dxx(img, smoothing_sigma))
dx = dip.Dx(img, smoothing_sigma)
dx_left = dip.ClipLow(dip.Wrap(dx, [shift_distance, 0]))
dx_right = dip.ClipLow(-dip.Wrap(dx, [-shift_distance, 0]))
# Now for y axis
dyy = dip.ClipLow(-dip.Dyy(img, smoothing_sigma))
dy = dip.Dy(img, smoothing_sigma)
dy_left = dip.ClipLow(dip.Wrap(dy, [0, shift_distance]))
dy_right = dip.ClipLow(-dip.Wrap(dy, [0, -shift_distance]))
# Combine
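# The sixth root is the geometric mean of the six factors, so a single weak
# factor pulls the response down rather than being drowned out by the others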
dotness = dip.Power(dxx * dyy * dx_right * dx_left * dy_left * dy_right, 1/6)
### Detect
# Ignore 30 pixels at the top and bottom (DIPlib index ranges include the end point, so 0:29 covers 30 pixels)
dotness[:, 0:29] = 0
dotness[:, -30:-1] = 0
# Ignore all local maxima that are not significant enough
mask = dotness > 0.8 # This is the threshold, pick the value you like best!
# Get local maxima
maxima = dip.SubpixelMaxima(dotness, mask)
maxima = np.array([m.coordinates for m in maxima])
# Plot
plt.imshow(img)
plt.scatter(maxima[:, 0], maxima[:, 1], marker='o', s=50, c='none', edgecolors='red')
plt.show()
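Since you are working with OpenCV in C++, here is a rough, untested sketch of how the same measure could be approximated using OpenCV alone. Treat it as a starting point rather than an equivalent implementation: Sobel derivatives of a Gaussian-smoothed image stand in for DIPlib's Gaussian derivative filters, a cv::warpAffine translation stands in for dip.Wrap, and a dilate-and-compare step replaces dip.SubpixelMaxima (so maxima are found to pixel precision only). Because the derivative scaling is different, the threshold value below is arbitrary and needs tuning, and the border suppression from the Python version is omitted.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Shift an image by an integer number of pixels, replicating the border
// (used here in place of dip.Wrap, which wraps around instead).
static cv::Mat shiftImage(const cv::Mat& src, int dx, int dy) {
    cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, dx, 0, 1, dy);
    cv::Mat dst;
    cv::warpAffine(src, dst, M, src.size(), cv::INTER_NEAREST, cv::BORDER_REPLICATE);
    return dst;
}

// Clip negative values to zero (the role of dip.ClipLow).
static cv::Mat clipLow(const cv::Mat& src) {
    return cv::max(src, 0.0);
}

int main() {
    cv::Mat image = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
    if (image.empty()) {
        return -1;
    }
    cv::Mat img;
    image.convertTo(img, CV_32F);

    const double dot_size = 5.0;         // approximate size of the dots to detect
    const double smoothing_sigma = 5.0;  // large sigma because of the noise
    const int shift_distance =
        (int)std::lround(std::sqrt(dot_size * dot_size + smoothing_sigma * smoothing_sigma));

    // Approximate Gaussian derivatives: smooth first, then apply Sobel kernels.
    cv::Mat smooth, dx, dy, dxx, dyy;
    cv::GaussianBlur(img, smooth, cv::Size(), smoothing_sigma);
    cv::Sobel(smooth, dx, CV_32F, 1, 0, 3);
    cv::Sobel(smooth, dy, CV_32F, 0, 1, 3);
    cv::Sobel(smooth, dxx, CV_32F, 2, 0, 3);
    cv::Sobel(smooth, dyy, CV_32F, 0, 2, 3);

    // The six clipped factors of the dotness measure (same roles as in the
    // Python version above): negative curvature at the center, and first
    // derivatives of opposite sign on the two flanks, shift_distance away.
    cv::Mat dxx_neg  = clipLow(-dxx);
    cv::Mat dyy_neg  = clipLow(-dyy);
    cv::Mat dx_left  = clipLow(shiftImage(dx, shift_distance, 0));
    cv::Mat dx_right = clipLow(-shiftImage(dx, -shift_distance, 0));
    cv::Mat dy_up    = clipLow(shiftImage(dy, 0, shift_distance));
    cv::Mat dy_down  = clipLow(-shiftImage(dy, 0, -shift_distance));

    cv::Mat product = dxx_neg.mul(dyy_neg).mul(dx_left).mul(dx_right).mul(dy_up).mul(dy_down);
    cv::Mat dotness;
    cv::pow(product, 1.0 / 6.0, dotness);

    // Local maxima above a threshold: a pixel counts as a maximum when it equals
    // the local dilation of the dotness image. The threshold must be tuned; it is
    // not comparable to the 0.8 used with DIPlib because the scaling differs.
    const double threshold_value = 1.0;
    cv::Mat dilated;
    cv::dilate(dotness, dilated, cv::Mat(), cv::Point(-1, -1), 3);
    cv::Mat isPeak = (dotness >= dilated);
    cv::Mat aboveThreshold = (dotness > threshold_value);
    cv::Mat maxima = isPeak & aboveThreshold;

    // Draw the detections on the original image.
    std::vector<cv::Point> points;
    cv::findNonZero(maxima, points);
    cv::Mat output;
    cv::cvtColor(image, output, cv::COLOR_GRAY2BGR);
    for (const cv::Point& p : points) {
        cv::circle(output, p, 10, cv::Scalar(0, 0, 255), 2);
    }
    cv::imwrite("output.png", output);
    return 0;
}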