I have several app icons resized to 36*36, and I want to compute the similarity between any two of them. Following instructions from other questions, I converted them to black and white with OpenCV's threshold function. I then apply matchTemplate with method TM_CCOEFF_NORMED on two icons, but I get a negative result, which confuses me.
Based on the docs, there should not be any negative numbers in the result array. Could anyone explain why I get a negative number, and whether this negative value makes sense?
I have tried both grayscale and black-and-white versions of the icons. When two icons are quite different, I always get a negative result.
If I use the original icons at size 48*48, everything works fine. I don't know whether the problem is related to my resize step.
import cv2

# read in the icons and convert each to a 36x36 grayscale image
# (note: interpolation must be passed by keyword -- the third positional
# argument of cv2.resize is dst, not interpolation)
im1 = cv2.imread('./app_icon/pacrdt1.png')
im1g = cv2.resize(cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)
im2 = cv2.imread('./app_icon/pacrdt2.png')
im2g = cv2.resize(cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)
im3 = cv2.imread('./app_icon/mny.png')
im3g = cv2.resize(cv2.cvtColor(im3, cv2.COLOR_BGR2GRAY), (36, 36), interpolation=cv2.INTER_CUBIC)

# black & white conversion (Otsu picks the threshold; the 128 is ignored)
(thresh1, bw1) = cv2.threshold(im1g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
(thresh2, bw2) = cv2.threshold(im2g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
(thresh3, bw3) = cv2.threshold(im3g, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# template match: equal-size inputs yield a 1x1 result, so [0][0] extracts the score
templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED)[0][0]
templ_diff = 1 - templ_match
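For equal-size inputs, TM_CCOEFF_NORMED reduces to a zero-mean normalized correlation (essentially a Pearson correlation of the pixel values), which ranges from -1 to 1. A rough numpy sketch of that formula (not OpenCV itself) shows how a negative score arises when one icon is roughly the inverse of the other:

```python
import numpy as np

def ccoeff_normed(img, tmpl):
    # zero-mean normalized correlation, as TM_CCOEFF_NORMED computes
    # for an image and template of identical size
    a = img.astype(np.float64) - img.mean()
    b = tmpl.astype(np.float64) - tmpl.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# two toy "icons": icon_b is the inverse of icon_a
icon_a = np.array([[0, 255], [0, 255]], dtype=np.uint8)
icon_b = np.array([[255, 0], [255, 0]], dtype=np.uint8)
print(ccoeff_normed(icon_a, icon_a))  # 1.0  (identical icons)
print(ccoeff_normed(icon_a, icon_b))  # -1.0 (inverted icon -> negative score)
```

So `1 - templ_match` as a difference measure runs from 0 (identical) up to 2 (perfectly anti-correlated), not 0 to 1.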
Sample icons: (images omitted)
edit2: I consider icons that differ only in background or font color to be quite similar (a viewer would recognize image 1 and image 2 in my sample as the same icon). That is why I convert the icons to black and white. I hope this makes sense.
This behavior occurs because both of your images are the same size.
I tried the same approach but with images of different sizes. I used the following images: (images omitted)
When I ran the given code on these images, it returned an array in which each value measures how well the region of the Image around a given pixel matches the Template.
Now when you execute cv2.minMaxLoc(templ_match) it returns 4 values:
This is what I got:
Out[32]: (-0.15977318584918976, 1.0, (40, 12), (37, 32))
in order: min_val, max_val, min_loc, max_loc
This min/max structure appears when the image and template have different sizes. In your case you resized all the images to the same size, so the result is a single 1x1 array, and the [0][0] indexing just extracts that one score. Note also that TM_CCOEFF_NORMED is a zero-mean normalized correlation coefficient, so its values range from -1 to 1; a negative score means the two images are anti-correlated, which is expected for very different icons.
Rather than writing templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED)[0][0], perform templ_match = cv2.matchTemplate(im1g, im3g, cv2.TM_CCOEFF_NORMED) and then obtain the minimum and maximum values along with their locations using cv2.minMaxLoc(templ_match).
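To make the size arithmetic concrete, here is a numpy-only sketch (the random scores stand in for a real matchTemplate result; the point is the result shape and the minMaxLoc-style extraction):

```python
import numpy as np

# matchTemplate result shape is (H - h + 1, W - w + 1):
# a 48x48 image with a 36x36 template gives 13x13 scores,
# while equal sizes collapse it to a single 1x1 score.
H, W, h, w = 48, 48, 36, 36
rng = np.random.default_rng(0)
result = rng.uniform(-1.0, 1.0, size=(H - h + 1, W - w + 1))

# numpy stand-in for cv2.minMaxLoc(result); note the locations are (x, y)
min_r, min_c = np.unravel_index(result.argmin(), result.shape)
max_r, max_c = np.unravel_index(result.argmax(), result.shape)
min_val, max_val = float(result.min()), float(result.max())
min_loc, max_loc = (min_c, min_r), (max_c, max_r)
print(min_val, max_val, min_loc, max_loc)
```

With equal-size inputs the `result` array here would be shape (1, 1), and min and max coincide at the single score.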