Many programs, like https://gist.github.com/kkomakkoma/4fb21b853ce774fe5c6d47e9626e6570, design Gabor filters this way:
import cv2
import numpy as np

def build_filters():
    # Build a bank of Gabor kernels at 32 evenly spaced orientations
    filters = []
    ksize = 31
    for theta in np.arange(0, np.pi, np.pi / 32):
        params = {'ksize': (ksize, ksize), 'sigma': 1.0, 'theta': theta,
                  'lambd': 15.0, 'gamma': 0.02, 'psi': 0, 'ktype': cv2.CV_32F}
        kern = cv2.getGaborKernel(**params)
        kern /= 1.5 * kern.sum()  # why? why? why?
        filters.append((kern, params))
    return filters
What does kern /= 1.5*kern.sum() do? Thanks for your answer.
I will try my best to answer this, since I am dealing with this as well.
Firstly, I think this is a somewhat related question: gabor edge detection with OpenCV
This operation normalizes the kernel, as stated in the link above. Dividing every coefficient by 1.5*kern.sum() forces the coefficients to sum to 1/1.5 ≈ 0.67, much like an averaging mask whose coefficients sum to 1, so convolving with it behaves like a kind of averaging and provides some smoothing. It also keeps the response of certain pixels from far outweighing other pixels when the convolution is done. If the normalization is skipped, the kernel's total weight is uncontrolled, and its maximum value can be several orders of magnitude larger than its minimum value.
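To see this concretely, here is a minimal sketch using the same parameters as the question's build_filters() (theta fixed at 0 for illustration; the printed values are only illustrative), which checks the kernel's total weight before and after the division:

import cv2
import numpy as np

# One kernel with the question's parameters, orientation fixed at 0
kern = cv2.getGaborKernel((31, 31), sigma=1.0, theta=0, lambd=15.0,
                          gamma=0.02, psi=0, ktype=cv2.CV_32F)

print(kern.sum(), kern.max())  # raw kernel: total weight is not controlled

kern /= 1.5 * kern.sum()       # rescale so the coefficients sum to 1/1.5 ~ 0.67

print(kern.sum(), kern.max())  # total weight is now fixed, like an averaging mask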
I tested this on an input image with and without this line of code, using OpenCV's filter2D function as in the GitHub link you posted, and scaled the output image to the 0-255 range. Without this line, many pixels shot right up to an intensity of 255, which is expected: the unnormalized kernel's large total weight pushes most responses well past 255, where they saturate.
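For reference, this is roughly how such a comparison can be run. It is only a sketch, assuming a grayscale input file 'input.png' (a hypothetical name) and simple clipping to the 0-255 display range:

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

kern = cv2.getGaborKernel((31, 31), sigma=1.0, theta=0, lambd=15.0,
                          gamma=0.02, psi=0, ktype=cv2.CV_32F)

# Without normalization: responses blow past 255 and clip to white
raw = cv2.filter2D(img, cv2.CV_32F, kern)
raw8 = np.clip(raw, 0, 255).astype(np.uint8)

# With the question's normalization: responses stay near the input's scale
norm = cv2.filter2D(img, cv2.CV_32F, kern / (1.5 * kern.sum()))
norm8 = np.clip(norm, 0, 255).astype(np.uint8)

cv2.imwrite('raw.png', raw8)
cv2.imwrite('norm.png', norm8)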
I hope this helps...if anyone else has any more reasoning or info on this, I would really like to know!