algorithm, matlab, image-processing, computer-vision, corner-detection

When implementing a Harris Corner detector, where are the window shifts taken into account?


I was looking at how to implement a Harris corner detector in MATLAB, and various online lecture slides detail the process as follows: [slide: the Harris detector algorithm steps]

As I understand it, the first few steps of this process compute the second moment matrix M. However, as shown in the picture below, the formulation also involves vectors of u and v, which represent the shift of the window. Where is that shift taken into account in the code (for example, in the code shown in the answers here: Implementing a Harris corner detector)?

[slide: the shifted-window formulation E(u,v), involving the shift vector (u, v)]

I think I'm just misunderstanding something in how the math translates to the code here. Also, the slides pictured above were taken from here: http://alumni.media.mit.edu/~maov/classes/comp_photo_vision08f/lect/18_feature_detectors.pdf


Solution

  • That description is incomplete and inaccurate.

    When in doubt, always go to the source. In this case, the paper by Harris and Stephens:

    C. Harris and M. Stephens (1988). "A combined corner and edge detector" (PDF). Proceedings of the 4th Alvey Vision Conference. pp. 147–151. http://www.bmva.org/bmvc/1988/avc-88-023.pdf

    (link taken from the Wikipedia article).

    If you read the paper, you'll see that they indeed write

E(x,y) = (x,y) M (x,y)^T

    But if you read the rest of the text on the page containing that equation, you'll see that E(x,y) is the change in intensity produced by a small shift (x,y). The eigenvectors of M give the directions of maximal and minimal change, and the eigenvalues of M indicate how strong the change is along those directions. The specific shift (x,y) is therefore no longer relevant: we don't care about any particular shift distance, only about how much the signal will change given a small shift in any chosen direction.
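To make this concrete, here is a minimal NumPy sketch (a Python stand-in for the MATLAB code linked in the question; the toy image, window size, and function names are illustrative). It builds M at one pixel by summing gradient products over a window. Note that no loop over shifts (u, v) appears anywhere; the shift would only enter if you explicitly evaluated the quadratic form E = (u,v) M (u,v)^T, which the detector never needs to do.

```python
import numpy as np

# Toy image: a bright square whose top-left corner sits at pixel (4, 4).
img = np.zeros((9, 9))
img[4:, 4:] = 1.0

# Image gradients (np.gradient returns d/d(row) first, then d/d(col)).
Iy, Ix = np.gradient(img)

def second_moment_matrix(r, c, half=1):
    """Sum the gradient products over a (2*half+1)^2 window around (r, c).
    These windowed sums are the entries of M; no shift (u, v) is involved."""
    ys = slice(r - half, r + half + 1)
    xs = slice(c - half, c + half + 1)
    Sxx = np.sum(Ix[ys, xs] ** 2)
    Sxy = np.sum(Ix[ys, xs] * Iy[ys, xs])
    Syy = np.sum(Iy[ys, xs] ** 2)
    return np.array([[Sxx, Sxy], [Sxy, Syy]])

M = second_moment_matrix(4, 4)        # M at the corner pixel
lam = np.linalg.eigvalsh(M)           # both eigenvalues are large here

# The Harris score is computed from M alone (k is the usual empirical constant):
k = 0.05
R = np.linalg.det(M) - k * np.trace(M) ** 2   # R > 0 at the corner

# The shift only appears if you deliberately evaluate E for one shift vector,
# e.g. (u, v) = (1, 0); the detector itself never does this:
E_10 = np.array([1.0, 0.0]) @ M @ np.array([1.0, 0.0])
```

Evaluating the same score at a point on a straight edge of the square (one large and one near-zero eigenvalue) gives R < 0, and a flat region gives R ≈ 0, which is exactly the eigenvalue reasoning above.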