I have read some papers about LSH and I know that it is used for solving the approximate k-NN problem. We can divide the algorithm into two parts:
1. Given a vector in D dimensions (where D is big) of any value, translate it with a set of N (where N << D) hash functions to a binary vector in N dimensions.
2. Using the Hamming distance, apply some search technique on the set of binary codes obtained from phase 1 to find the k-NN.
The key point is that computing the Hamming distance for vectors in N dimensions is fast using XOR.
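For example, here is a minimal sketch of that phase-2 Hamming search; the codes and sizes below are invented for illustration only:

```python
import numpy as np

# Hypothetical sizes: 1000 database codes, N = 64 bits each,
# packed 8 bits per byte so each code is 8 uint8 values.
rng = np.random.default_rng(0)
N_BITS = 64
codes = rng.integers(0, 256, size=(1000, N_BITS // 8), dtype=np.uint8)
query = rng.integers(0, 256, size=(N_BITS // 8,), dtype=np.uint8)

# Hamming distance = popcount of the XOR of two codes.
xor = np.bitwise_xor(codes, query)              # XOR query against every code
dists = np.unpackbits(xor, axis=1).sum(axis=1)  # count differing bits

k = 5
knn = np.argsort(dists)[:k]                     # indices of the k nearest codes
print(knn, dists[knn])
```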
Anyway, I have two questions:
Is point 1 still necessary if we use a binary descriptor, like ORB? Since ORB's descriptors are already binary and we use the Hamming distance to compare them, why should we perform the first step?
How does the conversion happen for images described by SIFT? Each SIFT descriptor is 128 bits and each image is described by a set of descriptors, so we have a desc x 128 matrix (where desc is the number of descriptors used), while LSH usually accepts a vector as input.
1) You can bypass it, but then you will work in D dimensions, not N (where N << D) as you say. That means that the algorithm has to adapt to D dimensions as well.
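For illustration, here is a minimal sketch of what point 1 can look like with one common LSH family for real-valued vectors, random-hyperplane (sign) hashing; the sizes are arbitrary assumptions, not a scheme from any specific paper:

```python
import numpy as np

# Assumed sizes for illustration only: D = 128, N = 32 (N << D).
rng = np.random.default_rng(0)
D, N = 128, 32
planes = rng.standard_normal((N, D))   # one random hyperplane per output bit

def lsh_code(x):
    """Map a D-dimensional real vector to an N-dimensional binary vector."""
    return (planes @ x > 0).astype(np.uint8)  # keep only the sign of each projection

x = rng.standard_normal(D)   # stands in for one descriptor
print(lsh_code(x))           # e.g. [1 0 0 1 ...], length N
```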
2) No, a SIFT descriptor is not 128 bits; it is a vector of 128 values.
Read about SIFT from OpenCV:
- Keypoint Descriptor
Now the keypoint descriptor is created. A 16x16 neighbourhood around the keypoint is taken. It is divided into 16 sub-blocks of 4x4 size. For each sub-block, an 8-bin orientation histogram is created. So a total of 128 bin values are available. They are represented as a vector to form the keypoint descriptor. In addition to this, several measures are taken to achieve robustness against illumination changes, rotation, etc.
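To see that shape concretely, here is a minimal sketch with OpenCV; the image path is a placeholder, and cv2.SIFT_create requires OpenCV >= 4.4 (older versions expose it as cv2.xfeatures2d.SIFT_create in opencv-contrib):

```python
import cv2

# "image.jpg" is a placeholder path for whatever image you are describing.
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# descriptors has shape (desc, 128): one 128-dimensional vector per
# keypoint, i.e. the desc x 128 matrix from the question.
print(descriptors.shape)
```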
Here is how I have it in my mind; I hope that will be enough:
LSH takes as input a point set of n points, where every point lies in d dimensions.
So a query is a point in d dimensions, and the goal is to find its NN*.
Now every point represents an image descriptor, so we have n images in our dataset.
The query, which is also a point (i.e. a vector with d coordinates), represents another image descriptor.
We are seeking to match (i.e. to find the Nearest Neighbor) the query image descriptor with an image descriptor from our dataset.
So the conversion you are talking about is applied to a vector, not a matrix.
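A minimal sketch of that idea: the hash is applied independently to every row (every descriptor), so the desc x 128 matrix is just a collection of points. All sizes and data below are invented, and the sign-hash family is the same assumed one as in the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 128, 32
planes = rng.standard_normal((N, d))

db = rng.standard_normal((500, d))     # descriptors pooled from all dataset images
query = rng.standard_normal((40, d))   # descriptors of one query image

db_codes = db @ planes.T > 0           # one N-bit code per dataset descriptor
q_codes = query @ planes.T > 0         # one N-bit code per query descriptor

# For each query descriptor, find the nearest dataset descriptor in Hamming space.
dists = (q_codes[:, None, :] != db_codes[None, :, :]).sum(axis=2)
print(dists.argmin(axis=1))            # index of the NN for each query descriptor
```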
Edit:
Moreover, from our paper "High-dimensional approximate nearest neighbor: k-d Generalized Randomized Forests", see this in the Experiments section:
SIFT is a 128-dimensional vector that describes a local image patch by histograms of local gradient orientations.
*or the Fixed-radius near neighbors problem