I'm currently working on comparing objects seen from different angles for image detection. Basically, I want to know whether the object in image 1 is similar to the object in image 2 (a % of similarity would be great).
Image 1: (attached image)
Image 2: (attached image)
I have already looked around on the Internet, and it seems like ASIFT (LINK) is a good solution. However, when I run their demo multiple times with the same inputs, ASIFT produces different matched vertices on each run.
Why does ASIFT give different results each time I rerun the demo with the same inputs?
PS:
Comments on alternatives to ASIFT or SIFT for comparing objects at different angles (ideally with more consistent results) would be appreciated as well.
This is not an ASIFT (or better-than-ASIFT) problem. ASIFT solves the "wide baseline stereo" problem: finding correspondences and the geometric transformation between different views of the SAME object or scene.
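To make that concrete, here is a minimal sketch of that correspondence pipeline using OpenCV's plain SIFT (not ASIFT) plus RANSAC; the file names are placeholders and the thresholds are common defaults, not values from the ASIFT demo:

```python
# Sketch: find feature correspondences between two views and estimate the
# geometric transformation (homography) with RANSAC.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate a homography with RANSAC; the inlier mask marks the
# geometrically consistent correspondences.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("inliers:", int(mask.sum()), "of", len(good), "tentative matches")
```

As a side note, RANSAC-style estimators sample matches randomly, which is one common reason such pipelines can report slightly different matches across identical runs.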
What you are looking for is some kind of image (object) similarity. The state-of-the-art method for this is to train a neural network, extract a fixed-length descriptor for each image from it, and compare the descriptors by the Euclidean distance between them.
For example, have a look at the "Neural Codes for Image Retrieval" paper: http://arxiv.org/abs/1404.1777
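Here is a minimal sketch of that idea, assuming PyTorch and torchvision are available; the network (ResNet-18) and layer choice are illustrative, not the exact setup from the paper, and the file names are placeholders:

```python
# Sketch: use a pretrained CNN with its classifier head removed as a
# fixed-length descriptor ("neural code") extractor, then compare two
# images by the Euclidean distance between their descriptors.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # emit the 512-d pooled feature instead of class logits
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(path):
    # Fixed-length descriptor for one image.
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).squeeze(0)

d1 = describe("image1.jpg")  # placeholder file names
d2 = describe("image2.jpg")
print("L2 distance:", torch.dist(d1, d2).item())  # smaller = more similar
```

If you want the "% of similarity" from the question, cosine similarity of the two descriptors (torch.nn.functional.cosine_similarity(d1, d2, dim=0)) lies in [-1, 1] and is easy to rescale to a percentage.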
P.S. If you still need correspondences and posted different glasses by mistake, you can try MODS: http://cmp.felk.cvut.cz/wbs/index.html. The difference from ASIFT is that it can handle much bigger angular differences, and it is more stable and much faster.