I am working on a content-based image retrieval system in a distributed environment. I need an algorithm that takes an image as input and produces a code for it, such that the code can be used to match visually similar images.
It depends on the kind of images, but one workable option is invariant moments (either Hu or Zernike).
We use this method in the javaocr library; feel free to grab the code from there.
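For illustration, here is a minimal, self-contained sketch of Hu's seven invariant moments computed straight from pixel intensities. It is not the javaocr implementation; the grayscale weighting (darker pixels count as heavier) and the class name are my own choices.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    /** Sketch: Hu's seven invariant moments as an image "code". */
    public class HuMoments {

        /** Returns the 7 Hu invariants, treating inverted gray level as density. */
        public static double[] compute(BufferedImage img) {
            int w = img.getWidth(), h = img.getHeight();
            double m00 = 0, m10 = 0, m01 = 0;
            double[][] f = new double[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int rgb = img.getRGB(x, y);
                    int gray = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                    double v = 255 - gray;              // weight dark pixels more
                    f[y][x] = v;
                    m00 += v; m10 += x * v; m01 += y * v;
                }
            }
            double xc = m10 / m00, yc = m01 / m00;      // centroid (translation invariance)

            // central moments mu_pq up to order 3
            double[][] mu = new double[4][4];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    for (int p = 0; p <= 3; p++)
                        for (int q = 0; q <= 3 - p; q++)
                            mu[p][q] += Math.pow(x - xc, p) * Math.pow(y - yc, q) * f[y][x];

            // scale-normalized moments eta_pq = mu_pq / mu00^(1 + (p+q)/2)
            double[][] n = new double[4][4];
            for (int p = 0; p <= 3; p++)
                for (int q = 0; q <= 3 - p; q++)
                    n[p][q] = mu[p][q] / Math.pow(mu[0][0], 1 + (p + q) / 2.0);

            // Hu's seven rotation-invariant combinations
            double[] hu = new double[7];
            hu[0] = n[2][0] + n[0][2];
            hu[1] = Math.pow(n[2][0] - n[0][2], 2) + 4 * Math.pow(n[1][1], 2);
            hu[2] = Math.pow(n[3][0] - 3 * n[1][2], 2) + Math.pow(3 * n[2][1] - n[0][3], 2);
            hu[3] = Math.pow(n[3][0] + n[1][2], 2) + Math.pow(n[2][1] + n[0][3], 2);
            hu[4] = (n[3][0] - 3 * n[1][2]) * (n[3][0] + n[1][2])
                      * (Math.pow(n[3][0] + n[1][2], 2) - 3 * Math.pow(n[2][1] + n[0][3], 2))
                  + (3 * n[2][1] - n[0][3]) * (n[2][1] + n[0][3])
                      * (3 * Math.pow(n[3][0] + n[1][2], 2) - Math.pow(n[2][1] + n[0][3], 2));
            hu[5] = (n[2][0] - n[0][2])
                      * (Math.pow(n[3][0] + n[1][2], 2) - Math.pow(n[2][1] + n[0][3], 2))
                  + 4 * n[1][1] * (n[3][0] + n[1][2]) * (n[2][1] + n[0][3]);
            hu[6] = (3 * n[2][1] - n[0][3]) * (n[3][0] + n[1][2])
                      * (Math.pow(n[3][0] + n[1][2], 2) - 3 * Math.pow(n[2][1] + n[0][3], 2))
                  - (n[3][0] - 3 * n[1][2]) * (n[2][1] + n[0][3])
                      * (3 * Math.pow(n[3][0] + n[1][2], 2) - Math.pow(n[2][1] + n[0][3], 2));
            return hu;
        }

        public static void main(String[] args) throws Exception {
            double[] code = compute(ImageIO.read(new File(args[0])));
            for (double v : code) System.out.printf("%.6e%n", v);
        }
    }

The resulting 7-element vector is invariant to translation, scale and rotation, which is what makes it usable as a compact matching code.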
The main advantage of invariant moments with cluster matching is that it gives you a distance to each cluster center (e.g., this is 90% cucumber but only 20% apple). A rough sketch of that matching step is shown below.
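As a sketch of the cluster-matching step (not the javaocr API): compare the moment code against stored cluster centers and turn each distance into a rough similarity score. The Euclidean metric and the 1/(1+d) percentage mapping are illustrative assumptions; in practice you would train the centers and calibrate the scaling on your own data.

    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Sketch: score a moment vector against known cluster centers. */
    public class ClusterMatcher {

        /** Euclidean distance between two moment vectors of equal length. */
        static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        /** Maps each cluster name to a rough similarity percentage (100 at distance 0). */
        static Map<String, Double> similarities(double[] code, Map<String, double[]> centers) {
            Map<String, Double> result = new LinkedHashMap<>();
            for (Map.Entry<String, double[]> e : centers.entrySet()) {
                double d = distance(code, e.getValue());
                result.put(e.getKey(), 100.0 / (1.0 + d));    // assumed scaling, tune as needed
            }
            return result;
        }
    }

With centers for "cucumber" and "apple" stored in the map, the returned scores give you exactly the kind of "90% cucumber, 20% apple" ranking described above, and the full distances can be used for nearest-neighbor retrieval across your distributed index.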