Given a custom IBM Visual Recognition Service model trained on a set of images that are each 100x100 pixels, is it better to send 100x100 images during classification, or is image size not a property that helps yield better classification results?
When training the model, you want the training images to "represent" the appearance of the images you want to classify later with the trained model.
The trained model does not strongly depend on resolution, though. Internally, the service resizes images to a standard size (224x224 pixels) before training and classification. We don't really recommend manipulating the images before sending them to the system, because this detail could change in the future. Currently, though, you can resize images to exactly 224x224 before sending them and you should not see a change in results.
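If you do want to pre-resize anyway, a minimal sketch using Pillow is below. The 224x224 figure comes from the behavior described above, and since that internal detail could change, treat the target size as an assumption rather than a guarantee:

```python
from PIL import Image

def resize_for_classification(path, out_path, size=(224, 224)):
    """Resize an image to the service's current internal input size (224x224,
    per the answer above) before uploading. This step is optional; the service
    performs the same resize internally."""
    img = Image.open(path).convert("RGB")
    img = img.resize(size, Image.LANCZOS)
    img.save(out_path, format="JPEG", quality=95)

# Example usage with a hypothetical file name:
resize_for_classification("sample.jpg", "sample_224.jpg")
```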
However, if the objects in your training images take up nearly all of the frame, but in the images you try to classify the objects of interest take up only 1/4 of the image with a lot of background, that mismatch can make classification difficult for the system.
In short, matching the resolution of the training images is unlikely to improve accuracy, but matching the scale of the objects of interest would (meaning: if objects in the training images take up X% of the image, the same objects in the test images should also take up roughly X% of the image). A sketch of cropping to match scale follows below.
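As a rough sketch of what "matching the scale" could look like in practice: if you already know the bounding box of the object of interest (e.g. from a detector or manual annotation, which is outside the scope of this answer), you could crop around it so the object fills about the same fraction of the frame as in your training images. The function name, box coordinates, and `fill_fraction` value below are illustrative assumptions, not part of the service's API:

```python
from PIL import Image

def crop_to_match_scale(path, out_path, box, fill_fraction=0.9):
    """Crop around a known object bounding box (left, top, right, bottom) so the
    object fills roughly `fill_fraction` of the output image, approximating the
    scale of objects seen in the training images."""
    img = Image.open(path).convert("RGB")
    left, top, right, bottom = box
    obj_w, obj_h = right - left, bottom - top
    # Expand the box so the object ends up occupying ~fill_fraction of the crop.
    pad_w = obj_w * (1 / fill_fraction - 1) / 2
    pad_h = obj_h * (1 / fill_fraction - 1) / 2
    crop = (int(max(0, left - pad_w)), int(max(0, top - pad_h)),
            int(min(img.width, right + pad_w)), int(min(img.height, bottom + pad_h)))
    img.crop(crop).save(out_path, format="JPEG", quality=95)

# Example: object known to sit at (200, 150)-(600, 550) in a larger photo.
crop_to_match_scale("scene.jpg", "scene_cropped.jpg", (200, 150, 600, 550))
```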