We've gone through an extensive exercise comparing facial recognition/matching providers using our local facial image data sets.
MS Cognitive Services came out on top in terms of False Reject Rate (FRR) for a given False Accept Rate (FAR). We are now deciding on pass thresholds for the different types of image matching (selfie vs document, etc.).
The question is: if we are using a specific version (https://{endpoint}/face/v1.0/) and fixed parameters for the Detect and Verify endpoints (recognitionModel = recognition_02 and detectionModel = detection_02), can we expect to see a change in the confidence score for the same two images over time, or whenever Microsoft releases a new version?
Our concern is that we pick a pass threshold based on our test results and current confidence scores, and then the scores change in the future due to model retraining or new releases, meaning we would continually have to re-adjust our thresholds.
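For concreteness, this is roughly the call sequence we are asking about (a minimal Python sketch; ENDPOINT, KEY, the image file names, and PASS_THRESHOLD are placeholders, not values from our tests):

```python
import requests

ENDPOINT = "https://{endpoint}/face/v1.0"  # placeholder: your resource endpoint
KEY = "<subscription-key>"                 # placeholder: your Face API key
PASS_THRESHOLD = 0.75                      # hypothetical cut-off from our test results

def detect_face_id(image_path: str) -> str:
    """Detect the face in an image with the pinned models and return its faceId."""
    params = {
        "returnFaceId": "true",
        "recognitionModel": "recognition_02",  # pinned, as in the question
        "detectionModel": "detection_02",      # pinned, as in the question
    }
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/detect",
            params=params,
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    return resp.json()[0]["faceId"]

def verify_confidence(face_id_1: str, face_id_2: str) -> float:
    """Call Verify on two faceIds and return the match confidence score."""
    resp = requests.post(
        f"{ENDPOINT}/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id_1, "faceId2": face_id_2},
    )
    resp.raise_for_status()
    return resp.json()["confidence"]

# Same two images, same pinned parameters: will this score drift over time?
confidence = verify_confidence(detect_face_id("selfie.jpg"),
                               detect_face_id("document.jpg"))
print("pass" if confidence >= PASS_THRESHOLD else "reject", confidence)
```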
This is a good question about the stability of the models behind MS Azure Cognitive Services such as the Face API. Based on my knowledge of machine learning, there are a few possible reasons that could cause the issue you describe, as below.
To be sure, I think the two scenarios above could well happen. However, there are three reasons that lead me to believe this will not affect you much.
Even in the worst case, technically speaking, there are many open-source face recognition solutions that could serve as a backup for you, so this should not be a serious problem.
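For example, here is a rough sketch of such a fallback using the open-source face_recognition package (my suggestion, not something from your stack; the file names are hypothetical, and 0.6 is just the library's default tolerance, not a tuned threshold):

```python
import face_recognition

# Load the two images to compare (hypothetical file names).
selfie = face_recognition.load_image_file("selfie.jpg")
document = face_recognition.load_image_file("document.jpg")

# Compute 128-d embeddings; [0] assumes exactly one face per image.
selfie_enc = face_recognition.face_encodings(selfie)[0]
document_enc = face_recognition.face_encodings(document)[0]

# Lower distance means a better match; 0.6 is the library's default tolerance.
distance = face_recognition.face_distance([selfie_enc], document_enc)[0]
print("match" if distance <= 0.6 else "reject", distance)
```

Since the model weights ship with the package, the scores only change when you choose to upgrade the library, which sidesteps the drift concern entirely.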