Tags: android, tensorflow, machine-learning, vision-api, facial-identification

TensorFlow Facial Identification at Runtime for Android


My goal is to create an app that can have multiple users, where each user account is secured with facial identification inside the app. I may not have the TensorFlow concepts quite right, but is there a way in Android to train the app to identify whose face it is looking at? I'm under the impression that we have to create a training model beforehand and apply it in the app, but for my goal the app would have to train dynamically to identify who its users are. Thanks in advance.


Solution

  • I'm not sure this is the right way to do it. I know it can also be achieved with Eigenfaces, but I've never tried that myself, so you may want to take it into consideration too.

    Coming back to your idea: I don't know what the odds of success are, but I do know a few places where you'll meet serious challenges:

    1. Dataset. For each face you want to recognise, you will need many images from different angles and in conditions as varied as possible (with glasses, different haircuts, beard, makeup, different lighting, etc.). If you fail to provide a varied dataset, two things can happen: either a face that should be recognised won't be (a false negative), or a face that shouldn't be recognised will be (a false positive). A dataset like this is hard to create, because in the best case you'll have only a few photos of the user who registers their face. From these photos you can generate new photos in different conditions (augmentation), but this cannot realistically be done on a mobile device.
    2. Assuming you have a decent dataset, you now have to train the network. Here you have two options: build your model from the ground up (not such a good idea) or take a model provided by Google and retrain only the final layer of the network. As far as I know, TensorFlow doesn't offer an option to do the training on a mobile device (it would be too expensive for the system), so you'll have to train the model somewhere else and then download it to the device. TensorFlow provides MobileNet, a model designed for mobile devices, which is a good starting point for your network: it has good accuracy and doesn't use many system resources. You could also try Inception, but that model is designed for accuracy, has a much longer training time, and spends more time and resources evaluating each image.
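
    The augmentation idea in point 1 could be sketched server-side with plain NumPy (a minimal sketch; `augment` and the specific transforms are illustrative, and a real pipeline would use something like `tf.image` with rotations, crops, and colour shifts as well):

    ```python
    import numpy as np

    def augment(face: np.ndarray, rng: np.random.Generator) -> list:
        """Generate simple variations of one face image (H x W x 3, uint8).

        Illustrative transforms only: horizontal flip, brightness
        shifts, and sensor-style noise.
        """
        out = []
        # Mirror the face horizontally.
        out.append(face[:, ::-1, :])
        # Simulate different lighting with brightness shifts.
        for delta in (-40, 40):
            shifted = np.clip(face.astype(np.int16) + delta, 0, 255)
            out.append(shifted.astype(np.uint8))
        # Add noise, as from a low-quality front camera.
        noisy = np.clip(face.astype(np.float64)
                        + rng.normal(0, 10, size=face.shape), 0, 255)
        out.append(noisy.astype(np.uint8))
        return out

    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(160, 160, 3), dtype=np.uint8)
    variants = augment(face, rng)
    print(len(variants))  # 4 extra images per registered photo
    ```

    Even a handful of such transforms multiplies the few registration photos into a somewhat more varied training set, though it is no substitute for genuinely different photos.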
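
    The final-layer retraining in point 2 could be sketched with tf.keras roughly like this (a sketch, assuming TensorFlow 2.x; `NUM_USERS` and the 160x160 input size are illustrative, and `weights=None` is used here only to keep the sketch offline — a real pipeline would load `weights="imagenet"`):

    ```python
    import tensorflow as tf

    NUM_USERS = 5  # hypothetical number of registered faces

    # Pre-trained MobileNetV2 backbone; its layers stay frozen so only
    # the new classification head is trained ("final layer" retraining).
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3),
        include_top=False,   # drop the original 1000-class ImageNet head
        weights=None,        # use weights="imagenet" in a real pipeline
        pooling="avg",
    )
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_USERS, activation="softmax"),  # one unit per user
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit(face_images, user_ids, ...) would then run on the
    # server, never on the phone.
    print(model.output_shape)
    ```

    Because only the small dense head is trainable, retraining when a new user registers is far cheaper than training the whole network.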

    The end-to-end scenario for your app looks like this: a user registers their face by taking a few photos, which are sent to your server. You then retrain the network each time a new face is added and download the updated model into your app. From there, things are easy: take a photo of the user and hope that their face is classified correctly.
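
    The "download the model and classify" half of that pipeline could be sketched like this (a sketch, assuming TensorFlow 2.x; the tiny stand-in model, the 3-user head, and the random photo are all placeholders for the real retrained network, and on an actual Android device the same steps would go through the TFLite Interpreter Java/Kotlin API rather than Python):

    ```python
    import numpy as np
    import tensorflow as tf

    # Stand-in for the server-trained network (a real app would download
    # the retrained MobileNet-based model instead).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(160, 160, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),  # 3 hypothetical users
    ])

    # Server side: convert to the TFLite flat buffer that ships to the device.
    tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

    # Device side: load the buffer and classify one photo.
    interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    photo = np.random.rand(1, 160, 160, 3).astype(np.float32)
    interpreter.set_tensor(inp["index"], photo)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    print(int(np.argmax(probs)))  # index of the most likely user
    ```

    The conversion happens once per retraining on the server; the device only ever runs the lightweight interpreter step.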

    You may also want to have a look at some of the TensorFlow codelabs, which show you how to train a model and run it on Android.