ios · amazon-web-services · swift3 · aws-sdk · amazon-rekognition

How to use AWS Rekognition to detect Image Labels and Faces in Swift 3


I've been trying to use the AWSRekognition SDK to detect faces and labels in images. However, Amazon has no documentation on how to integrate their SDK with iOS. They have links that show how to work with Rekognition (the Developer Guide), but the examples are only in Java and very limited.

Amazon Rekognition Developer Guide

If you click on their "iOS Documentation" link, you are taken to the general iOS documentation page, which makes no mention of Rekognition in any section.

AWS iOS Developer Guide

I wanted to know if anyone has managed to integrate AWS Rekognition in Swift 3: how to initialize it, make a request with an image, and receive a response with the labels.

I already downloaded AWSRekognition.framework and AWSCore.framework and added them to my project. I have also imported both of them in my AppDelegate.swift and initialized my AWS credentials.

let credentialsProvider = AWSCognitoCredentialsProvider(
        regionType: AWSRegionType.usEast1,
        identityPoolId: "us-east-1_myPoolID")
let configuration = AWSServiceConfiguration(
        region: AWSRegionType.usEast1,
        credentialsProvider: credentialsProvider)
AWSServiceManager.default().defaultServiceConfiguration = configuration

Also, I've tried to initialize Rekognition and build a Request:

do {
    
    let rekognitionClient:AWSRekognition = AWSRekognition(forKey: "Maybe a Key from AWS?")

    let request: AWSRekognitionDetectLabelsRequest = try AWSRekognitionDetectLabelsRequest(dictionary: ["image": UIImage(named:"TestImage")!, "maxLabels":3, "minConfidence":90], error: (print("error")))
    rekognitionClient.detectLabels(request) { (response:AWSRekognitionDetectLabelsResponse?, error:Error?) in
        if error == nil {
            print(response!)
        }
    }

} catch {
    print("Error")
}

Thanks a lot!


Solution

  • The documentation on the web for the Rekognition iOS SDK is lacking, but the comments in the SDK code were pretty helpful for me. If you Cmd-click on a keyword in Xcode, you should be able to find all the info you need in the comments.

    From this you can see that the key refers to a previously registered client, which you can set up with registerRekognitionWithConfiguration, but you can skip all that by using the default client, as Karthik mentioned:

    let rekognitionClient = AWSRekognition.defaultRekognition()
    
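    If you do want a separately configured client, the register-then-fetch pattern looks roughly like this. This is a sketch: the key string "USWest2Rekognition", the region, and the pool ID are my own illustrative values, not from the question.

    ```swift
    import AWSCore
    import AWSRekognition

    // Build a configuration for this particular client (hypothetical values).
    let credentialsProvider = AWSCognitoCredentialsProvider(
            regionType: .usWest2,
            identityPoolId: "us-west-2_myPoolID")
    let configuration = AWSServiceConfiguration(
            region: .usWest2,
            credentialsProvider: credentialsProvider)

    // Register the client once under a key of your choosing (e.g. in AppDelegate)...
    AWSRekognition.register(with: configuration!, forKey: "USWest2Rekognition")

    // ...then fetch that same instance anywhere by the key. This is what
    // AWSRekognition(forKey:) expects: a key you registered yourself.
    let rekognitionClient = AWSRekognition(forKey: "USWest2Rekognition")
    ```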

    I have been working with face detection, so I haven't used AWSRekognitionDetectLabelsRequest in my own code, but I think where you might be going wrong is that the image property of AWSRekognitionDetectLabelsRequest should be an AWSRekognitionImage, not the UIImage you are passing in. You can call UIImageJPEGRepresentation to get the raw JPEG bytes from a UIImage.

    let sourceImage = UIImage(named: "TestImage")
    
    let image = AWSRekognitionImage()
    image!.bytes = UIImageJPEGRepresentation(sourceImage!, 0.7)
    
    guard let request = AWSRekognitionDetectLabelsRequest() else {
        print("Unable to initialize AWSRekognitionDetectLabelsRequest.")
        return
    }
    
    request.image = image
    request.maxLabels = 3
    request.minConfidence = 90
    

    It should also be a lot easier to debug if you set the request properties individually like this.
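    Putting it together, sending the request and reading the labels back might look like this. A sketch only: the completion-handler shape matches what the question already uses, but the logging and label formatting are my own.

    ```swift
    import AWSRekognition

    let rekognitionClient = AWSRekognition.defaultRekognition()

    rekognitionClient.detectLabels(request) { (response, error) in
        if let error = error {
            print("DetectLabels failed: \(error)")
            return
        }
        // Each AWSRekognitionLabel carries an optional name and
        // an optional confidence score (NSNumber).
        for label in response?.labels ?? [] {
            print("\(label.name ?? "?") (\(label.confidence ?? 0))")
        }
    }
    ```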