Tags: ios, accessibility, voice, audio

How to programmatically use iOS voice synthesizers? (text to speech)


iOS devices have embedded voice synthesizers for Accessibility's VoiceOver feature. Is there a way to use these synthesizers programmatically to generate speech from text?

My problem is: I'm working on a simple app for kids to learn colors, and rather than recording the names of the colors in each language I want to support and storing them as audio files, I'd rather generate the sounds at runtime with a text-to-speech feature.

Thanks

[EDIT: this question was asked pre-iOS7 so you should really consider the voted answer and ignore older ones, unless you're a software archeologist]


Solution

  • Starting with iOS 7, Apple provides this API as part of AVFoundation: AVSpeechSynthesizer together with AVSpeechUtterance.

    Objective-C

    #import <AVFoundation/AVFoundation.h>
    …
    AVSpeechUtterance *utterance = [AVSpeechUtterance 
                                speechUtteranceWithString:@"Hello World!"];
    AVSpeechSynthesizer *synth = [[AVSpeechSynthesizer alloc] init];
    [synth speakUtterance:utterance];
    

    Swift

    import AVFoundation
    …
    let utterance = AVSpeechUtterance(string: "Hello World!")
    let synth = AVSpeechSynthesizer()
    synth.speak(utterance)  // speakUtterance(_:) was renamed speak(_:) in Swift 3
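
    For the multi-language color-names use case from the question, you can also pick a voice per language with `AVSpeechSynthesisVoice(language:)`, which takes a BCP-47 code like `"en-US"` or `"es-ES"`. A sketch (the `speakColorName` helper is my own name, not part of the API):

    import AVFoundation

    // Speaks a word in the requested language. If no voice is installed
    // for that language code, the voice stays nil and the system default
    // voice is used instead.
    func speakColorName(_ name: String, languageCode: String,
                        using synth: AVSpeechSynthesizer) {
        let utterance = AVSpeechUtterance(string: name)
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        // Slightly slower than the default rate, which can help young listeners.
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9
        synth.speak(utterance)
    }

    // Keep a strong reference to the synthesizer: a local that goes out
    // of scope can be deallocated before the speech finishes.
    let synth = AVSpeechSynthesizer()
    speakColorName("rojo", languageCode: "es-ES", using: synth)

    Note that speech is asynchronous; if you need to react when a word finishes (e.g. to highlight the next color), set the synthesizer's `delegate` and implement `speechSynthesizer(_:didFinish:)`.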