Is it possible to use the speech recognition that Apple uses for Siri and Dictation in my own iOS app?
If it is possible, how would I do it?
If Apple uses a third party to transcribe audio files, what is it, and does it have an API?
If Apple does its own transcribing and I can't use it, then... well, that's too bad.
All responses are greatly appreciated!
After the user taps the microphone button on a system-provided keyboard and speaks, you can access the result by implementing the dictation-related methods of the `UITextInput` protocol. The iOS SDK doesn't provide any other access to the speech recognition system, and you can't feed it your own audio file for interpretation.
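As a minimal sketch of what that looks like: `DictationAwareTextView` is a hypothetical name, and I'm assuming a `UITextView` subclass (which already conforms to `UITextInput`). UIKit calls these optional protocol methods around the dictation lifecycle:

```swift
import UIKit

// Hypothetical subclass; UITextView already conforms to UITextInput,
// so UIKit will call these optional methods during keyboard dictation.
class DictationAwareTextView: UITextView {

    // Called with the recognized phrases once dictation completes.
    // Supplying our own implementation means we insert the text ourselves.
    @objc func insertDictationResult(_ dictationResult: [UIDictationPhrase]) {
        for phrase in dictationResult {
            // `text` is the most likely interpretation; other candidates,
            // if any, are in `alternativeInterpretations`.
            insertText(phrase.text)
        }
    }

    // Called when the microphone stops recording, before results arrive.
    @objc func dictationRecordingDidEnd() {
        // e.g. show a "transcribing" spinner
    }

    // Called if recognition fails (e.g. no network connection).
    @objc func dictationRecognitionFailed() {
        // e.g. tell the user dictation didn't go through
    }
}
```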
The OS X SDK provides an `NSSpeechRecognizer` class. Maybe someday that will come to iOS.
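For reference, `NSSpeechRecognizer` listens for a fixed set of commands you supply rather than doing free-form transcription. A rough sketch (the command list here is made up):

```swift
import AppKit

// Listens for a fixed vocabulary of spoken commands, not open dictation.
final class CommandListener: NSObject, NSSpeechRecognizerDelegate {
    // The initializer is failable; it returns nil if recognition is unavailable.
    private let recognizer = NSSpeechRecognizer()

    override init() {
        super.init()
        recognizer?.commands = ["Play", "Pause", "Stop"]  // made-up command list
        recognizer?.delegate = self
        recognizer?.listensInForegroundOnly = true
        recognizer?.startListening()
    }

    // Called whenever one of the commands above is recognized.
    func speechRecognizer(_ sender: NSSpeechRecognizer,
                          didRecognizeCommand command: String) {
        print("Heard: \(command)")
    }
}
```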