ios · speech-recognition · openears

How to detect a phrase?


I am implementing speech-to-text in my app with OpenEars. I am also using the Rejecto plugin to make the recognition better and RapidEars for faster results. The goal is to detect phrases as well as single words, for example:

    lmGenerator = [[LanguageModelGenerator alloc] init];

    NSArray *words = [NSArray arrayWithObjects:@"REBETANDEAL",@"NEWBET",@"REEEBET", nil];
    NSString *name = @"NameIWantForMyLanguageModelFiles";
    NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                         withFilesNamed:name
                                                 withOptionalExclusions:nil
                                                        usingVowelsOnly:FALSE
                                                             withWeight:nil
                                                 forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish Rejecto model.

    NSDictionary *languageGeneratorResults = nil;

    NSString *lmPath = nil;
    NSString *dicPath = nil;

    if ([err code] == noErr) {

        languageGeneratorResults = [err userInfo];

        lmPath = [languageGeneratorResults objectForKey:@"LMPath"];
        dicPath = [languageGeneratorResults objectForKey:@"DictionaryPath"];

    } else {
        NSLog(@"Error: %@",[err localizedDescription]);
    }





    [self.pocketsphinxController setRapidEarsToVerbose:FALSE]; // This defaults to FALSE but will give a lot of debug readout if set TRUE
    [self.pocketsphinxController setRapidEarsAccuracy:10]; // This defaults to 20, maximum accuracy, but can be set as low as 1 to save CPU
    [self.pocketsphinxController setFinalizeHypothesis:TRUE]; // This defaults to TRUE and will return a final hypothesis, but can be turned off to save a little CPU and will then return no final hypothesis; only partial "live" hypotheses.
    [self.pocketsphinxController setFasterPartials:TRUE]; // This will give faster rapid recognition with less accuracy. This is what you want in most cases since more accuracy for partial hypotheses will have a delay.
    [self.pocketsphinxController setFasterFinals:FALSE]; // This will give an accurate final recognition. You can have earlier final recognitions with less accuracy as well by setting this to TRUE.
    [self.pocketsphinxController startRealtimeListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]]; // Starts the rapid recognition loop. Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to perform Spanish language recognition.

    [self.openEarsEventsObserver setDelegate:self];

Most of the time the result is fine, but sometimes it produces a mix of the separate string objects. For example, if I pass the words array @[@"ME AND YOU",@"YOU",@"ME"], the output can be "YOU ME ME ME AND". I don't want it to recognize only part of a phrase. Any ideas, please?


Solution

  • In pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID you can check whether the hypothesis is in your words array before showing it; a fuller sketch follows the snippet below.

    - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        if ([words containsObject:hypothesis]) {
            // show hypothesis
        }
    }
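
    The words array in the question is a local variable, so for the check above it needs to be reachable from the delegate callback. A minimal sketch, assuming the accepted strings are kept in a property (the phrases name and the buildLanguageModel method are illustrative, not part of OpenEars):

        // Illustrative: expose the accepted strings to the delegate callback, e.g.
        // @property (nonatomic, strong) NSArray *phrases;

        - (void)buildLanguageModel {
            self.phrases = @[@"ME AND YOU", @"YOU", @"ME"]; // same strings passed to the Rejecto generator
            // ... generateRejectingLanguageModelFromArray:self.phrases ... as in the question
        }

        - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                                recognitionScore:(NSString *)recognitionScore
                                     utteranceID:(NSString *)utteranceID {
            // Accept only hypotheses that exactly match one of the target phrases;
            // mixed or partial combinations such as "YOU ME ME ME AND" are dropped.
            if ([self.phrases containsObject:hypothesis]) {
                NSLog(@"Recognized: %@", hypothesis);
            }
        }

    If the hypothesis can differ in case or surrounding whitespace, normalizing it first (for example with uppercaseString and stringByTrimmingCharactersInSet:) before the containsObject: check makes the comparison more forgiving.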