We are working on a project in which users' answers are saved as .wav files and evaluated afterwards. We have created a grammar for each question. There are two questions with which we are having a recognition problem. The problems are probably related, since the user must speak for roughly 7-8 seconds to answer either of them.
This is the grammar file that we are using for one of the questions:
#JSGF V1.0;
grammar Question8;
public <Question8> = ( one hundred | ninety three | eighty six | seventy nine | seventy two | sixty five )*;
Here the user must count backwards by 7s from one hundred. It recognizes fine if I speak quickly. But when I speak slowly, for instance saying "one hundred", waiting for about a second, and carrying on like that down to "sixty five", it only recognizes "one hundred" and none of the following words.
Two main parts are responsible for these processes:
The class that we created for the microphone:
public final class SpeechRecorder {
    static Configuration configuration = new Configuration();
    static Microphone mic = new Microphone(16000, 16, 1, true, true, false, 10, true, "average", 0, "default", 6400);

    public static void startMic() {
        mic.initialize();
        mic.startRecording();
        System.out.println("Audio format is " + mic.getAudioFormat());
    }

    public static void stopMic(String questionName) {
        mic.stopRecording();
        Utterance u = mic.getUtterance();
        try {
            u.save("Resources/Answers/" + questionName + ".wav", AudioFileFormat.Type.WAVE);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getAnswersOfSpeech(String question) throws IOException {
        Evaluation.disableLogMessages();
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setGrammarPath("resource:/Grammer");
        configuration.setGrammarName(question);
        configuration.setUseGrammar(true);
        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        recognizer.startRecognition(new FileInputStream("Resources/Answers/" + question + ".wav"));
        SpeechResult result = recognizer.getResult();
        return result.getHypothesis();
    }

    public static String getSavedAnswer(int question) {
        return User.getAnswers(question);
    }
}
This is where we save the user's answer as a .wav file into our resources:
btn_microphone.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        click++;
        if (click % 2 == 1) {
            SpeechRecorder.startMic();
            btn_microphone.setIcon(new ImageIcon("Resources/Images/record.png"));
        } else {
            SpeechRecorder.stopMic("Question" + Integer.toString(question));
            btn_Next.setVisible(true);
            btn_microphone.setIcon(new ImageIcon("Resources/Images/microphone.png"));
            lbl_speechAnswer.setVisible(true);
            try {
                userAnswer = SpeechRecorder.getAnswersOfSpeech("Question" + Integer.toString(question));
            } catch (IOException e1) {
                e1.printStackTrace();
            }
            if (userAnswer.equals("")) {
                lbl_speechAnswer.setText(
                        "<html>No answer was given, click on the microphone button to record again</html>");
            } else {
                lbl_speechAnswer.setText("<html>Your answer is " + userAnswer
                        + ", click on the microphone button to record again</html>");
            }
        }
    }
});
I don't know how we can overcome this problem. I would be grateful if anyone could help.
You need a loop to collect all of the results, as in the transcriber demo:
SpeechResult result;
while ((result = recognizer.getResult()) != null) {
    System.out.format("Hypothesis: %s\n", result.getHypothesis());
}
recognizer.stopRecognition();
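Applied to your `getAnswersOfSpeech`, that would look roughly like this (a sketch assuming the same `Configuration` setup as in your class; joining the per-utterance hypotheses with spaces is an illustrative choice):

```java
// Sketch: drain every result from the recognizer instead of taking only
// the first one. Each pause in the audio ends one utterance, and
// getResult() returns one result per utterance, then null when the
// stream is exhausted - which is why a slow, pausing speaker produced
// only "one hundred" before.
public static String getAnswersOfSpeech(String question) throws IOException {
    Evaluation.disableLogMessages();
    configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
    configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
    configuration.setGrammarPath("resource:/Grammer");
    configuration.setGrammarName(question);
    configuration.setUseGrammar(true);

    StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
    recognizer.startRecognition(new FileInputStream("Resources/Answers/" + question + ".wav"));

    StringBuilder hypothesis = new StringBuilder();
    SpeechResult result;
    while ((result = recognizer.getResult()) != null) {
        if (hypothesis.length() > 0) {
            hypothesis.append(' ');
        }
        hypothesis.append(result.getHypothesis());
    }
    recognizer.stopRecognition();
    return hypothesis.toString();
}
```

With this version, speaking slowly simply yields several utterances whose hypotheses are concatenated, so the full count down to "sixty five" ends up in the returned string.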