Tags: android, android-studio, huawei-mobile-services, huawei-developers, huawei-ml-kit

Huawei ML Kit Text to Speech Conversion Error


I'm working on a Translator application where I need to speak out what the user has translated. Following the Huawei Text to Speech documentation, I'm getting this error:

onError: MLTtsError{errorId=11301, errorMsg='The speaker is not supported. ', extension=7002}

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_speak_and_translate);

        showInterstitialAd();
        MLApplication.getInstance().setApiKey("Your Key");

        // deviceManufacture is populated elsewhere, e.g. from Build.MANUFACTURER.
        if (deviceManufacture.equalsIgnoreCase("Huawei")) {
            setUpHuaweiTts();
        }
    }
    private void setUpHuaweiTts() {
        mlTtsConfig = new MLTtsConfig()
                // Set the language of the text to be synthesized.
                // MLTtsConstants.TTS_EN_US: English.
                // MLTtsConstants.TTS_ZH_HANS: Chinese.
                .setLanguage(MLTtsConstants.TTS_EN_US)
                // Set the timbre.
                // MLTtsConstants.TTS_SPEAKER_FEMALE_EN: English female voice.
                // MLTtsConstants.TTS_SPEAKER_FEMALE_ZH: Chinese female voice.
                .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
                // Set the speech speed. Range: 0.2–1.8. 1.0 indicates 1x speed.
                .setSpeed(1.0f)
                // Set the volume. Range: 0.2–1.8. 1.0 indicates 1x volume.
                .setVolume(1.0f);
        mlTtsEngine = new MLTtsEngine(mlTtsConfig);
        mlTtsEngine.setTtsCallback(new MLTtsCallback() {
            @Override
            public void onError(String s, MLTtsError mlTtsError) {
                Log.d(TAG, "onError: "+ mlTtsError);
            }

            @Override
            public void onWarn(String s, MLTtsWarn mlTtsWarn) {
                Log.d(TAG, "onWarn: ");
            }

            @Override
            public void onRangeStart(String s, int i, int i1) {
                Log.d(TAG, "onRangeStart: ");
            }

            @Override
            public void onAudioAvailable(String s, MLTtsAudioFragment mlTtsAudioFragment, int i, Pair<Integer, Integer> pair, Bundle bundle) {
                Log.d(TAG, "onAudioAvailable: ");
            }

            @Override
            public void onEvent(String s, int i, Bundle bundle) {
                // Callback method of a TTS event. eventId indicates the event name.
                switch (i) {
                    case MLTtsConstants.EVENT_PLAY_START:
                        Log.d(TAG, "onEvent: Play");
                        // Called when playback starts.
                        break;
                    case MLTtsConstants.EVENT_PLAY_STOP:
                        // Called when playback stops.
                        boolean isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED);
                        Log.d(TAG, "onEvent: Stop");
                        break;
                    case MLTtsConstants.EVENT_PLAY_RESUME:
                        // Called when playback resumes.
                        Log.d(TAG, "onEvent: Resume");      
                        break;
                    case MLTtsConstants.EVENT_PLAY_PAUSE:
                        // Called when playback pauses.
                        Log.d(TAG, "onEvent: Pause");
                        break;

                    // Pay attention to the following callback events when you focus on only synthesized audio data but do not use the internal player for playback:
                    case MLTtsConstants.EVENT_SYNTHESIS_START:
                        // Called when TTS starts.
                        Log.d(TAG, "onEvent: SynStart");
                        break;
                    case MLTtsConstants.EVENT_SYNTHESIS_END:
                        // Called when TTS ends.
                        Log.d(TAG, "onEvent: SynEnd");
                        break;
                    case MLTtsConstants.EVENT_SYNTHESIS_COMPLETE:
                        // TTS is complete. All synthesized audio streams are passed to the app.
                        boolean isInterruptedCheck = bundle.getBoolean(MLTtsConstants.EVENT_SYNTHESIS_INTERRUPTED);
                        Log.d(TAG, "onEvent: SynComplete");
                        break;
                    default:
                        break;
                }
            }
        });
       mlTtsEngine.speak("test", MLTtsEngine.QUEUE_APPEND);
    }

Currently, I'm just passing the string "test" for testing purposes; eventually I have to take the text from the model and pass it for speaking. I couldn't find anything in the documentation about this speaker error, so I searched for the error code on Huawei's site:

public static final int ERR_ILLEGAL_PARAMETER

Invalid parameter.

Constant value: 11301

LogCat: Debug (screenshot)

LogCat: Error (screenshot)

Any help would be really appreciated. Thanks.


Solution

  • I had set the wrong speaker (person) for the English language. Replacing this line

    .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
    

    with

    .setPerson(MLTtsConstants.TTS_SPEAKER_MALE_EN)
    

    works perfectly fine, because the configured speaker must belong to the configured language.
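
The fix generalizes: error 11301 (`ERR_ILLEGAL_PARAMETER`) appears whenever the speaker constant does not belong to the configured language. A small guard that falls back to a language-compatible speaker can prevent the mismatch up front. This is only a sketch: the map below pairs constant *names* that mirror `MLTtsConstants` values, but the pairing itself is my own illustration, not the SDK's full speaker list.

```java
import java.util.Map;

public class TtsSpeakerGuard {
    // Illustrative language-to-default-speaker pairing. The string values
    // mirror MLTtsConstants names; the mapping is an assumption for
    // demonstration, not the complete set the SDK supports.
    private static final Map<String, String> DEFAULT_SPEAKER = Map.of(
            "TTS_EN_US", "TTS_SPEAKER_FEMALE_EN",
            "TTS_ZH_HANS", "TTS_SPEAKER_FEMALE_ZH"
    );

    // Returns a speaker guaranteed to match the language, falling back to
    // the language's default when the requested speaker is incompatible.
    public static String speakerFor(String language, String requested) {
        String fallback = DEFAULT_SPEAKER.get(language);
        if (fallback == null) {
            throw new IllegalArgumentException("Unsupported language: " + language);
        }
        // Crude compatibility check: the speaker constant must end with the
        // same locale suffix as the language's default ("_EN", "_ZH", ...).
        String suffix = fallback.substring(fallback.lastIndexOf('_'));
        return requested != null && requested.endsWith(suffix) ? requested : fallback;
    }

    public static void main(String[] args) {
        // The mismatch from the question: English language, Chinese speaker.
        // The guard silently substitutes a compatible English voice.
        System.out.println(speakerFor("TTS_EN_US", "TTS_SPEAKER_FEMALE_ZH"));
    }
}
```

In the activity above, the result of such a check would feed `setPerson(...)`, so a Chinese speaker can never be paired with `TTS_EN_US` at runtime.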