Tags: ios, swift, cmusphinx, pocketsphinx

How to run wake word detection with PocketSphinx on iOS?


I am trying to run wake word detection with PocketSphinx on iOS. As a base I used TLSphinx, and speech to text works (the recognition quality is poor, but it does recognize words).

I extended decoder.swift with a new function:

public func detectWakeWord(complete: @escaping (Bool?) -> ()) throws {

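    // Register a keyword-spotting search named "keyphrase_search" for the
    // phrase "ZWEI" and make it the decoder's active search mode.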
    ps_set_keyphrase(psDecoder, "keyphrase_search", "ZWEI")
    ps_set_search(psDecoder, "keyphrase_search")
            
    do {
      if #available(iOS 10.0, *) {
          try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .voiceChat, options: [])
      } else {
          try AVAudioSession.sharedInstance().setCategory(.playAndRecord)
      }
    } catch let error as NSError {
        print("Error setting the shared AVAudioSession: \(error)")
        throw DecodeErrors.CantSetAudioSession(error)
    }

    engine = AVAudioEngine()

    let input = engine.inputNode
    let mixer = AVAudioMixerNode()
    let output = engine.outputNode
    engine.attach(mixer)
    engine.connect(input, to: mixer, format: input.outputFormat(forBus: 0))
    engine.connect(mixer, to: output, format: input.outputFormat(forBus: 0))

    // We force-unwrap this because the docs for AVAudioFormat specify that this
    // initializer returns nil only when the channel count is greater than 2.
    let formatIn = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)!
    let formatOut = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 16000, channels: 1, interleaved: false)!
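    // PocketSphinx's standard acoustic models expect 16 kHz, 16-bit, mono PCM,
    // hence the conversion from the 44.1 kHz Float32 input format.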
    guard let bufferMapper = AVAudioConverter(from: formatIn, to: formatOut) else {
        // Returns nil if the format conversion is not possible.
        throw DecodeErrors.CantConvertAudioFormat
    }

    mixer.installTap(onBus: 0, bufferSize: 2048, format: formatIn, block: {
        [unowned self] (buffer: AVAudioPCMBuffer, time: AVAudioTime) in

        guard let sphinxBuffer = AVAudioPCMBuffer(pcmFormat: formatOut, frameCapacity: buffer.frameCapacity) else {
            // Returns nil in the following cases:
            //    - if the format has zero bytes per frame (format.streamDescription->mBytesPerFrame == 0)
            //    - if the buffer byte capacity (frameCapacity * format.streamDescription->mBytesPerFrame)
            //    cannot be represented by a uint32_t
            print("Can't create PCM buffer")
            return
        }

        // This is needed because the 'frameLength' default value is 0 (since iOS 10), which causes
        // the 'convert' call to fail with an error (Error Domain=NSOSStatusErrorDomain Code=-50 "(null)")
        // More here: http://stackoverflow.com/questions/39714244/avaudioconverter-is-broken-in-ios-10
        sphinxBuffer.frameLength = sphinxBuffer.frameCapacity

        var error : NSError?
        let inputBlock : AVAudioConverterInputBlock = {
              inNumPackets, outStatus in
              outStatus.pointee = AVAudioConverterInputStatus.haveData
              return buffer
          }
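        // Note: this input block returns the same buffer and reports .haveData on
        // every call. If the converter asks for input more than once per convert()
        // (likely when downsampling), it will consume the same samples again.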
        bufferMapper.convert(to: sphinxBuffer, error: &error, withInputFrom: inputBlock)
        if let error = error {
            print("Conversion error: \(error)")
        }
      
        let audioData = sphinxBuffer.toData()
        self.process_raw(audioData)

        print("Process: \(buffer.frameLength) frames - \(audioData.count) bytes - sample time: \(time.sampleTime)")

        self.end_utt()
        
        let hypothesis = self.get_hyp()
          
        print("HYPOTHESIS: ", hypothesis)

        DispatchQueue.main.async {
          complete(hypothesis != nil)
        }
      
        self.start_utt()
    })

    start_utt()

    do {
        try engine.start()
    } catch let error as NSError {
        end_utt()
        print("Can't start AVAudioEngine: \(error)")
        throw DecodeErrors.CantStartAudioEngine(error)
    }
  }

There are no errors, but the hypothesis is always nil. My dictionary maps everything to "ZWEI", so the wake word should be detected if anything is recognized at all:

ZWEI AH P Z EH TS B AAH EX
ZWEI(2) HH IH T
ZWEI(3) F EH EX Q OE F EH N T L IH CC T
ZWEI(4) G AX V AH EX T AX T
...
ZWEI(12113) N AY NZWO B IIH T AX N

Does anyone know why the hypothesis is always nil?


Solution

  • I had to call self.get_hyp() before self.end_utt().

    I'm not sure why, but the calling order is the reverse of the one used for speech to text (see the sketch at the end of this answer).

    Edit

    Another tip: for better wake word detection quality, increase the buffer size of the microphone input tap. E.g.:

    mixer.installTap(onBus: 0, bufferSize: 8192, format: formatIn, block: [...]
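
    Putting both changes together, the end of the tap block from the question would look roughly like this (a sketch, untested, reusing the question's process_raw, get_hyp, end_utt and start_utt helpers):

        let audioData = sphinxBuffer.toData()
        self.process_raw(audioData)

        // Read the hypothesis BEFORE closing the utterance.
        let hypothesis = self.get_hyp()
        self.end_utt()

        DispatchQueue.main.async {
            complete(hypothesis != nil)
        }

        // Start a fresh utterance for the next tap callback.
        self.start_utt()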