Tags: swift, avfoundation, core-audio, avassetwriter, avassetreader

After compressing my audio file, why can I not play the file?


Audio file will not play after compressing it with AVAssetReader/AVAssetWriter

At the moment the whole function executes fine, with no errors thrown. For some reason, when I go into the simulator's Documents directory via Terminal, the audio file will not play in iTunes, and QuickTime refuses to open it with the error "QuickTime Player can't open "test1.m4a"".

Does anyone specialise in this area and understand why this isn't working?

import AVFoundation

protocol FileConverterDelegate {
  func fileConversionCompleted()
}

class WKAudioTools: NSObject {

  var delegate: FileConverterDelegate?

  var url: URL?
  var assetReader: AVAssetReader?
  var assetWriter: AVAssetWriter?

  func convertAudio() {

    let documentDirectory = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    let exportURL = documentDirectory.appendingPathComponent(Assets.soundName1).appendingPathExtension("m4a")

    url = Bundle.main.url(forResource: Assets.soundName1, withExtension: Assets.mp3)

    guard let assetURL = url else { return }
    let asset = AVAsset(url: assetURL)

    //reader
    do {
      assetReader = try AVAssetReader(asset: asset)
    } catch let error {
      print("Error with reading >> \(error.localizedDescription)")
    }

    let assetReaderOutput = AVAssetReaderAudioMixOutput(audioTracks: asset.tracks, audioSettings: nil)
    //let assetReaderOutput = AVAssetReaderTrackOutput(track: track!, outputSettings: nil)

    guard let assetReader = assetReader else {
      print("reader is nil")
      return
    }

    if assetReader.canAdd(assetReaderOutput) == false {
      print("Can't add output to the reader ☹️")
      return
    }

    assetReader.add(assetReaderOutput)

    // writer
    do {
      assetWriter = try AVAssetWriter(outputURL: exportURL, fileType: .m4a)
    } catch let error {
      print("Error with writing >> \(error.localizedDescription)")
    }

    var channelLayout = AudioChannelLayout()

    memset(&channelLayout, 0, MemoryLayout.size(ofValue: channelLayout))
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

    // use different values to affect the downsampling/compression
    let outputSettings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                         AVSampleRateKey: 44100.0,
                                         AVNumberOfChannelsKey: 2,
                                         AVEncoderBitRateKey: 128000,
                                         AVChannelLayoutKey: NSData(bytes: &channelLayout, length:  MemoryLayout.size(ofValue: channelLayout))]

    let assetWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: outputSettings)

    guard let assetWriter = assetWriter else { return }

    if assetWriter.canAdd(assetWriterInput) == false {
      print("Can't add asset writer input ☹️")
      return
    }

    assetWriter.add(assetWriterInput)
    assetWriterInput.expectsMediaDataInRealTime = false

    // MARK: - File conversion
    assetWriter.startWriting()
    assetReader.startReading()

    let audioTrack = asset.tracks[0]

    let startTime = CMTime(seconds: 0, preferredTimescale: audioTrack.naturalTimeScale)

    assetWriter.startSession(atSourceTime: startTime)

    // We need to do this on another thread, so let's set up a dispatch group...
    var convertedByteCount = 0
    let dispatchGroup = DispatchGroup()

    let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
    //... and go
    dispatchGroup.enter()
    assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
      while assetWriterInput.isReadyForMoreMediaData {
        let nextBuffer = assetReaderOutput.copyNextSampleBuffer()

        if nextBuffer != nil {
          assetWriterInput.append(nextBuffer!)  // FIXME: Handle this safely
          convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer!)
        } else {
          // done!
          assetWriterInput.markAsFinished()
          assetReader.cancelReading()
          dispatchGroup.leave()

          DispatchQueue.main.async {
            // Notify delegate that conversion is complete
            self.delegate?.fileConversionCompleted()
            print("Process complete 🎧")

            if assetWriter.status == .failed {
              print("Writing asset failed ☹️ Error: ", assetWriter.error)
            }
          }
          break
        }
      }
    }
  }
}

Solution

  • You need to call finishWriting on your AVAssetWriter to get the output completely written:

    assetWriter.finishWriting {
        DispatchQueue.main.async {
            // Notify delegate that conversion is complete
            self.delegate?.fileConversionCompleted()
            print("Process complete 🎧")
    
            if assetWriter.status == .failed {
                print("Writing asset failed ☹️ Error: ", assetWriter.error)
            }
        }
    }
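
    To make that change concrete, here is a sketch of how the requestMediaDataWhenReady block from the question might look with finishWriting folded in (same variable names as the question's code; the force-unwraps and the status check are also tidied up, which the original answer did not spell out):

    assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
      while assetWriterInput.isReadyForMoreMediaData {
        if let nextBuffer = assetReaderOutput.copyNextSampleBuffer() {
          assetWriterInput.append(nextBuffer)
          convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer)
        } else {
          // No more sample buffers: close the writer input, then let the
          // writer flush and finalize the file before reporting completion.
          assetWriterInput.markAsFinished()
          assetReader.cancelReading()
          dispatchGroup.leave()

          assetWriter.finishWriting {
            DispatchQueue.main.async {
              if assetWriter.status == .failed {
                print("Writing asset failed ☹️ Error: ", assetWriter.error as Any)
              } else {
                self.delegate?.fileConversionCompleted()
                print("Process complete 🎧")
              }
            }
          }
          break
        }
      }
    }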
    

    If exportURL already exists when you start the conversion, you should remove it first, otherwise the conversion will fail (AVAssetWriter will not overwrite an existing file):

    if FileManager.default.fileExists(atPath: exportURL.path) {
        try? FileManager.default.removeItem(at: exportURL)
    }
    

    As @matt points out, why deal with sample buffers at all when you could do the conversion far more simply with an AVAssetExportSession? And why convert one of your own bundled assets at runtime when you could ship it in the desired format in the first place?
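
    For reference, here is a minimal sketch of the AVAssetExportSession route, assuming the same bundled source file and the same exportURL as in the question (the function name and the print-based error handling are illustrative, not part of the original answer):

    import AVFoundation

    // Sketch: convert a bundled audio file to AAC/.m4a with AVAssetExportSession.
    // The session handles the reading, encoding and writing internally.
    func exportAudioSimply(from assetURL: URL, to exportURL: URL) {
      let asset = AVAsset(url: assetURL)

      // The Apple M4A preset produces an AAC-encoded .m4a file.
      guard let session = AVAssetExportSession(asset: asset,
                                               presetName: AVAssetExportPresetAppleM4A) else {
        print("Could not create export session ☹️")
        return
      }

      session.outputFileType = .m4a
      session.outputURL = exportURL

      session.exportAsynchronously {
        DispatchQueue.main.async {
          if session.status == .completed {
            print("Process complete 🎧")
          } else {
            print("Export failed ☹️ Error: ", session.error as Any)
          }
        }
      }
    }

    The preset does not let you tune the bit rate or sample rate the way the reader/writer output settings do, but for a straightforward mp3 → m4a conversion it avoids the sample-buffer loop entirely.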