I'm using the front-facing TrueDepth (IR) camera on an iPhone X to capture depth data.
I'm able to quite easily extract the depthData from the photo that I take, but that data appears to be limited to 8-bit values per data point, even though I believe I am asking for 32-bit precision. See below:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let depthData = photo.depthData {
        print("got some depth data")
        let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
}
The depthData object in the snippet above contains only 8-bit values. Is there greater precision available from the AVDepthData?
I haven't tried, so I'm not sure if you can get a "deeper" depth format out of the TrueDepth camera.
If you can, however, converting the depth data that comes out of the capture isn't the way to do it. depthData.converting(toDepthDataType:) is analogous to converting scalar types: if you have a value of type Float, the extra decimal places you gain by converting it to Double are all zeros — your existing measurement hasn't gained any precision.
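You can see the same effect with plain scalar types — a quick sketch (the variable names are just for illustration):

```swift
let measured: Float = 0.1        // Float stores ~7 decimal digits; 0.1 is already rounded
let widened = Double(measured)   // now 64-bit, but carries no new information

print(widened)                   // prints the Float's rounding error at Double width
print(widened == 0.1)            // false: widening didn't recover the true value
```

The same logic applies to depth buffers: converting an 8-bit disparity map to DepthFloat32 widens the storage, not the measurement.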
The way you specify depth capture formats is before capture: set your capture device's activeDepthDataFormat to one of the supportedDepthDataFormats compatible with its current activeFormat. The values you find in supportedDepthDataFormats tell you what types and precision of depth data your capture device is capable of recording.
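As a configuration sketch (the helper function name is hypothetical; whether a 32-bit float format is actually offered depends on the device and its active video format):

```swift
import AVFoundation

// Hypothetical helper: pick the highest-resolution DepthFloat32 format
// supported by the device's current activeFormat, if one exists.
func configureFloat32Depth(on device: AVCaptureDevice) throws {
    let float32Formats = device.activeFormat.supportedDepthDataFormats.filter {
        CMFormatDescriptionGetMediaSubType($0.formatDescription)
            == kCVPixelFormatType_DepthFloat32
    }
    guard let best = float32Formats.max(by: {
        CMVideoFormatDescriptionGetDimensions($0.formatDescription).width
            < CMVideoFormatDescriptionGetDimensions($1.formatDescription).width
    }) else {
        return // no float32 depth format available for this activeFormat
    }
    try device.lockForConfiguration()
    device.activeDepthDataFormat = best
    device.unlockForConfiguration()
}
```

Note that activeDepthDataFormat must be set between lockForConfiguration() and unlockForConfiguration(), and that changing the device's activeFormat resets the choice, so configure depth after choosing the video format.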