Core Video experts, I'm creating a custom video player for .mov files. I have the .mov parser working, and I'm trying to play video using the QTCoreVideo101 sample.
The problem is the display link's getFrameForTime callback: I don't understand how its time values can be used to find the correct frame.
The values contained in CVTimeStamp don't make any sense to me. Below is a sample of the values requested for a 1 second video. Can anyone explain how I use these values to find the correct frame in the .mov file?
First three requests - value of CVTimeStamp
videoTime: 489150134353920.000000 hostTime: 2026048145326080.000000 videoTimeScale: 241500000.000000 rateScalar: 1.000000 videoRefreshPeriod: 4028320.000000
videoTime: 489150201462784.000000 hostTime: 2026048279543808.000000 videoTimeScale: 241500000.000000 rateScalar: 0.999985 videoRefreshPeriod: 4028320.000000
videoTime: 489156643913728.000000 hostTime: 2026074988871680.000000 videoTimeScale: 241500000.000000 rateScalar: 1.000000 videoRefreshPeriod: 4028320.000000
CVTimeStamps are explained in the CVTimeStamp reference documentation. The videoTimeScale is the number of units a second is divided into, so for 30 fps video it would need to be at least 30 (though it could be any multiple of 30: 60, 120, 30000, etc.). The videoTime is the time, in that timescale, at which the current frame (or field) starts. So if your timescale is 30000 and you're on the 15th frame, your videoTimeScale would be 30000 and your videoTime would be 15000.
You can check that you've interpreted the value correctly by looking at the smpteTime field and seeing whether it matches what you expect. In the example above, it would be 0 hours, 0 minutes, 0 seconds, 15 frames (00:00:00:15).
Is there a reason why you can't just use the OS's built-in video decoding facilities?