image-processing, opticalflow, video-codecs

Why don't we use motion vector data from video in optical flow?


All the optical flow implementations I have seen in OpenCV treat a video as an array of frames and then run optical flow on each image. That involves slicing the image into N×N blocks and searching for a velocity vector for each block.
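
For reference, the usual frame-by-frame pattern looks like the minimal sketch below, using OpenCV's Farneback dense flow as a stand-in for the block-matching variants (the input file name is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow over the whole frame: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

cap.release()
```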

Although the motion vectors in a video codec can be misleading and do not necessarily encode true motion, why don't we use them to check which blocks are likely to contain motion, and then run optical flow only on those blocks? Shouldn't that speed up the process?
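
To make the idea concrete, here is a sketch of the proposed gating step. It assumes the decoder already hands out one (x, y, dx, dy) motion vector per block; that tuple format, the block size, and the threshold are all assumptions for illustration:

```python
import numpy as np
import cv2

BLOCK = 16  # assumed macroblock size


def active_block_mask(motion_vectors, frame_shape, thresh=1.0):
    """Flag blocks whose codec motion vector exceeds the threshold.

    motion_vectors: iterable of (x, y, dx, dy) per block; this tuple
    format is a hypothetical stand-in for whatever the decoder emits.
    """
    h, w = frame_shape[:2]
    mask = np.zeros(((h + BLOCK - 1) // BLOCK, (w + BLOCK - 1) // BLOCK), bool)
    for x, y, dx, dy in motion_vectors:
        if dx * dx + dy * dy >= thresh * thresh:
            mask[y // BLOCK, x // BLOCK] = True
    return mask


def flow_on_active_blocks(prev_gray, gray, mask):
    """Run Farneback flow only on flagged blocks, cropping each block
    with a one-block margin so the estimator has some context."""
    h, w = prev_gray.shape
    flow = np.zeros((h, w, 2), np.float32)
    for by, bx in zip(*np.nonzero(mask)):
        y0, x0 = by * BLOCK, bx * BLOCK
        y1, x1 = min(y0 + BLOCK, h), min(x0 + BLOCK, w)
        ya, xa = max(y0 - BLOCK, 0), max(x0 - BLOCK, 0)
        yb, xb = min(y1 + BLOCK, h), min(x1 + BLOCK, w)
        roi = cv2.calcOpticalFlowFarneback(
            prev_gray[ya:yb, xa:xb], gray[ya:yb, xa:xb], None,
            0.5, 2, 15, 3, 5, 1.2, 0)
        flow[y0:y1, x0:x1] = roi[y0 - ya:y1 - ya, x0 - xa:x1 - xa]
    return flow
```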


Solution

  • OpenCV is a general-purpose image processing framework. Its algorithms take in decoded frames, not compressed video.

    You can certainly write a video decoder that also hands the codec's displacement information to OpenCV, but that would be very codec-specific and is therefore out of scope for OpenCV itself. A sketch of what such a decoder could look like follows.
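
As one possible shape for such a codec-specific decoder: FFmpeg can attach motion-vector side data to decoded frames (`flags2 +export_mvs`), and the PyAV bindings expose that side data per frame. The option and attribute names below follow PyAV's side-data API but may vary between versions:

```python
import av  # PyAV, the Python bindings for FFmpeg's libav* libraries

container = av.open("video.mp4")  # placeholder input file
stream = container.streams.video[0]
# Ask the decoder to attach motion-vector side data to each frame.
stream.codec_context.options = {"flags2": "+export_mvs"}

for frame in container.decode(stream):
    mvs = frame.side_data.get("MOTION_VECTORS")
    if mvs is None:
        continue  # intra-coded frames, for example, carry no vectors
    for mv in mvs:
        # Each entry describes a w x h block displaced from
        # (src_x, src_y) to (dst_x, dst_y).
        print(mv.w, mv.h, mv.src_x, mv.src_y, mv.dst_x, mv.dst_y)
```

Keep in mind that these vectors are whatever the encoder found cheapest to encode, not ground-truth motion, so they still need the sanity check the question describes before being used to gate optical flow.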