By running ffmpeg -h encoder=apng, I get this:
APNG encoder AVOptions:
  -dpi   <int>   E..V..... Set image resolution (in dots per inch) (from 0 to 65536) (default 0)
  -dpm   <int>   E..V..... Set image resolution (in dots per meter) (from 0 to 65536) (default 0)
  -pred  <int>   E..V..... Prediction method (from 0 to 5) (default none)
     none        E..V.....
     sub         E..V.....
     up          E..V.....
     avg         E..V.....
     paeth       E..V.....
     mixed       E..V.....
What are the differences among these prediction methods specified with -pred?
I could not find any documentation on ffmpeg.org or anywhere else.
The PNG specification defines 5 filter types that are used to make the image data more compressible before it is actually compressed: none, sub, up, average, and Paeth. The idea behind each filter is to predict the current pixel from nearby pixels, and then store only how much to adjust that estimate by to get the true value. Each scanline of the image has a filter specified for it, and each filter works best in different circumstances. Filters don't affect the actual image data, only how it is stored.
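To make the arithmetic concrete, here is a rough Python sketch of what the five filters compute for each byte of a scanline, following the PNG specification; the function names and byte-list representation are just for illustration, not ffmpeg's actual code:

    def paeth_predictor(a, b, c):
        # Pick whichever neighbour (left, above, upper-left) is closest
        # to the linear estimate p = a + b - c.
        p = a + b - c
        pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
        if pa <= pb and pa <= pc:
            return a
        return b if pb <= pc else c

    def filter_scanline(method, cur, prev, bpp):
        # cur  -- bytes of the current scanline
        # prev -- bytes of the previous scanline (all zeros for the first line)
        # bpp  -- bytes per pixel, so "left" means the corresponding byte
        #         of the pixel one pixel to the left
        out = []
        for i, x in enumerate(cur):
            a = cur[i - bpp] if i >= bpp else 0    # left
            b = prev[i]                            # above
            c = prev[i - bpp] if i >= bpp else 0   # upper-left
            if method == "none":
                pred = 0
            elif method == "sub":
                pred = a
            elif method == "up":
                pred = b
            elif method == "avg":
                pred = (a + b) // 2
            else:  # "paeth"
                pred = paeth_predictor(a, b, c)
            out.append((x - pred) & 0xFF)          # store the residual mod 256
        return out

The decoder can undo any of these exactly, because by the time it reaches a given byte it has already reconstructed the bytes to its left and on the line above.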
mixed isn't a filter, but instead tells ffmpeg to choose the best filter for every line. This is also called dynamic filtering. It makes encoding slower, since 5 different filters need to be tried for every line, but it can result in better compression. The best filter is found using the minimum sum of absolute differences method.
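As a rough illustration of that heuristic (the approach suggested in the PNG specification, not necessarily ffmpeg's exact implementation), building on the filter_scanline sketch above:

    def choose_filter(cur, prev, bpp):
        # Try every filter on the line and keep the one whose output has the
        # smallest sum of absolute values, treating the bytes as signed --
        # a cheap stand-in for "most compressible".
        def cost(filtered):
            return sum(v if v < 128 else 256 - v for v in filtered)
        methods = ("none", "sub", "up", "avg", "paeth")
        return min(methods, key=lambda m: cost(filter_scanline(m, cur, prev, bpp)))

To request dynamic filtering you pass the option to the encoder, something along the lines of ffmpeg -i input.gif -c:v apng -pred mixed output.apng (the exact invocation depends on your input; -pred is the option shown in the help output above).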