Tags: lossless-compression, lz4, lz77

Difference: LZ77 vs. LZ4 vs. LZ4HC (compression algorithms)?


I understand the LZ77 and LZ78 algorithms. I read about LZ4 here and here and found code for it.

Those links described the LZ4 block format. But it would be great if someone could explain (or direct me to some resource explaining) how LZ4 differs from LZ77, and what LZ4HC adds on top of LZ4.


Solution

  • LZ4 is built to compress fast, at hundreds of MB/s per core. It's a good fit for applications where you want compression that's very cheap: for example, you're trying to make a network or on-disk format more compact but can't afford to spend a lot of CPU time on compression. It's in a family with, for example, snappy and LZO.
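
    As a concrete illustration, here's a minimal sketch of LZ4's one-shot C API from lz4.h (the sample input and buffer handling are placeholders, not a recommended pattern):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <lz4.h>   /* link with -llz4 */

    int main(void) {
        const char *src = "hello hello hello hello hello";   /* repetitive sample input */
        const int src_size = (int)strlen(src) + 1;

        /* LZ4_compressBound() gives the worst-case compressed size. */
        const int max_dst = LZ4_compressBound(src_size);
        char *compressed = malloc((size_t)max_dst);
        char *restored   = malloc((size_t)src_size);

        /* One-shot, default (fast) compression. */
        const int c_size = LZ4_compress_default(src, compressed, src_size, max_dst);
        if (c_size <= 0) { fprintf(stderr, "compression failed\n"); return 1; }

        /* Safe decompression: bounds-checked against the destination buffer. */
        const int d_size = LZ4_decompress_safe(compressed, restored, c_size, src_size);
        if (d_size < 0) { fprintf(stderr, "decompression failed\n"); return 1; }

        printf("%d -> %d bytes, round-trip ok: %s\n", src_size, c_size,
               memcmp(src, restored, (size_t)src_size) == 0 ? "yes" : "no");
        free(compressed);
        free(restored);
        return 0;
    }
    ```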

    The natural comparison point is zlib's DEFLATE algorithm, which uses LZ77 and Huffman coding and is used in gzip, the .ZIP and .PNG formats, and too many other places to count.
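
    For contrast, the equivalent one-shot call with zlib looks like this (compress2 from zlib.h; level 6 is the usual default trade-off, 1 is fastest, 9 compresses hardest; the input and buffer sizes are again just placeholders):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>   /* link with -lz */

    int main(void) {
        const unsigned char src[] = "hello hello hello hello hello";
        uLong src_len = (uLong)sizeof(src);

        unsigned char dst[256];      /* plenty of room for this tiny input */
        uLongf dst_len = sizeof(dst);

        /* DEFLATE the buffer at the default compression level. */
        if (compress2(dst, &dst_len, src, src_len, 6) != Z_OK) {
            fprintf(stderr, "deflate failed\n");
            return 1;
        }
        printf("%lu -> %lu bytes\n", (unsigned long)src_len, (unsigned long)dst_len);
        return 0;
    }
    ```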

    These fast compressors differ from DEFLATE mainly because:

    1. They use repetition-detection code that's faster (often a simple hash table with no collision detection) but doesn't search through multiple possible matches for the best one (which would take time but result in higher compression), and can't find some short matches; see the sketch after this list.
    2. They only try to compress repetitions in the input--they don't try to take advantage of some bytes being more likely than others outside of repetitions.
    3. Closely related to 2, they generate output in whole bytes, not bits; allowing fraction-of-a-byte codes would sometimes allow more compression, but would require more CPU work (often bit-shifting, masking, and branching) to encode and decode.
    4. Lots of practical work has gone into making their implementations fast on modern CPUs.
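
    To make point 1 concrete, here is a toy match finder in that style (not LZ4's actual code): one slot per hash bucket, no collision detection, a single candidate checked, and greedy acceptance of whatever match it finds:

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define HASH_BITS 12
    #define HASH_SIZE (1 << HASH_BITS)
    #define MIN_MATCH 4

    /* Hash the next 4 bytes (similar in spirit to LZ4's match finder). */
    static uint32_t hash4(const uint8_t *p) {
        uint32_t v;
        memcpy(&v, p, 4);
        return (v * 2654435761u) >> (32 - HASH_BITS);
    }

    /* Scan `src`, reporting greedy matches against earlier positions. The
     * table keeps only the most recent position per bucket and never checks
     * which earlier occurrence is best -- the speed/ratio trade-off above. */
    static void find_matches(const uint8_t *src, size_t len) {
        uint32_t table[HASH_SIZE] = {0};   /* 0 means "empty bucket" */
        size_t i = 1;                      /* start at 1 so 0 can mean "empty" */
        while (i + MIN_MATCH <= len) {
            uint32_t h = hash4(src + i);
            uint32_t cand = table[h];
            table[h] = (uint32_t)i;
            if (cand != 0 && memcmp(src + cand, src + i, MIN_MATCH) == 0) {
                size_t m = MIN_MATCH;      /* extend the match as far as it goes */
                while (i + m < len && src[cand + m] == src[i + m]) m++;
                printf("match at %zu: offset %zu, length %zu\n", i, i - cand, m);
                i += m;                    /* greedy: take it and move on */
            } else {
                i++;                       /* literal byte */
            }
        }
    }

    int main(void) {
        const char *text = "abcabcabcabc-the-quick-brown-fox-abcabcabcabc";
        find_matches((const uint8_t *)text, strlen(text));
        return 0;
    }
    ```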

    By comparison, DEFLATE gets better compression but compresses and decompresses slower, and high-compression algorithms like LZMA, bzip2, LZHAM, or brotli tend to take even more time (though brotli at its faster settings can compete with zlib). There's a lot of variation among the high-compression algorithms, but broadly, they tend to capture redundancies over longer distances, take more advantage of context to determine what bytes are likely, and use more compact but slower ways to express their results in bits.

    LZ4HC is a "high-compression" variant of LZ4 that, I believe, changes point 1 above--the compressor finds more than one match between current and past data and looks for the best match to ensure the output is small. This improves compression ratio but lowers compression speed compared to LZ4. Decompression speed isn't hurt, though, so if you compress once and decompress many times and mostly want extremely cheap decompression, LZ4HC would make sense.
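
    In code that looks like the sketch below (level 9 is just an illustrative HC level; valid levels go up to 12). The HC entry point lives in lz4hc.h, and because it emits an ordinary LZ4 block, both results decompress with the same LZ4_decompress_safe():

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <lz4.h>
    #include <lz4hc.h>   /* link with -llz4 */

    int main(void) {
        char src[4096];
        for (size_t i = 0; i < sizeof(src); i++)   /* mildly redundant test data */
            src[i] = "compressme"[i % 10];

        char fast_out[LZ4_COMPRESSBOUND(4096)];
        char hc_out[LZ4_COMPRESSBOUND(4096)];

        /* Same input, same block format, different match-finding effort. */
        int fast_size = LZ4_compress_default(src, fast_out, (int)sizeof(src), (int)sizeof(fast_out));
        int hc_size   = LZ4_compress_HC(src, hc_out, (int)sizeof(src), (int)sizeof(hc_out), 9);

        printf("fast: %d bytes, HC: %d bytes (from %d)\n", fast_size, hc_size, (int)sizeof(src));
        return 0;
    }
    ```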

    Note that even a fast compressor might not allow one core to saturate a large amount of bandwidth, like that provided by SSDs or fast in-datacenter links. There are even quicker compressors with lower ratios, sometimes used to temporarily pack data in RAM. WKdm and Density are two such compressors; one trait they share is acting on 4-byte machine words of input at a time rather than individual bytes. Sometimes specialized hardware can achieve very fast compression, like in Samsung's Exynos chips or Intel's QuickAssist technology.

    If you're interested in compressing more than LZ4 but with less CPU time than deflate, the author of LZ4 (Yann Collet) wrote a library called Zstd--here's a blog post from Facebook at its stable release, background on the finite state machines used for compactly encoding info on repetitions, and a detailed description in an RFC. Its fast modes could work in some LZ4-ish use cases. (Also, Apple developed lzfse on similar principles, and Google developed gipfeli as a 'medium' packer. Neither seemed to get much use in the outside world.) Also, a couple projects aim to provide faster/lighter DEFLATE: SLZ, patches to zlib by CloudFlare and Intel. (There's also been work on fast decompression on large modern CPU cores.)
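
    A minimal zstd sketch using the one-shot API from zstd.h (level 1 shown here; negative levels trade ratio for even more speed, heading toward LZ4-like territory):

    ```c
    #include <stdio.h>
    #include <zstd.h>   /* link with -lzstd */

    int main(void) {
        const char src[] = "the quick brown fox jumps over the lazy dog, again and again and again";
        size_t src_size = sizeof(src);

        char dst[256];   /* comfortably larger than needed for this input */

        /* One-shot compression at a fast level. */
        size_t c_size = ZSTD_compress(dst, sizeof(dst), src, src_size, 1);
        if (ZSTD_isError(c_size)) {
            fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(c_size));
            return 1;
        }
        printf("%zu -> %zu bytes\n", src_size, c_size);
        return 0;
    }
    ```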

    Compared to the fastest compressors, those "medium" packers add a form of entropy encoding, which is to say they take advantage of how some bytes are more common than others and (in effect) put fewer bits in the output for the more common byte values.
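
    A back-of-the-envelope way to see what entropy coding exploits (this computes the Shannon limit for a byte distribution, not any particular coder's output): a skewed byte stream needs far fewer bits per byte than a uniform one:

    ```c
    #include <math.h>
    #include <stdio.h>
    #include <string.h>
    /* link with -lm */

    /* Shannon entropy in bits per byte: a lower bound on what an ideal
     * entropy coder could achieve for this byte distribution (ignoring
     * repetitions and context, which the LZ stage handles). */
    static double entropy_bits_per_byte(const unsigned char *buf, size_t len) {
        size_t counts[256] = {0};
        for (size_t i = 0; i < len; i++) counts[buf[i]]++;
        double h = 0.0;
        for (int b = 0; b < 256; b++) {
            if (counts[b] == 0) continue;
            double p = (double)counts[b] / (double)len;
            h -= p * log2(p);
        }
        return h;
    }

    int main(void) {
        const char *skewed  = "aaaaaaaaaaaaaaaaaaaaaaaaaaaabbbc";  /* mostly 'a' */
        const char *uniform = "abcdefghijklmnopqrstuvwxyz012345";   /* all distinct */
        printf("skewed : %.2f bits/byte\n",
               entropy_bits_per_byte((const unsigned char *)skewed, strlen(skewed)));
        printf("uniform: %.2f bits/byte\n",
               entropy_bits_per_byte((const unsigned char *)uniform, strlen(uniform)));
        return 0;
    }
    ```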

    If you're compressing one long stream, and going faster by using more cores is potentially helpful, parallel compression is available for gzip through pigz and for zstd through the command-line tool's -T option (and in the library). (There are various experimental packers out there too, but they exist more to push the boundaries of speed or density than for use today.)

    So, in general, you have a pretty good spectrum of compressors to choose from for different applications.

    As you move from LZ4 through DEFLATE to brotli, you layer on more effort to predict and encode data, and you get more compression at the cost of some speed.

    As an aside, algorithms like brotli and zstd can generally outperform gzip--compress better at a given speed, or get the same compression faster--but this isn't actually because zlib did anything wrong. The main secret is probably that newer algorithms use megabytes of memory as history windows, not kilobytes. The DEFLATE format dates back to 1991. At that time RAM cost thousands of times more than it does now, and systems had much less of it, so you can see why you might have designed your compression format to need less memory. Changes in CPUs over time may also be a factor: lots of arithmetic (as used in finite state machines) is relatively cheaper than it used to be, and unpredictable ifs (branches) are relatively more expensive.