# Compression
Compression means encoding [data](data.md) (such as images or texts) in a different way so that the data takes less space (memory) while keeping all the important [information](information.md), or, in plain terms, it usually means "making files smaller". Compression is pretty important so that we can utilize memory or [bandwidth](bandwidth.md) well -- without it our hard drives would be able to store just a handful of videos, internet would be slow as hell due to the gigantic amount of transferred data and our [RAM](ram.md) wouldn't suffice for things we normally do. There are many [algorithms](algorithm.md) for compressing various kinds of data, differing by their complexity, performance, efficiency of compression etc. The reverse process to compression (getting the original data back from the compressed data) is called **decompression**. The ratio of the compressed data size to the original data size is called **compression ratio** (the lower, the better). The science of data compression is truly huge and complicated AF, here we'll just mention some very basics. Also watch out: compression algorithms are often a [patent](patent.md) mine field.
{ I've now written a tiny LRS compression library/utility called [shitpress](shitpress.md), check it out at https://codeberg.org/drummyfish/shitpress. It's fewer than 200 LOC and so simple it can nicely serve educational purposes. The principle is simple, kind of a dictionary method where the dictionary is simply the latest 64 characters of output; if we find a long word that occurred recently, we simply reference it with a mere 2 bytes. It works relatively well for most data! ~drummyfish }
{ There is a cool compression competition known as the Hutter Prize that offers 500000 euros to anyone who can break the current record for compressing [Wikipedia](wikipedia.md). Currently the record stands at compressing 1GB down to about 115MB. See http://prize.hutter1.net for more. ~drummyfish }
{ [LMAO](lmao.md) retard [patents](patent.md) are being granted on impossible compression algorithms, see e.g. http://gailly.net/05533051.html. See also [Sloot Digital Coding System](sloot.md), a miraculous compression algorithm that "could store a whole movie in 8 KB" lol. ~drummyfish }
Let's keep in mind compression is not applied just to files on hard drives, it can also be used e.g. in RAM to utilize it more efficiently.
Why don't we compress everything? Firstly because compressed data is slow to work with: it requires significant CPU time to compress and decompress data, it's a kind of space-time tradeoff (we gain storage space at the cost of CPU time). Compression also [obscures](obscurity.md) data, for example a compressed text file will typically no longer be human readable, and any code wanting to work with such data will have to include the nontrivial decompression code. Compressed data is also more prone to [corruption](corruption.md) because redundant information (which can help restore corrupted data) is removed from it -- in fact we sometimes purposefully do the opposite of compression and make our data bigger to protect it from corruption (see e.g. [error correcting](error_correction.md) codes, [RAID](raid.md) etc.). And last but not least, many kinds of data can hardly be compressed or are so small that it's not even worth it.
The basic division of compression methods is into:

- **lossless**: No information contained in the original data will be lost in the compressed data, i.e. the original file can be restored in its entirety from the compressed file.
- **lossy**: Some information contained in the original data is lost during compression, i.e. for example a compressed image will be of slightly worse quality. This usually allows for much greater compression. Lossy compressors usually apply lossless compression on top as well.

Furthermore we may divide compression e.g. into offline (compressing a whole file, which may take long) and streaming (compressing a stream of input data on-the-go and in real time), by the type of input data (binary, text, audio, ...), by the basic principle ([RLE](rle.md), dictionary, "[AI](ai.md)", ...) etc.
The following is an example of how well different types of compression work for an image (screenshot of main page of Wikimedia Commons, 1280x800):
{ Though the website screenshot also contained real life photos, it still contained a lot of constant color areas which can be compressed very well, hence the quite good compression ratios here. A general photo won't be compressed as much. ~drummyfish }
| compression                                          | ~size (KB) | ratio |
| ---------------------------------------------------- | ---------- | ----- |
| none                                                  | 3000       | 1     |
| general lossless (lz4)                                | 396        | 0.132 |
| image lossless (PNG)                                  | 300        | 0.1   |
| image lossy (JPG), nearly indistinguishable quality   | 164        | 0.054 |
| image lossy (JPG), ugly but readable                  | 56         | 0.018 |

Mathematically there cannot exist a lossless compression algorithm that would always reduce the size of any input data -- if it existed, we could just repeatedly apply it and compress ANY data to zero bytes. And not only that -- **every lossless compression will inevitably enlarge some input files**. This is also mathematically given -- we can see compression as simply mapping input binary sequences to output (compressed) binary sequences, and such a mapping has to be one-to-one ([bijective](bijection.md)); it can simply be shown that if we make any such mapping that reduces the size of some input (maps a longer sequence to a shorter one, i.e. compresses it), we will also have to map some short code to a longer one. However we can make it so that our compression algorithm enlarges a file by at most 1 bit: we can say that the first bit in the compressed data says whether the following data is compressed or not; if our algorithm fails to reduce the size of the input, it simply sets the bit to say so and leaves the original file uncompressed (in practice many algorithms don't do this though as they try to work as streaming filters, without random access to data, which would be needed here).
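
To make the counting argument explicit, consider all inputs of some fixed length *n* bits:

```
inputs of length exactly n bits:     2^n
outputs shorter than n bits:         2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1
```

There is always one fewer short output than there are n-bit inputs, so at least one n-bit input cannot be given a shorter code without colliding with another input.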
**Dude, how does compression really work tho?** The basic principle of lossless compression is **removing [redundancy](redundancy.md)** ([correlations](correlation.md) in the data), i.e. that which is explicitly stored in the original data but doesn't really have to be there because it can be reasoned out from the remaining data. This is why completely random [noise](noise.md) can't be compressed -- there is no correlated data in it, nothing to reason out from other parts of the data. However human language for example contains many redundancies. Imagine we are trying to compress English text and have a word such as "computer" on the input -- we can really just shorten it to "computr" and it's still pretty clear the word is meant to be "computer" as there is no other similar English word (we also see that a compression algorithm is always specific to the type of data we expect on the input -- we have to know what kind of input data to expect). Another way to remove redundancy is to e.g. convert a string such as "HELLOHELLOHELLOHELLOHELLO" to "5xHELLO". Lossy compression on the other hand tries to decide what information is of low importance and can be dropped -- for example a lossy compression of text might discard information about case (upper vs lower case) to be able to store each character with fewer bits; an all caps text is still readable, though less comfortably. A deeper view of compression oftentimes leads to a realization that compression is really a problem of [artificial intelligence](ai.md), for compression is really about prediction and prediction is about understanding -- this is where the state-of-the-art view stands.
{ A quick intuitive example: [encyclopedias](encyclopedia.md) almost always have at the beginning a list of abbreviations they will use in the definition of terms (e.g. "m.a. -> middle ages", ...), this is so that the book gets shorter and they save money on printing. They compress the text. ~drummyfish }
**OK, but how much can we really compress?** Well, as stated above, there can never be anything such as a universal uber compression algorithm that just makes any input file super small -- everything really depends on the nature of the data we are trying to compress. The more we know about the nature of the input data, the more we can compress, so a general compression program will compress only a little, while an image-specialized compression program will compress better (but will only work with images). As an extreme example, consider that **in theory we can make e.g. an algorithm that compresses one specific 100GB video to 1 bit** (we just define that a bit "1" decompresses to this specific video), but it will only work for that one single video, not for video in general -- i.e. we made an extremely specialized compression and got an extremely good compression ratio, however due to such extreme specialization we can almost never use it. As said, we just cannot compress completely random data at all (as we don't know anything about the nature of such data). On the other hand data with a lot of redundancy, such as video, can be compressed A LOT. Similarly video compression algorithms used in practice work only for videos that appear in the real world which exhibit certain patterns, such as two consecutive frames being very similar -- if we try to compress e.g. static (white noise), video codecs just shit themselves trying to compress it (look up e.g. videos of confetti and see how blocky they get). All in all, some compression [benchmarks](benchmark.md) can be found e.g. at https://web.archive.org/web/20110203152015/http://www.maximumcompression.com/index.html -- the following are mentioned types of data and their best measured compression ratios: English text 0.12, image (lossy) 0.76, executable 0.24.
## Methods
The following is an overview of some most common compression techniques.
### Lossless
**[RLE](rle.md) (run length encoding)** is a simple method that stores repeated sequences just as one element of the sequence and number of repetitions, i.e. for example *"abcabcabc"* as *"3abc"*.
**[Entropy](entropy.md) coding** is another common technique which counts the frequencies ([probabilities](probability.md)) of symbols on the input and then assigns the shortest codes to the most frequent symbols, leaving longer codes to the less frequent. The most common such codings are **[Huffman coding](huffman_coding.md)** and **[Arithmetic coding](arithmetic_coding.md)**.
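
For example for the following (made up) symbol probabilities Huffman coding assigns the codes 0, 10, 110 and 111; this little C sketch checks how the resulting average code length compares to the theoretical limit (the entropy):

```
#include <stdio.h>
#include <math.h>   // for log2, link with -lm

int main(void)
{
  // made up symbol probabilities, as if counted from some input data
  double prob[] = { 0.5, 0.25, 0.125, 0.125 };
  int codeLen[] = { 1, 2, 3, 3 }; // lengths of the Huffman codes 0, 10, 110, 111
  double entropy = 0, avgLen = 0;

  for (int i = 0; i < 4; ++i)
  {
    entropy += prob[i] * -log2(prob[i]);
    avgLen += prob[i] * codeLen[i];
  }

  // both print 1.75 (bits per symbol), vs. 2 bits of a fixed length code
  printf("entropy: %f, average code length: %f\n",entropy,avgLen);

  return 0;
}
```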
**Dictionary (substitutional) methods** try to construct a dictionary of relatively long symbols appearing in the input and then only store short references to these symbols. The format may for example choose to first store the dictionary and then the actual data with pointers to this dictionary, or it may just store the data in which pointers are stored to previously appearing sequences.
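
Just to illustrate the substitution idea, here is a toy sketch with a tiny hardcoded dictionary (real methods build the dictionary from the input itself, see the LZ algorithms below); byte values above 127, which plain ASCII text never uses, serve as the references:

```
#include <stdio.h>
#include <string.h>

// toy static dictionary: byte 128 + i in the output stands for dictionary[i]
const char *dictionary[] = { "the ", "and ", "compression ", "data " };
#define DICT_SIZE 4

int compress(const char *in, unsigned char *out) // returns compressed length
{
  int outLen = 0;

  while (*in)
  {
    int match = -1;

    for (int i = 0; i < DICT_SIZE; ++i)
      if (strncmp(in,dictionary[i],strlen(dictionary[i])) == 0)
      {
        match = i;
        break;
      }

    if (match >= 0)
    {
      out[outLen++] = 128 + match;       // one byte reference to the dictionary
      in += strlen(dictionary[match]);
    }
    else
      out[outLen++] = *in++;             // no match, copy the byte as is
  }

  return outLen;
}

int main(void)
{
  const char *text = "the compression of the data ";
  unsigned char out[64];

  printf("%d -> %d bytes\n",(int) strlen(text),compress(text,out)); // 28 -> 7

  return 0;
}
```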
**[Predictor](predictor.md) compression** is based on making a *predictor* that tries to guess following data from previous values (which can be done e.g. in case of pictures, sound or text) and then only storing the difference against such a predicted result. If the predictor is good, we may only store the small amount of the errors it makes.
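
A minimal sketch of the simplest possible predictor, one that always predicts "the same value as last time", i.e. plain delta coding (the sample values are made up):

```
#include <stdio.h>

// store each sample as the difference from the previous one; if the data
// changes smoothly, the differences are small numbers that are cheap to store
void deltaEncode(const unsigned char *in, signed char *out, int len)
{
  unsigned char prev = 0;

  for (int i = 0; i < len; ++i)
  {
    out[i] = in[i] - prev;
    prev = in[i];
  }
}

void deltaDecode(const signed char *in, unsigned char *out, int len)
{
  unsigned char prev = 0;

  for (int i = 0; i < len; ++i)
  {
    out[i] = prev + in[i];
    prev = out[i];
  }
}

int main(void)
{
  unsigned char samples[] = { 100, 102, 105, 105, 103, 100 };
  signed char deltas[6];
  unsigned char restored[6];

  deltaEncode(samples,deltas,6);
  deltaDecode(deltas,restored,6);

  // deltas come out as 100 2 3 0 -2 -3: mostly small values
  for (int i = 0; i < 6; ++i)
    printf("%d -> %d -> %d\n",samples[i],deltas[i],restored[i]);

  return 0;
}
```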
A famous family of dictionary compression algorithms are the **Lempel-Ziv (LZ)** algorithms -- these two guys first proposed [LZ77](lz77.md) (1977, sliding window) and then [LZ78](lz78.md) (1978, explicitly stored dictionary). These became the basis for improved/remixed algorithms, most notably [LZW](lzw.md) (1984, Welch). Additionally these algorithms are used and combined in other algorithms, most notably [gif](gif.md) and [DEFLATE](deflate.md) (used e.g. in gzip and png).
An approach similar to the predictor may be trying to find some general mathematical [model](model.md) of the data and then just find and store the right parameters of the model. This may for example mean [vectorizing](vector_graphics.md) a bitmap image, i.e. finding geometrical shapes in an image composed of pixels and then only storing the parameters of the shapes -- of course this may not be 100% accurate, but again if we want to preserve the data accurately, we may additionally also store the small amount of errors the model makes. A similar approach is used in the [vocoders](vocoder.md) of cellphones that try to mathematically model human speech (however here the compression is lossy), or in [fractal](fractal.md) compression of images. A nice feature we gain here is the ability to actually "increase the resolution" (or rather generate detail) of the original data -- once we fit a model onto our data, we may use it to tell us values that are not actually present in the data (i.e. we get a fancy [interpolation](interpolation.md)/[extrapolation](extrapolation.md)).
Another property of data to exploit may be its sparsity -- if for example we have a huge image that's prevalently white, we may say white is the implicit color and we only somehow store the pixels of other colors.
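
A sketch of such a sparse representation (the format here is made up just for illustration):

```
#include <stdio.h>

// made up sparse image format: we only list the pixels that differ from the
// implicit background color (white), each as x, y and the pixel's value
typedef struct
{
  unsigned short x, y;
  unsigned char color;
} ExceptionPixel;

int main(void)
{
  // imagine a 1000x1000 one-byte-per-pixel image that is white except for
  // three pixels; stored densely it takes 1000000 bytes
  ExceptionPixel pixels[] = { {10,20,0}, {11,20,0}, {500,900,128} };
  int count = sizeof(pixels) / sizeof(pixels[0]);

  // sparse storage: a small header (resolution, background color, pixel
  // count) plus 5 bytes per exception pixel
  printf("dense: %d bytes, sparse: about %d bytes\n",1000 * 1000,8 + count * 5);

  return 0;
}
```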
Some more wild techniques may include [genetic programming](genetic_programming.md) that tries to evolve a small program that reproduces the input data, or using "[AI](ai.md)" in whatever way to compress the data (in fact compression is an essential part of many [neural networks](neural_network.md) as it forces the network to "understand", make sense of the data -- many neural networks therefore internally compress and decompress the data so as to filter out the unimportant information; [large language models](llm.md) are now starting to beat traditional compression algorithms at compression ratios).
Note that many of these methods may be **combined or applied repeatedly** as long as we are getting smaller results.
Furthermore also take a look at [procedural generation](procgen.md), a related technique that allows embedding a practically infinite amount of content in only a quite small amount of code.
### Lossy
In lossy compression we generally try to limit information that is not very important and/or to which we aren't very sensitive, typically by dropping precision by [quantization](quantization.md), i.e. basically lowering the number of bits we use to store the "not so important" information -- in some cases we may just drop some information altogether (decrease precision to zero). Furthermore we finally also apply lossless compression to make the result even smaller.
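
A minimal sketch of quantization: here we just throw away the lower 4 bits of an 8 bit value (so that it fits in half the space) and on reconstruction guess the middle of the lost range:

```
#include <stdio.h>

unsigned char quantize(unsigned char v)   // keep only the upper 4 bits
{
  return v >> 4;
}

unsigned char dequantize(unsigned char v) // reconstruct to the bucket center
{
  return (v << 4) | 8;
}

int main(void)
{
  unsigned char original = 203;
  unsigned char q = quantize(original);    // 12, fits in 4 bits
  unsigned char restored = dequantize(q);  // 200, a small error we accept

  printf("%d -> %d -> %d\n",original,q,restored);
  return 0;
}
```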
For **images** we usually exploit the fact that human sight is less sensitive to certain visual information, such as specific frequencies, colors, brightness etc. Common methods used here are:

- Convert the image from [RGB](rgb.md) to [YUV](yuv.md), leave the Y channel (brightness) as is and reduce the resolution of the U and V (color) channels. This works because the human eye is less sensitive to color than to brightness.
- Convert the image to frequency domain (e.g. with [DCT](dct.md) or some [wavelet transform](wavelet_transform.md)) and quantize (allocate fewer bits to) higher frequencies. This exploits the fact that the human eye is less sensitive to higher frequencies. This is the basis of e.g. [jpeg](jpg.md).
- Reduce the number of possible colors -- traditional RGB uses 8 bits for each R, G and B component and so each pixel takes 3 bytes, which allows for about 16 million colors. However using just 2 bytes (65 thousand colors) many times [suffices](good_enough.md) and saves 1/3rd of the size -- see [RGB565](rgb565.md) (a small conversion sketch follows this list). We may also utilize an image-specific [palette](palette.md) and save the image in indexed mode, i.e. compute a palette of let's say the 256 most common colors in the image, then encode the image as the palette plus pixels, of which each will only take one byte! This saves almost 2/3rds of the size. The drop in quality can further be made less noticeable with [dithering](dithering.md).
- Reduce resolution -- plain simple. However this can be made smarter by e.g. trying to detect areas with few details and only reducing the resolution there.
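
The RGB565 reduction mentioned in the list above is simple enough to show directly (a sketch using the usual 5-6-5 bit layout):

```
#include <stdio.h>

// pack 8 bit R, G and B values into a single 16 bit RGB565 value
unsigned short rgbTo565(unsigned char r, unsigned char g, unsigned char b)
{
  return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
}

// unpack again; the dropped low bits are gone, that's the lossy part
void rgbFrom565(unsigned short c, unsigned char *r, unsigned char *g,
  unsigned char *b)
{
  *r = (c >> 11) << 3;
  *g = ((c >> 5) & 0x3f) << 2;
  *b = (c & 0x1f) << 3;
}

int main(void)
{
  unsigned char r, g, b;
  unsigned short c = rgbTo565(123,200,50);

  rgbFrom565(c,&r,&g,&b);
  printf("123 200 50 -> %d %d %d\n",r,g,b); // 120 200 48, close but not exact

  return 0;
}
```
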
In **video** compression we may reuse the ideas from image compression and further exploit temporal redundancy, i.e. the fact that consecutive video frames look similar, so we may only encode some kind of delta (change) against the previous (or even next) frame. The most common way is to fully record only one key frame in some time span (so called I-frame, further compressed with image compression methods), then divide it into small blocks and estimate the movement of those blocks so that they approximately make up the following frames -- we then record only the motion vectors of the blocks. This is why videos look "blocky". In the past [interlacing](interlacing.md) was also used -- only half of each frame was recorded, every other row was dropped; when playing, the frame was interlaced with the previous frame. Another cool idea is keyframe [superresolution](superresolution.md): you store only some keyframes in full resolution and store the rest of them in a smaller size; during decoding you can use the nearby full scale keyframes to upscale the low res keyframes (search for matching subblocks in the low res image and match them to those in the big res image).
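
The simplest way to exploit temporal redundancy, with no motion estimation at all, is to just subtract the previous frame (a toy sketch with made up single channel frames; a real codec does much more on top of this):

```
#include <stdio.h>

#define W 4
#define H 2

// encode a frame as per pixel differences against the previous frame; in
// mostly static video most deltas are zero, which then compresses very well
// (e.g. with RLE or entropy coding applied afterwards)
void frameDelta(const unsigned char *prev, const unsigned char *cur, int *delta)
{
  for (int i = 0; i < W * H; ++i)
    delta[i] = cur[i] - prev[i];
}

int main(void)
{
  unsigned char frame1[W * H] = { 10, 10, 10, 10,   100, 100, 10, 10 };
  unsigned char frame2[W * H] = { 10, 10, 10, 10,   10, 100, 100, 10 };
  int delta[W * H];

  frameDelta(frame1,frame2,delta); // the bright "object" moved by one pixel

  for (int i = 0; i < W * H; ++i)
    printf("%d ",delta[i]); // 0 0 0 0 -90 0 90 0

  putchar('\n');
  return 0;
}
```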
In **audio** we usually straight remove frequencies that humans can't hear (usually said to be above 20 kHz), for this we again convert the audio from the time domain to the frequency domain (using e.g. [Fourier transform](fourier_transform.md)). Furthermore it is very inefficient to store sample values directly -- we rather use so called *differential PCM*, a lossless compression that e.g. stores each sample as a difference against the previous sample (which is usually small and doesn't use up many bits). This can be improved by a predictor, which tries to predict the next values from previous values, and then we only save the difference against this prediction. *Joint stereo coding* exploits the fact that human hearing is not so sensitive to the direction of the sound and so e.g. instead of recording both left and right stereo channels in full quality it rather records the sum of both and the ratio between them (which can get away with fewer bits). *Psychoacoustics* studies how humans perceive sound, for example so called *masking*: certain frequencies may mask nearby (both in frequency and time) frequencies (make them unhearable for humans) so we can drop them. See also [vocoders](vocoder.md). For specific kinds of audio we may further employ more detailed knowledge, for example with instrumental [music](music.md) we can just store the notes that are being played plus the instruments that play them, for example with [MIDI](midi.md) -- this format was not made for compression per se, but it does allow us to store music in a much smaller size than directly storing audio.
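
The mid/side (joint stereo) trick is also easy to show: instead of the left and right channels we store their average and their difference, and the difference channel, being usually much quieter, can then be quantized with fewer bits (a sketch that ignores the rounding details real codecs have to handle):

```
#include <stdio.h>

void toMidSide(int left, int right, int *mid, int *side)
{
  *mid = (left + right) / 2; // what both ears mostly agree on
  *side = left - right;      // usually a small value, needs fewer bits
}

void fromMidSide(int mid, int side, int *left, int *right)
{
  *left = mid + side / 2;
  *right = mid - side / 2;
}

int main(void)
{
  int l = 1000, r = 980, mid, side, l2, r2;

  toMidSide(l,r,&mid,&side);
  fromMidSide(mid,side,&l2,&r2);

  // prints: L 1000 R 980 -> M 990 S 20 -> L 1000 R 980
  printf("L %d R %d -> M %d S %d -> L %d R %d\n",l,r,mid,side,l2,r2);

  return 0;
}
```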
TODO: LZW, DEFLATE etc.
## Compression Programs/Utils/Standards
Here is a list of some common compression programs/utilities/standards/formats/etc:
| util/format        | extensions | free?   | media           | lossless? | notes                                        |
| ------------------ | ---------- | ------- | --------------- | --------- | -------------------------------------------- |
| [bzip2](bzip2.md)  | .bz2       | yes     | general         | yes       | Burrows-Wheeler alg.                         |
| [flac](flac.md)    | .flac      | yes     | audio           | yes       | super free lossless audio format             |
| [gif](gif.md)      | .gif       | now yes | image/anim.     | no        | limited color palette, patents expired       |
| [gzexe](gzexe.md)  |            | yes     | executable bin. | yes       | makes self-extracting executable             |
| [gzip](gzip.md)    | .gz        | yes     | general         | yes       | by GNU, DEFLATE, LZ77, mostly used by Unices |
| [jpeg](jpeg.md)    | .jpg, .jpeg| yes?    | raster image    | no        | common lossy format, under patent fire       |
| [lz4](lz4.md)      | .lz4       | yes     | general         | yes       | high compression/decompression speed, LZ77   |
| [mp3](mp3.md)      | .mp3       | now yes | audio           | no        | popular audio format, patents expired        |
| [png](png.md)      | .png       | yes     | raster image    | yes       | popular lossless image format, transparency  |
| [rar](rar.md)      | .rar       | NO      | general         | yes       | popular among normies, PROPRIETARY           |
| [vorbis](vorbis.md)| .ogg       | yes     | audio           | no        | was a free alternative to mp3, used with ogg |
| [zip](zip.md)      | .zip       | yes?    | general         | yes       | along with encryption may be patented        |
| [7-zip](7zip.md)   | .7z        | yes     | general         | yes       | more complex archiver                        |

## Code Example
Let's write a simple lossless compression utility in [C](c.md). It will work on binary files and we will use the simplest RLE method, i.e. our program will just shorten continuous sequences of repeating bytes to a short sequence saying "repeat this byte N times". Note that this is very primitive (a small improvement might be actually done by looking for sequences of longer words, not just single bytes), but it somewhat works for many files and demonstrates the basics.
The compression will work like this:

- We will choose some random, hopefully not very frequent byte value as our special "marker value". Let's say this will be the value 0xF3.
- We will read the input file and whenever we encounter a sequence of 4 or more of the same byte in a row, we will output these 3 bytes:
  - the marker value,
  - a byte whose value is the length of the sequence minus 4,
  - the byte to repeat.
- If the marker value itself is encountered in the input, we output 2 bytes:
  - the marker value,
  - the value 0xFF (which we therefore won't be able to use for the length of a sequence).
- Otherwise we just output the byte we read from the input.

Decompression is then quite simple -- we simply output what we read, unless we read the marker value; in such case we look whether the following value is 0xFF (then we output the marker value), else we know we have to repeat the next character this many times plus 4.
For example given input bytes

```
0x11 0x00 0x00 0xAA 0xBB 0xBB 0xBB 0xBB 0xBB 0xBB 0x10 0xF3 0x00
                    \___________________________/      \__/
                       long repeating sequence        marker!
```
Our algorithm will output a compressed sequence

```
0x11 0x00 0x00 0xAA 0xF3 0x02 0xBB 0x10 0xF3 0xFF 0x00
                    \____________/      \_______/
                    compressed seq.   encoded marker
```
Notice that, as stated above in the article, there inevitably exists a "danger" of actually enlarging some files. This can happen if the file contains no sequences that we can compress and at the same time the marker value appears in it, as each such appearance gets expanded (from 1 byte to 2).
The nice property of our algorithm is that both compression and decompression can be streaming, i.e. both can be done in a single pass as a filter, without having to load the file into memory or randomly access bytes in files. Also the memory complexity of this algorithm is constant (RAM usage will be the same for any size of the file) and time complexity is linear (i.e. the algorithm is "very fast").
Here is the actual code of this utility (it reads from stdin and outputs to stdout, a flag `-x` is used to set decompression mode, otherwise it is compressing):
```
#include <stdio.h>

#define SPECIAL_VAL 0xf3 // random value, hopefully not very common

void compress(void)
{
  unsigned char prevChar = 0;
  unsigned int seqLen = 0;   // how many times prevChar has repeated so far
  unsigned char end = 0;

  while (!end)
  {
    int c = getchar();

    if (c == EOF)
      end = 1;

    if (c != prevChar || c == SPECIAL_VAL || end || seqLen > 200)
    { // dump the sequence
      if (seqLen > 3)
        printf("%c%c%c",SPECIAL_VAL,seqLen - 4,prevChar);
      else
        for (unsigned int i = 0; i < seqLen; ++i)
          putchar(prevChar);

      seqLen = 0;
    }

    prevChar = c;
    seqLen++;

    if (c == SPECIAL_VAL)
    {
      // this is how we encode the special value appearing in the input
      putchar(SPECIAL_VAL);
      putchar(0xff);
      seqLen = 0;
    }
  }
}

void decompress(void)
{
  while (1)
  {
    int c = getchar();

    if (c == EOF)
      break;

    if (c == SPECIAL_VAL)
    {
      int seqLen = getchar();

      if (seqLen == 0xff) // the encoded special value itself
        putchar(SPECIAL_VAL);
      else
      {
        c = getchar();

        for (int i = 0; i < seqLen + 4; ++i)
          putchar(c);
      }
    }
    else
      putchar(c);
  }
}

int main(int argc, char **argv)
{
  if (argc > 1 && argv[1][0] == '-' && argv[1][1] == 'x' && argv[1][2] == 0)
    decompress();
  else
    compress();

  return 0;
}
```
How well does this perform? If we try to let the utility compress its own source code, we get to 1242 bytes from the original 1344, which is not so great -- the compression ratio is only about 92% here. We can see why: the only repeating bytes in the source code are the space characters used for indentation -- this is the only thing our primitive algorithm manages to compress. However if we let the program compress its own binary version, we get much better results (at least on the computer this was tested on): the original binary has 16768 bytes while the compressed one has 5084 bytes, which is an EXCELLENT compression ratio of 30%! Yay :-)
## See Also
- [procedural generation](procgen.md)
- [minification](minification.md)