Another property of data to exploit may be its sparsity -- if for example we have a huge image that's prevalently white, we may declare white the implicit color and store only the pixels of other colors (e.g. as position/color pairs).
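The sparsity idea can be sketched like this (a minimal example with hypothetical helper names, assuming a 1D array of grayscale pixels where 255 is the implicit white):

```python
WHITE = 255  # the assumed implicit background color

def sparse_encode(pixels):
    # keep only (position, value) pairs for pixels that differ from white
    return [(i, p) for i, p in enumerate(pixels) if p != WHITE]

def sparse_decode(pairs, length):
    pixels = [WHITE] * length  # start with the implicit color everywhere
    for i, p in pairs:
        pixels[i] = p          # fill in the explicitly stored pixels
    return pixels

image = [255, 255, 0, 255, 255, 255, 17, 255]
encoded = sparse_encode(image)
restored = sparse_decode(encoded, len(image))
```

For a mostly white image the list of pairs is much smaller than the full pixel array; the scheme only pays off when non-white pixels are rare, otherwise storing the pairs costs more than the raw data.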
Some more wild techniques may include [genetic programming](genetic_programming.md) that tries to evolve a small program that reproduces the input data, or using "[AI](ai.md)" in whatever way to compress the data (in fact compression is an essential part of many [neural networks](neural_network.md) as it forces the network to "understand", make sense of the data -- many neural networks therefore internally compress and decompress the data so as to filter out the unimportant information; [large language models](llm.md) are now starting to beat traditional compression algorithms at compression ratios).
Note that many of these methods may be **combined or applied repeatedly** as long as we are getting smaller results.
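A minimal sketch of the "apply repeatedly" idea (using Python's zlib as the example compressor, which is an assumption, not something the text prescribes): keep compressing as long as the output shrinks, and remember how many rounds were applied so decompression can be reversed the same number of times.

```python
import zlib

def compress_repeatedly(data):
    rounds = 0
    while True:
        candidate = zlib.compress(data)
        if len(candidate) >= len(data):
            break  # no longer getting smaller, stop
        data = candidate
        rounds += 1
    return rounds, data

def decompress_repeatedly(rounds, data):
    for _ in range(rounds):
        data = zlib.decompress(data)
    return data

original = b"abababab" * 1000
rounds, packed = compress_repeatedly(original)
restored = decompress_repeatedly(rounds, packed)
```

In practice a second pass over already-compressed data rarely helps, since the first pass removes most redundancy -- which is exactly why the loop has to check the size and stop.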