Digital

Digital computer technology is that which works with whole numbers, i.e. discrete values, as opposed to analog technology which works with real numbers, i.e. continuous values (note: do not confuse things such as floating point with truly continuous values!). The name digital is related to the word digit as digital computers store data by digits, e.g. in 1s and 0s if they work in binary. By extension the word digital is also used to indicate something works based on digital technology, for example "digital currency", "digital music" etc.

Normies confuse digital with electronic or think that digital computers can only be electronic, that digital computers can only work in binary or hold other weird assumptions. This is indeed false! An abacus is a digital device, a book with text is digital data storage. Fucking normies RIP.

{ Apparently it is "digitisation", not "digitalization". ~drummyfish }

The advantage of digital technology is its resilience to noise which prevents degradation of data and accumulation of error -- if a digital picture is copied a billion times, it will very likely remain unchanged, whereas performing the same operation with an analog picture would probably erase most of the information it bears due to the loss of quality in each copy. Digital technology also makes it easy and practically possible to create fully programmable general purpose computers of great complexity.
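The following is a small toy simulation (in C, just a sketch made up for illustration, not part of any real system) of what happens when we make a copy of a copy of a copy many times over: each copy adds a little random noise, the "analog" value just keeps drifting away, while the "digital" value gets snapped back to one of the two allowed levels (0 or 1) after every copy and so survives unchanged:

```c
/* toy simulation of many copy generations: analog vs. digital (made up example) */
#include <stdio.h>
#include <stdlib.h>

double noise(void) /* random error, roughly in range -0.05 .. 0.05 */
{
  return (rand() % 1000) / 10000.0 - 0.05;
}

int main(void)
{
  double analog = 0.7;  /* some continuous value */
  double digital = 1.0; /* a single bit, stored as a voltage-like level */

  for (int generation = 0; generation < 1000000; ++generation)
  {
    analog += noise();  /* analog copy: the error just adds up            */
    digital += noise(); /* digital copy: same physical noise happens...   */
    digital = digital >= 0.5 ? 1.0 : 0.0; /* ...but we snap back to 0 or 1 */
  }

  printf("analog after copies: %f (was 0.7)\n", analog);
  printf("digital after copies: %f (was 1)\n", digital);
  return 0;
}
```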

A typical example of analog versus digital technology is wrist watches: analog ones have rotating hands to show time, digital ones use digits -- note that it doesn't matter whether the watch is electronic or not, the distinction is in how time is shown. A hand rotates continuously, it may be positioned at any arbitrary angle, basically with "infinite resolution", whereas digits are discrete, non-continuous -- a digit will instantly switch to being a different digit. This is the distinction between analog and digital.

Another simple example: imagine you draw two pictures with a pencil: one in a normal fashion on normal paper, the other one on grid paper, by filling specific squares black (making a kind of manual pixel art). The first picture is analog, i.e. it records continuous curves and the position of each point of these curves can be measured down to extremely small fractions of millimeters -- the advantage is that you are not limited by any grid and can draw any shape at any position on the paper, make any wild curves with very fine details, theoretically even microscopic ones, you have an infinite space of possibilities at your disposal. The other picture (on the square grid) is digital, it is composed of separate points whose position is described only by whole numbers (x and y coordinates of the filled grid squares); the disadvantage is that you are limited by only being able to fill squares at predefined positions, so your picture will look blocky and limited in the amount of detail it can capture (anything smaller than a single grid square can't be captured properly), the resolution of the grid is limited as well as the number of possible pictures you can draw this way, but as we'll see, imposing these limitations has advantages. Consider e.g. the advantage of the grid paper image with regards to copying: if someone wants to copy your grid paper image, it will be relatively easy and he can copy it exactly, simply by filling the exact same squares you have filled -- small errors and noise such as imperfectly filled squares can be detected and corrected thanks to the fact that we have limited ourselves with the grid: we know that even if some square is not filled perfectly, it was probably meant to be filled, so we can eliminate this kind of noise in the copy. This way we can copy the grid paper image a million times and it won't change. On the other hand the normal, non-grid image will become distorted with every copy, and in fact even the original will become distorted by aging; even if the one copying the image tries to trace it extremely precisely, small errors will appear and these errors will accumulate in further copies, and any noise that appears in the image or in the copies is a problem because we don't know whether it really is noise or something that was meant to be in the image.

But this is not to say digital data can't become distorted too -- it can. It is just less likely and it's easier to deal with. It happens for example that particles coming from space (and similar physical phenomena, e.g. electronic interference) flip bits in computer memory, i.e. there is always a probability of some bit flipping from 0 to 1 or vice versa. We call this data corruption. This may also happen due to physical damage to digital media (e.g. scratches on the surface of CDs), imperfections in computer network transmissions (e.g. packet loss over wifi) etc. However we can introduce further measures to prevent, detect and correct data corruption, e.g. by keeping redundant copies (2 copies of data allow detecting corruption, 3 copies even allow correcting it), keeping checksums or hashes (which allow only detection of corruption but don't take much extra space), employing error correcting codes etc. We have to keep in mind that data corruption is very dangerous because a small local damage may destroy the whole data (owing partially to our wrong assumption that digital data won't be damaged), while local damage to analog data will typically only destroy that one small affected part, keeping the rest intact. So let's be aware of this.
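As a concrete illustration of the redundancy idea (just a minimal made up example, real systems use more sophisticated error correcting codes), here is how keeping 3 copies of a byte lets us both detect and fix a single flipped bit by taking a bitwise majority vote:

```c
/* triple redundancy: detect and correct a single bit flip (made up example) */
#include <stdio.h>

unsigned char majority3(unsigned char a, unsigned char b, unsigned char c)
{
  /* for each bit take the value that at least 2 of the 3 copies agree on */
  return (a & b) | (a & c) | (b & c);
}

int main(void)
{
  unsigned char copy1 = 0xA7, copy2 = 0xA7, copy3 = 0xA7; /* 3 copies of the data */

  copy2 ^= 0x10; /* simulate corruption: a stray particle flips one bit in copy 2 */

  if (copy1 != copy2 || copy2 != copy3)
    puts("corruption detected");

  printf("recovered value: 0x%X\n", majority3(copy1, copy2, copy3)); /* prints 0xA7 */
  return 0;
}
```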

Another way in which digital data can degrade similarly to analog data is reencoding between lossy-compressed formats (in the spirit of the famous "needs more jpeg" meme). A typical example is digital movies: as new standards for video encoding emerge, old movies are being reconverted from the old formats to the new ones, however as video is quite heavily lossy-compressed, loss and distortion of information happen with every reencoding. This is best seen in videos and images circulating on the internet that are constantly being ripped and converted between different formats. This way it may happen that digital movies recorded nowadays will only survive into the future in very low quality, just like old analog movies survived until today in degraded quality. This can be prevented by storing the original data only with lossless compression and creating the release for each newly emerging format from that original.
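To see how this generational loss works, here is a toy simulation (the "codecs" here are completely made up, real video formats are vastly more complex, but the principle of accumulating loss is the same): each reencoding throws away some detail (averages neighboring samples) and quantizes the values, so every extra generation damages the data a bit more:

```c
/* toy "lossy codec": repeated reencoding slowly destroys detail (made up example) */
#include <stdio.h>
#include <math.h>

#define N 8

void reencode(double samples[N], double step)
{
  double tmp[N];

  for (int i = 0; i < N; ++i) /* crude low-pass: average each sample with neighbors */
  {
    double left = samples[i > 0 ? i - 1 : 0],
           right = samples[i < N - 1 ? i + 1 : N - 1];
    tmp[i] = (left + 2 * samples[i] + right) / 4;
  }

  for (int i = 0; i < N; ++i) /* quantize to this format's step size */
    samples[i] = round(tmp[i] / step) * step;
}

int main(void)
{
  double samples[N] = {0, 1, 0, 1, 0.5, 0.2, 0.9, 0.1}; /* the "original movie" */

  for (int generation = 0; generation < 10; ++generation)
  {
    reencode(samples, 0.05); /* convert to "format A" */
    reencode(samples, 0.03); /* convert to "format B" */
  }

  for (int i = 0; i < N; ++i)
    printf("%f ", samples[i]); /* fine detail is gone, values get smoothed out */

  putchar('\n');
  return 0;
}
```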

Digital vs analog is also discussed from artistic points of view, especially in video and audio recording, i.e. movies and music. Digital and analog media differ qualitatively -- a movie shot on film (an analog medium) looks different from one shot on a digital camera, film captures light a bit differently, it has a different kind of noise etc. It is possible to try to simulate the "analog look" with postprocessing filters but the results are always far from perfect; we have to realize that it is IMPOSSIBLE to make an analog version from a digital recording because when we are capturing real life, we only capture a tiny bit of information and lose the rest -- analog and digital recorders will capture different (even if mostly overlapping) parts of the real world and once we have the data, we can't retrieve what has been thrown away. It's similar to wanting to extract an infrared photograph from a visible light photograph -- they look similar but one can't be made from the other. So the decision really has to be made before recording. Now it is generally agreed that analog is aesthetically superior: it is kind of "softer", has nicer colors and the analog noise (unlike the digital one) is very pleasant; digital recordings are clearer, sharper, but basically sterile and soulless. The reason for using digital for all mainstream movies and songs nowadays is purely economical, it is just so much cheaper, quicker and more convenient to use digital (but the result looks like shit).