Aliasing is also a common problem in [computer graphics](computer_graphics.md).
The same thing may happen in [ray tracing](ray_tracing.md) if we shoot a single sampling ray for each screen pixel. Note that [interpolation/filtering](interpolation.md) of textures won't fix texture aliasing. Texture aliasing can be reduced e.g. by [mipmaps](mipmap.md) which store the texture along with its lower resolution versions -- during rendering a lower resolution of the texture is chosen if the texture is rendered at a smaller size, so that the sampling theorem is satisfied. However this is still not a silver bullet because the texture may e.g. be shrunk in one direction but enlarged in the other (this is addressed by [anisotropic filtering](anisotropic_filtering.md)). And even if we sufficiently suppress aliasing in textures, aliasing can still appear in geometry. This can be reduced by [multisampling](multisampling.md), e.g. sending multiple rays for each pixel and then averaging their results -- by this we **increase our sampling frequency** and lower the probability of aliasing.
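The multisampling idea is simple enough to show in a few lines of [C](c.md). The following is just a toy sketch, not any real renderer's code -- `traceRay` stands in for a whole ray tracer and all names are made up for this example. It shows how averaging 4x4 rays per pixel recovers the correct average gray of a sub-pixel checkerboard that a single ray per pixel completely misses:

```
#include <stdio.h>

/* hypothetical stand-in for a whole ray tracer: returns the exact
   grayscale color seen along the ray through scene point [x,y] -- here
   a fine checkerboard whose cells are smaller than one pixel, i.e. a
   signal whose frequency is too high for one sample per pixel */
double traceRay(double x, double y)
{
  return ((int) (x * 2) + (int) (y * 2)) % 2 ? 1.0 : 0.0;
}

/* computes the color of pixel [px,py] by averaging n * n evenly spaced
   subsamples -- more samples per pixel mean a higher sampling frequency */
double samplePixel(int px, int py, int n)
{
  double sum = 0;

  for (int j = 0; j < n; ++j)
    for (int i = 0; i < n; ++i)
      sum += traceRay(px + (i + 0.5) / n, py + (j + 0.5) / n);

  return sum / (n * n);
}

int main(void)
{
  /* 1 sample hits an arbitrary cell (prints 0.000000), 4x4 samples
     converge to the correct average gray (prints 0.500000): */
  printf("%f vs %f\n", samplePixel(10, 10, 1), samplePixel(10, 10, 4));
  return 0;
}
```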
**Why doesn't aliasing happen in our eyes and ears?** Because our senses don't sample the world discretely, i.e. in single points -- our senses [integrate](integration.md). E.g. a rod or a cone in our eyes doesn't just see exactly one point in the world but rather an averaged light over a small area (which is ideally right next to another small area seen by another cell, so there is no information to "hide" in between them), and it also doesn't sample the world at specific moments like cameras do; its excitation by light falls off gradually, which averages the light over time, preventing temporal aliasing (instead of aliasing we get [motion blur](motion_blur.md)). Also our brain does a lot of filtering and postprocessing of the raw input, what we see is not really what comes out of the retina, so EVEN IF there was a bit of aliasing here and there (because of some blind spots or something maybe?), the brain would probably learn to filter it out with "AI-style" magic, just like it filters out noise in low light conditions and so on.
So all in all, **how to prevent aliasing?** As said above, we always try to satisfy the sampling theorem, i.e. make our sampling frequency at least twice as high as the highest frequency in the signal we're sampling, or at least get close to this situation and lower the probability of aliasing. This can be done either by increasing the sampling frequency (which can be done in a smart way, e.g. some methods try to detect where sampling should be denser), or by preprocessing the input signal with a low pass filter or otherwise ensuring there won't be too high frequencies in it (e.g. by using lower resolution textures); a small sketch of the low pass approach follows.
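Here's a toy [C](c.md) sketch of that low pass approach (a crude box average stands in for a proper low pass filter; all names are made up for this example): downsampling an alternating signal naively turns it into a constant -- an alias -- while averaging first correctly removes the frequency the lower sampling rate can't represent:

```
#include <stdio.h>

#define N 16  /* length of the original signal */
#define D 4   /* downsampling factor */

/* downsamples by simply picking every D-th sample: frequencies too high
   for the new rate will alias into the result */
void downsampleNaive(const double *in, double *out)
{
  for (int i = 0; i < N / D; ++i)
    out[i] = in[i * D];
}

/* first low passes (here just a box average over each group of D samples,
   a real filter would do better), then downsamples: frequencies above the
   new Nyquist limit get suppressed instead of aliasing */
void downsampleFiltered(const double *in, double *out)
{
  for (int i = 0; i < N / D; ++i)
  {
    double sum = 0;

    for (int j = 0; j < D; ++j)
      sum += in[i * D + j];

    out[i] = sum / D;
  }
}

int main(void)
{
  double signal[N], naive[N / D], filtered[N / D];

  for (int i = 0; i < N; ++i)        /* fast alternation +1, -1, +1, ...  */
    signal[i] = i % 2 ? -1.0 : 1.0;  /* too high a frequency for rate N/D */

  downsampleNaive(signal, naive);
  downsampleFiltered(signal, filtered);

  for (int i = 0; i < N / D; ++i)    /* naive: all 1s (alias), filtered: 0s */
    printf("%f %f\n", naive[i], filtered[i]);

  return 0;
}
```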