Miloslav Ciz 2023-03-23 17:16:00 +01:00
parent f48a2a80fa
commit c4013d7128
4 changed files with 5 additions and 11 deletions


@@ -54,6 +54,6 @@ Aliasing is also a common problem in [computer graphics](computer_graphics.md).
The same thing may happen in [ray tracing](ray_tracing.md) if we shoot a single sampling ray for each screen pixel. Note that [interpolation/filtering](interpolation.md) of textures won't fix texture aliasing. What can be used to reduce texture aliasing are e.g. [mipmaps](mip.md) which store the texture along with its lower resolution versions -- during rendering a lower resolution version of the texture is chosen if the texture is rendered at a smaller size, so that the sampling theorem is satisfied. However this is still not a silver bullet because the texture may e.g. be shrunk in one direction but enlarged in the other (this is addressed by [anisotropic filtering](anisotropic_filtering.md)). However even if we sufficiently suppress aliasing in textures, aliasing can still appear in geometry. This can be reduced by [multisampling](multisampling.md), e.g. by sending multiple rays for each pixel and then averaging their results -- by this we **increase our sampling frequency** and lower the probability of aliasing.
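As a concrete illustration, the following is a small self-contained [C](c.md) sketch of this kind of multisampling -- just a toy example, not code from any real renderer: the hypothetical `castRay` function "renders" a fine checkerboard (a classic source of aliasing) and each pixel averages several randomly jittered samples of it.

```
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 4 // jittered rays per pixel; more samples, less aliasing

// toy "scene" ray cast: returns brightness of a fine checkerboard at the
// given point, a classic source of aliasing (hypothetical, for illustration)
float castRay(float x, float y)
{
  return ((int) (x * 16) + (int) (y * 16)) % 2;
}

// renders one pixel by averaging several randomly jittered samples
float renderPixel(int x, int y)
{
  float sum = 0;

  for (int i = 0; i < SAMPLES; ++i)
  {
    float rx = x + ((float) rand()) / RAND_MAX; // random point
    float ry = y + ((float) rand()) / RAND_MAX; // inside the pixel

    sum += castRay(rx / 8,ry / 8);
  }

  return sum / SAMPLES; // averaging increases effective sampling frequency
}

int main(void)
{
  for (int y = 0; y < 16; ++y)
  {
    for (int x = 0; x < 16; ++x)
      putchar('0' + (int) (renderPixel(x,y) * 9)); // brightness as digit

    putchar('\n');
  }

  return 0;
}
```

With `SAMPLES` set to 4 the output hovers around medium gray (the true average of the checkerboard); setting it to 1 brings back pure blacks and whites arranged in a false pattern, i.e. aliasing.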
**Why doesn't aliasing happen in our eyes and ears?** Because our senses don't sample the world discretely, i.e. in single points -- our senses [integrate](integration.md). E.g. a rod or a cone in our eyes doesn't see exactly one point in the world but rather light averaged over a small area (which is ideally right next to another small area seen by another cell, so there is no information to "hide" in between them), and it also doesn't sample the world at specific moments like cameras do: its excitation by light falls off gradually, which averages the light over time and prevents temporal aliasing.
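We can simulate this difference in a toy [C](c.md) program (the specific numbers are chosen just for illustration): point sampling a 9 Hz sine at 8 samples per second produces a false low frequency wave, while averaging many sub-samples over each sampling interval -- like a photoreceptor integrating light -- greatly attenuates it.

```
#include <stdio.h>
#include <math.h>

#define PI 3.141592653589793
#define SAMPLE_RATE 8    // samples per second
#define SIGNAL_FREQ 9.0  // Hz, above the Nyquist limit (SAMPLE_RATE / 2)

double signal(double t)
{
  return sin(2 * PI * SIGNAL_FREQ * t);
}

int main(void)
{
  for (int i = 0; i < SAMPLE_RATE; ++i)
  {
    double t = ((double) i) / SAMPLE_RATE;

    double point = signal(t); // point sample: aliases to a false 1 Hz wave

    double integrated = 0; // average 100 sub-samples over the interval

    for (int j = 0; j < 100; ++j)
      integrated += signal(t + ((double) j) / (100 * SAMPLE_RATE));

    integrated /= 100;

    // the integrated values stay much closer to zero: the high frequency
    // mostly cancels out instead of masquerading as a low one
    printf("t = %f: point = %f, integrated = %f\n",t,point,integrated);
  }

  return 0;
}
```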
So all in all, **how to prevent aliasing?** As said above, we always try to satisfy the sampling theorem, i.e. make our sampling frequency at least twice as high as the highest frequency in the signal we're sampling, or at least get close to this situation and so lower the probability of aliasing. This can be done either by increasing the sampling frequency (which can be done smartly, e.g. some methods try to detect where sampling should be denser), or by preprocessing the input signal with a low pass filter or otherwise ensuring it won't contain overly high frequencies.
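For example, downsampling a discrete signal might look something like the following minimal [C](c.md) sketch, which applies the simplest possible low pass filter (a box filter, i.e. plain averaging) before dropping samples; a serious resampler would use a better filter, e.g. windowed sinc.

```
#define FACTOR 4 // downsampling factor

// downsamples in (length n, assumed divisible by FACTOR) into out
void downsample(const float *in, float *out, int n)
{
  for (int i = 0; i < n / FACTOR; ++i)
  {
    float sum = 0;

    /* box (averaging) low pass filter: removes frequencies too high
       for the new, lower sampling rate before we decimate */
    for (int j = 0; j < FACTOR; ++j)
      sum += in[i * FACTOR + j];

    out[i] = sum / FACTOR;
  }
}
```

Naively taking every FACTORth sample instead would fold any high frequencies present in the input down into false low ones, which is exactly the aliasing we're trying to prevent.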