**Real-time** 3D typically uses **object-order** rendering, i.e. iterating over the objects in the scene and drawing them one by one onto the screen. This approach is fast but has disadvantages, such as (usually) needing a memory-inefficient [z-buffer](z_buffer.md) so that closer objects don't get overwritten by more distant ones. It is also pretty difficult to implement effects such as shadows or reflections in object-order rendering. The 3D models used in real-time 3D are practically always made of **triangles** (or other polygons) because the established GPU pipelines work on the principle of drawing polygons.
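The following is a minimal sketch (in plain C, with made-up names, not any real library's API) of the object-order principle: we draw two overlapping rectangles, each an "object" at a constant depth, and the z-buffer's per-pixel depth test resolves visibility no matter in which order the objects are drawn.

```c
#include <stdio.h>

#define W 40
#define H 12

char screen[H][W];  /* "framebuffer": one ASCII char per pixel        */
float zbuf[H][W];   /* z-buffer: distance of the closest pixel so far */

/* Draw one rectangle (an "object") at constant depth z; only pixels
   closer than what's already recorded in the z-buffer get written. */
void drawRect(int x0, int y0, int x1, int y1, float z, char c)
{
  for (int y = y0; y < y1; ++y)
    for (int x = x0; x < x1; ++x)
      if (z < zbuf[y][x]) /* the depth test */
      {
        zbuf[y][x] = z;
        screen[y][x] = c;
      }
}

int main(void)
{
  for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x)
    {
      screen[y][x] = '.';
      zbuf[y][x] = 1000000.0f; /* "infinitely" far away */
    }

  /* object order: we simply iterate over the objects -- the '#' rect
     is closer and so wins in the overlap despite being drawn first */
  drawRect(5,2,25,9,1.0f,'#');
  drawRect(15,4,35,11,2.0f,'O');

  for (int y = 0; y < H; ++y, putchar('\n'))
    for (int x = 0; x < W; ++x)
      putchar(screen[y][x]);

  return 0;
}
```

Note that without the z-buffer we'd have to draw the objects back to front (the painter's algorithm), which requires sorting and still breaks on intersecting objects.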
**Offline rendering** (non-real-time, e.g. 3D movies) on the other hand mostly uses **image-order** algorithms which go pixel by pixel and for each one determine what color that pixel should have. This is basically done by casting a ray from the camera's position through the pixel's position and calculating which objects in the scene the ray hits; this then determines the color of the pixel. This models more accurately how rays of light behave in real life (even though in real life the rays travel the opposite way, from lights to the camera, simulating that direction would be extremely inefficient). The advantages of this approach are much higher realism and the simplicity of implementing many effects such as shadows, reflections and refractions, as well as the possibility of using 3D models other than polygonal ones (in fact smooth, mathematically described shapes are normally much easier to check ray intersections with). Algorithms in this category include [ray tracing](ray_tracing.md) and [path tracing](path_tracing.md). In recent years we've seen these methods brought, in a limited way, to real-time graphics on high-end GPUs.
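Here is a minimal image-order sketch (again plain C with invented constants, just an illustration): for every pixel we cast a ray from the camera through that pixel and intersect it analytically with a single sphere, an exact mathematical shape with no triangles involved; compile with `-lm`.

```c
#include <stdio.h>
#include <math.h>

#define W 64
#define H 32

int main(void)
{
  float cx = 0, cy = 0, cz = 4, r = 1.5f; /* sphere center and radius */

  for (int y = 0; y < H; ++y, putchar('\n'))
    for (int x = 0; x < W; ++x)
    {
      /* direction of the ray cast from the camera (at the origin,
         looking along +z) through the current pixel */
      float dx = (x - W / 2) / (float) H,
            dy = (y - H / 2) / (float) H,
            dz = 1,
            l = sqrtf(dx * dx + dy * dy + dz * dz);

      dx /= l; dy /= l; dz /= l; /* normalize the direction */

      /* exact ray-sphere intersection: solve |t * d - c|^2 = r^2,
         a quadratic in t (a = 1 because d is a unit vector) */
      float b = -2 * (dx * cx + dy * cy + dz * cz),
            c = cx * cx + cy * cy + cz * cz - r * r,
            d2 = b * b - 4 * c;

      if (d2 < 0)
        putchar(' ');                   /* ray misses the sphere  */
      else
      {
        float t = (-b - sqrtf(d2)) / 2, /* closer of the two hits */
              s = (cz - t * dz) / r;    /* how much the surface
                                           normal faces the camera */
        if (s < 0) s = 0;
        putchar(".:-=+*#@"[(int) (s * 7.99f)]); /* crude shading */
      }
    }

  return 0;
}
```

Note how visibility needs no z-buffer here: taking the smaller root of the quadratic automatically picks the nearest surface along the ray, and extending this sketch toward shadows would be as simple as casting one more ray from the hit point toward the light.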
## See Also
- [computational photography](computational_photo.md)