There are many methods and [algorithms](algorithm.md) for doing so differing in many aspects such as computation complexity, implementation complexity, realism of the result, representation of the 3D data, limitations of viewing and so on. If you are just interested in the [realtime](realtime.md) 3D rendering used in [gaymes](game.md) nowadays, you are probably interested in [GPU](gpu.md)-accelerated 3D [rasterization](rasterization.md) with [APIs](api.md) such as [OpenGL](opengl.md) and [Vulkan](vulkan.md).
[LRS](lrs.md) has a 3D rendering library called [small3dlib](small3dlib.md).
## Methods
A table of some common 3D rendering methods follows, including the most simple, most advanced and some unconventional ones. Note that here we talk about methods and techniques rather than algorithms, i.e. general approaches that are often modified and combined into a specific rendering algorithm. For example the traditional triangle rasterization is sometimes combined with raytracing to add e.g. realistic reflections. The methods may also be further enriched with features such as [texturing](texture.md), [antialiasing](antialiasing.md) and so on. The table below should help you choose the base 3D rendering method for your specific program.
You may have come here just to learn about the typical realtime 3D rendering used in today's [games](game.md) because aside from research and niche areas this kind of 3D is what we normally deal with in practice. This is what this section is about.
Nowadays this kind of 3D stands for a [GPU](gpu.md) accelerated 3D [rasterization](rasterization.md) done with rendering [APIs](api.md) such as [OpenGL](opengl.md), [Vulkan](vulkan.md), [Direct3D](d3d.md) or [Metal](metal.md) (the last two being [proprietary](proprietary.md) and therefore [shit](shit.md)) and higher level engines above them, e.g. [Godot](godot.md), [OpenSceneGraph](osg.md) etc. The methods seem to be evolving to some kind of rasterization/[pathtracing](pathtracing.md) hybrid, but rasterization is still the basis.
This mainstream rendering uses an [object order](object_order.md) approach (it blits 3D objects onto the screen rather than determining each pixel's color separately) and works on the principle of **triangle rasterization**, i.e. 3D models are composed of triangles (or higher polygons which are however eventually broken down into triangles) and these triangles are projected onto the screen according to the position of the virtual camera and laws of [perspective](perspective.md). Projecting the triangles means finding the 2D screen coordinates of each of the triangle's three vertices -- once we have these coordinates, we draw (rasterize) the triangle to the screen just as a "normal" 2D triangle (well, with some asterisks).
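
To get a concrete idea, here is a small sketch in [C](c.md) of perhaps the simplest possible perspective projection of one vertex (it assumes the camera sits in the origin looking along the negative z axis; the screen size and focal length are made up numbers):

```
#include <stdio.h>

/* project a 3D point to 2D coordinates on an 800x600 screen; points
   behind the camera (z >= 0) would have to be clipped before this */
void project(float x, float y, float z, int *screenX, int *screenY)
{
  float focalLength = 300;

  *screenX = 400 + (int) (focalLength * x / -z);
  *screenY = 300 - (int) (focalLength * y / -z); /* screen y goes down */
}

int main(void)
{
  int sx, sy;

  project(1, 1, -2, &sx, &sy); /* a point in front of the camera */
  printf("%d %d\n", sx, sy);   /* prints 550 150 */

  return 0;
}
```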
Furthermore things such as [z-buffering](z_buffer.md) (for determining correct overlap of triangles) and [double buffering](double_buffering.md) are used, which makes this approach very memory ([RAM](ram.md)/[VRAM](vram.md)) expensive -- of course mainstream computers have more than enough memory but smaller computers (e.g. [embedded](embedded.md)) may suffer and be unable to handle this kind of rendering. Thankfully it is possible to adapt and imitate this kind of rendering even on "small" computers -- even those that don't have a GPU, i.e. with pure [software rendering](sw_rendering.md). For this we e.g. replace z-buffering with [painter's algorithm](painters_algorithm.md) (triangle sorting), drop features like [perspective correction](perspective_correction.md), [MIP mapping](mip_mapping.md) etc. (of course quality of the output will go down).
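
For illustration, the heart of z-buffering is pretty simple, something like the following sketch (names and screen size are made up; the z-buffer is supposed to be reset to a very large value before rendering each frame):

```
#include <stdint.h>

#define SCREEN_W 800
#define SCREEN_H 600

uint32_t frameBuffer[SCREEN_W * SCREEN_H];
float zBuffer[SCREEN_W * SCREEN_H]; /* cleared to a huge value each frame */

/* draw pixel only if nothing closer has been drawn at its position yet */
void drawPixel(int x, int y, float depth, uint32_t color)
{
  int index = y * SCREEN_W + x;

  if (depth < zBuffer[index])
  {
    zBuffer[index] = depth;
    frameBuffer[index] = color;
  }
}
```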
Additionally there's a lot of [bloat](bloat.md) added in, such as complex [screen space](screen_space.md) shaders, [pathtracing](pathtracing.md) (popularly known as *raytracing*), [megatexturing](megatexturing.md), [shadow rendering](shadow.md), [postprocessing](postprocessing.md), [compute shaders](compute_shader.md) etc. This may make it difficult to get into "modern" 3D rendering. Remember to [keep it simple](kiss.md).
On PCs the whole rendering process is hardware-accelerated with a [GPU](gpu.md) (graphics card). GPU is a special hardware capable of performing many operations in [parallel](parallelism.md) (as opposed to a [CPU](cpu.md) which mostly computes sequentially with low level of parallelism) -- this is great for graphics because we can for example perform mapping and drawing of many triangles at once, greatly increasing the speed of rendering ([FPS](fps.md)). However this hugely increases the [complexity](complexity.md) of the whole rendering system, we have to have a special [API](api.md) and [drivers](driver.md) for communication with the GPU and we have to upload data (3D models, textures, ...) to the GPU before we want to render them. [Debugging](debugging.md) gets a lot more difficult.
GPUs nowadays are kind of general devices that can be used for more than just 3D rendering (e.g. [crypto](crypto.md) mining) and can no longer perform 3D rendering by themselves -- for this they have to be programmed. I.e. if we want to use a GPU for rendering, not only do we need a GPU but also some extra code. This code is provided by "systems" such as [OpenGL](opengl.md) or [Vulkan](vulkan.md) which consist of an [API](api.md) (an [interface](interface.md) we use from a [programming language](programming_language.md)) and the underlying implementation in a form of a [driver](driver.md) (e.g. [Mesa3D](mesa3d.md)). Any such rendering system has its own architecture and details of how it works, so we have to study it a bit if we want to use it.
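
For a rough idea, uploading a small model (one triangle) to the GPU with the OpenGL API might look something like this sketch (it assumes an already created OpenGL context with the needed functions loaded; error checking is omitted):

```
#include <GL/gl.h>

/* one triangle, three vertices, x y z each */
float triangle[] = {
  -0.5f, -0.5f, 0.0f,
   0.5f, -0.5f, 0.0f,
   0.0f,  0.5f, 0.0f };

GLuint vbo; /* handle ("name") of the GPU buffer */

void uploadModel(void)
{
  glGenBuffers(1, &vbo);              /* create the buffer object */
  glBindBuffer(GL_ARRAY_BUFFER, vbo); /* make it the active buffer */

  /* copy the data from RAM to GPU memory: */
  glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle,
    GL_STATIC_DRAW);
}
```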
The important part of a system such as OpenGL is its **rendering [pipeline](pipeline.md)**. The pipeline is the "path" along which data travel during the rendering process. Each rendering system and even potentially each of its versions may have a slightly different pipeline (but generally all mainstream pipelines somehow achieve rasterizing triangles, the difference is in details of how they achieve it). The pipeline consists of **stages** that follow one after another (e.g. the mentioned mapping of vertices and drawing of triangles constitute separate stages). A very important fact is that some (not all) of these stages are programmable with so called **[shaders](shader.md)**. A shader is a program written in a special language (e.g. [GLSL](glsl.md) for OpenGL) running on the GPU that processes the data in some stage of the pipeline (therefore we distinguish different types of shaders based on at which part of the pipeline they reside). In early GPUs stages were not programmable but they became so as to give greater flexibility -- shaders allow us to implement all kinds of effects that would otherwise be impossible.
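
For example a trivial GLSL vertex shader, along with C code that hands it to the OpenGL driver for compilation, might look like this sketch (again assuming a working OpenGL context, error checking omitted):

```
#include <GL/gl.h>

/* GLSL source of a vertex shader that just transforms each vertex by
   a model-view-projection matrix supplied by our program: */
const char *vertexShaderSrc =
  "#version 330\n"
  "layout (location = 0) in vec3 position;\n"
  "uniform mat4 mvp;\n"
  "void main()\n"
  "{\n"
  "  gl_Position = mvp * vec4(position, 1.0);\n"
  "}\n";

GLuint compileVertexShader(void)
{
  GLuint shader = glCreateShader(GL_VERTEX_SHADER);

  glShaderSource(shader, 1, &vertexShaderSrc, 0); /* hand over source */
  glCompileShader(shader); /* the driver compiles it for its GPU */

  return shader;
}
```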
Let's see what a typical pipeline might look like, similar to something we might see e.g. in OpenGL. We normally simulate such a pipeline also in [software renderers](sw_rendering.md). Note that the details such as the coordinate system [handedness](handedness.md) and presence, order, naming or programmability of different stages will differ in any particular pipeline, this is just one possible scenario:
1. Vertex data (e.g. 3D [model space](model_space.md) coordinates of triangle vertices of a 3D model) are taken from a vertex buffer (a GPU memory to which the data have been uploaded).
2. **Stage: [vertex shader](vertex_shader.md)**: Each vertex is processed with a vertex shader, i.e. one vertex goes into the shader and one vertex (processed) goes out. Here the shader typically maps the vertex 3D coordinates to the screen 2D coordinates (or [normalized device coordinates](ndc.md)) by:
- multiplying the vertex by a [model matrix](model_matrix.md) (transforms from [model space](model_space.md) to [world space](world_space.md), i.e. applies the model move/rotate/scale operations)
- multiplying by [view matrix](view_matrix.md) (transforms from [world space](world_space.md) to [camera space](camera_space.md), i.e. takes into account camera position and rotation)
- multiplying by [projection matrix](projection_matrix.md) (applies perspective, transforms from [camera space](camera_space.md) to [screen space](screen_space.md) in [homogeneous coordinates](homogeneous_coordinates.md))
3. Possible optional stages that follow are [tessellation](tessellation.md) and geometry processing ([tessellation shaders](tessellation_shader.md) and [geometry shader](geometry_shader.md)). These offer the possibility of advanced vertex processing (e.g. generation of extra vertices, which vertex shaders are unable to do).
4. **Stage: vertex post processing**: Usually not programmable (no shaders here). Here the GPU does things such as [clipping](clipping.md) (handling vertices outside the screen space), primitive assembly and [perspective divide](perspective_divide.md) (transforming from [homogeneous coordinates](homogeneous_coordinates.md) to traditional [cartesian coordinates](cartesian_coordinates.md)).
5. **Stage: [rasterization](rasterization.md)**: Usually not programmable, the GPU here turns triangles into actual [pixels](pixel.md) (or [fragments](fragment.md)), possibly applying [backface culling](backface_culling.md), [perspective correction](perspective_correction.md) and things like [stencil test](stencil_test.md) and [depth test](depth_test.md) (though if fragment shaders are allowed to modify depth, this may be postponed until later).
6. **Stage: pixel/fragment processing**: Each pixel (fragment) produced by rasterization is processed here by a [pixel/fragment shader](fragment_shader.md). The shader is passed the pixel/fragment along with its coordinates, depth and possibly other attributes, and outputs a processed pixel/fragment with a specific color. Typically here we perform [shading](shading.md) and [texturing](texture.md) (pixel/fragment shaders can access texture data which are again stored in texture buffers on the GPU).
7. Now the pixels are written to the output buffer which will be shown on screen. This can potentially be preceded by other operations such as depth tests, as mentioned above.
As a simple example of specific data going through the pipeline, consider pushing a single vertex through the vertex transform stages described above. The following is a minimal sketch in [C](c.md) simulating (on the CPU) what the vertex shader and the perspective divide do, with made up numbers and the simplest possible matrices:
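
```
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;

/* multiply a 4x4 matrix (stored by rows) with a vector */
Vec4 mul(const float m[16], Vec4 v)
{
  Vec4 r;

  r.x = m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w;
  r.y = m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w;
  r.z = m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w;
  r.w = m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w;

  return r;
}

int main(void)
{
  Vec4 v = {1, 1, 0, 1}; /* vertex in model space */

  float model[16] = { /* model matrix: move the model 4 units away */
    1, 0, 0,  0,
    0, 1, 0,  0,
    0, 0, 1, -4,
    0, 0, 0,  1 };

  float view[16] = { /* view matrix: camera at origin, no rotation */
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1 };

  float projection[16] = { /* simplest perspective: w becomes -z */
    1, 0,  0, 0,
    0, 1,  0, 0,
    0, 0,  1, 0,
    0, 0, -1, 0 };

  v = mul(model, v);      /* to world space:  (1,1,-4,1) */
  v = mul(view, v);       /* to camera space: same here  */
  v = mul(projection, v); /* to clip space:   (1,1,-4,4) */

  /* perspective divide gives normalized device coordinates: */
  printf("%f %f %f\n", v.x / v.w, v.y / v.w, v.z / v.w);
  /* prints 0.250000 0.250000 -1.000000 */

  return 0;
}
```

The resulting [normalized device coordinates](ndc.md) would then be mapped to concrete screen pixels, and once all three vertices of a triangle have been processed like this, the triangle can be rasterized.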