From fa845d3bee69321ab6c97aa0783711f18c56c2a4 Mon Sep 17 00:00:00 2001
From: Miloslav Ciz
Date: Wed, 5 Apr 2023 21:52:44 +0200
Subject: [PATCH] Update

---
 3d_rendering.md      | 154 ++++++++++++++++++++++++++++++++++++++++++-
 productivity_cult.md |   2 +
 2 files changed, 153 insertions(+), 3 deletions(-)

diff --git a/3d_rendering.md b/3d_rendering.md
index dfb93ba..548ece6 100644
--- a/3d_rendering.md
+++ b/3d_rendering.md
@@ -8,6 +8,8 @@ There are many methods and [algorithms](algorithm.md) for doing so differing in
 
 ## Methods
 
+As most existing 3D "frameworks" are harmful, a [LRS](lrs.md) programmer is likely to write his own 3D rendering system that suits his program best, therefore we should list some common methods of achieving 3D. Besides that, it's just pretty interesting to see what there is in store.
+
 A table of some common 3D rendering methods follows, including the most simple, most advanced and some unconventional ones. Note that here we talk about methods and techniques rather than algorithms, i.e. general approaches that are often modified and combined into a specific rendering algorithm. For example the traditional triangle rasterization is sometimes combined with raytracing to add e.g. realistic reflections. The methods may also be further enriched with features such as [texturing](texture.md), [antialiasing](antialiasing.md) and so on. The table below should help you choose the base 3D rendering method for your specific program.
 
 The methods may be tagged with the following:
@@ -49,11 +51,157 @@ TODO: find out how build engine/slab6 voxel rendering worked and possibly add it
 
 TODO: VoxelQuest has some innovative voxel rendering, check it out (https://www.voxelquest.com/news/how-does-voxel-quest-work-now-august-2015-update)
 
+## 3D Rendering Basics For Nubs
+
+If you're a complete noob and are asking what the essence of 3D is or just how to render simple 3Dish pictures for your game without needing a PhD, here's the very basics.
Yes, you can use some 3D engine such as [Godot](godot.md) that has all the 3D rendering preprogrammed, but you'll surrender to [bloat](bloat.md), you won't really know what's going on and your ability to tinker with the rendering or optimize it will be basically zero... AND you'll miss out on all the [fun](fun.md) :) So here we just foreshadow some concepts you should start with if you want to program your own 3D rendering.
+
+The absolute basic thing in 3D is probably **[perspective](perspective.md)**, or the concept which says that "things further away look smaller". This is basically the number one thing you need to know and with which you can make simple 3D pictures, even though there are many more effects and concepts that "make pictures look 3D" and which you can potentially study later (lighting, shadows, [focus and blur](depth_of_field.md), [stereoscopy](stereo.md), [parallax](parallax.md), visibility/obstruction etc.). { It's probably possible to make something akin to "3D" even without perspective, just with [orthographic](ortho.md) projection, but that's just getting into details now. Let's just suppose we need perspective. ~drummyfish }
+
+If you don't have a rotating camera and other fancy things, perspective is actually mathematically very simple, you basically just **divide the object's size by its distance from the viewer**, i.e. its Z coordinate (you may divide by some multiple of the Z coordinate, e.g. by 2 * Z, to get a different [field of view](fov.md)) -- the further away it is, the bigger number its size gets divided by, so the smaller it becomes. This "dividing by distance" ultimately applies to all distances, so in the end even the details on the object get scaled according to their individual distance, but as a first approximation you may just consider scaling objects as a whole. Just keep in mind you should only draw objects whose Z coordinate is above some threshold (usually called a *near plane*) so that you don't divide by 0!
With this "dividing by distance" trick you can make an extremely simple "3Dish" renderer that just draws [sprites](sprite.md) on the screen and scales them according to the perspective rules (e.g. some space simulator where the sprites are balls representing planets). There is one more thing you'll need to handle: **[visibility](visibility.md)**, i.e. nearer objects have to cover those further away -- you can do this by simply [sorting](sorting.md) the objects by distance and drawing them back-to-front ([painter's algorithm](painters_algorithm.md)).
+
+Here is some "simple" [C](c.md) code that demonstrates perspective and draws a basic animated wireframe cuboid as ASCII in the terminal:
+
+```
+#include <stdio.h>
+
+#define SCREEN_W 50 // ASCII screen width
+#define SCREEN_H 22 // ASCII screen height
+#define LINE_POINTS 64 // how many points for drawing a line
+#define FOV 8 // affects "field of view"
+#define FRAMES 30 // how many animation frames to draw
+
+char screen[SCREEN_W * SCREEN_H];
+
+void showScreen(void)
+{
+  for (int y = 0; y < SCREEN_H; ++y)
+  {
+    for (int x = 0; x < SCREEN_W; ++x)
+      putchar(screen[y * SCREEN_W + x]);
+
+    putchar('\n');
+  }
+}
+
+void clearScreen(void)
+{
+  for (int i = 0; i < SCREEN_W * SCREEN_H; ++i)
+    screen[i] = ' ';
+}
+
+// Draws point to 2D ASCII screen, [0,0] means center.
+void drawPoint2D(int x, int y, char c)
+{
+  x = SCREEN_W / 2 + x;
+  y = SCREEN_H / 2 + y;
+
+  if (x >= 0 && x < SCREEN_W && y >= 0 && y < SCREEN_H)
+    screen[y * SCREEN_W + x] = c;
+}
+
+// Divides coord. by distance taking "FOV" into account => perspective.
+int perspective(int coord, int distance) +{ + return (FOV * coord) / distance; +} + +void drawPoint3D(int x, int y, int z, char c) +{ + if (z <= 0) + return; // at or beyond camera, don't draw + + drawPoint2D(perspective(x,z),perspective(y,z),c); +} + +int interpolate(int a, int b, int n) +{ + return a + ((b - a) * n) / LINE_POINTS; +} + +void drawLine3D(int x1, int y1, int z1, int x2, int y2, int z2, char c) +{ + for (int i = 0; i < LINE_POINTS; ++i) // draw a few points to form a line + drawPoint3D(interpolate(x1,x2,i),interpolate(y1,y2,i),interpolate(z1,z2,i),c); +} + +int main(void) +{ + int shiftX, shiftY, shiftZ; + + #define N 12 // side length + #define C '*' + + // cuboid points: + // X Y Z + #define PA -2 * N + shiftX, N + shiftY, N + shiftZ + #define PB 2 * N + shiftX, N + shiftY, N + shiftZ + #define PC 2 * N + shiftX, N + shiftY, 2 * N + shiftZ + #define PD -2 * N + shiftX, N + shiftY, 2 * N + shiftZ + #define PE -2 * N + shiftX, -N + shiftY, N + shiftZ + #define PF 2 * N + shiftX, -N + shiftY, N + shiftZ + #define PG 2 * N + shiftX, -N + shiftY, 2 * N + shiftZ + #define PH -2 * N + shiftX, -N + shiftY, 2 * N + shiftZ + + for (int i = 0; i < FRAMES; ++i) // render animation + { + clearScreen(); + + shiftX = -N + (i * 4 * N) / FRAMES; // animate + shiftY = -N / 3 + (i * N) / FRAMES; + shiftZ = 0; + + // bottom: + drawLine3D(PA,PB,C); drawLine3D(PB,PC,C); drawLine3D(PC,PD,C); drawLine3D(PD,PA,C); + + // top: + drawLine3D(PE,PF,C); drawLine3D(PF,PG,C); drawLine3D(PG,PH,C); drawLine3D(PH,PE,C); + + // sides: + drawLine3D(PA,PE,C); drawLine3D(PB,PF,C); drawLine3D(PC,PG,C); drawLine3D(PD,PH,C); + + drawPoint3D(PA,'A'); drawPoint3D(PB,'B'); // corners + drawPoint3D(PC,'C'); drawPoint3D(PD,'D'); + drawPoint3D(PE,'E'); drawPoint3D(PF,'F'); + drawPoint3D(PG,'G'); drawPoint3D(PH,'H'); + + showScreen(); + + puts("press key to animate"); + getchar(); + } + + return 0; +} +``` + +One frame of the animation should look like this: + +``` + 
E*******************************F + * * *** * + * ** *** * + * H***************G* * + * * * * + * * * * + * * * * + * * * * + * * * * + * * * * + * D***************C * + * ** *** * + * * * * + * * ** * + *** * * * + A*******************************B + +press key to animate +``` + ## Mainstream Realtime 3D You may have come here just to learn about the typical realtime 3D rendering used in today's [games](game.md) because aside from research and niche areas this kind of 3D is what we normally deal with in practice. This is what this section is about. -Nowadays this kind of 3D stands for a [GPU](gpu.md) accelerated 3D [rasterization](rasterization.md) done with rendering [APIs](api.md) such as [OpenGL](opengl.md), [Vulkan](vulkan.md), [Direct3D](d3d.md) or [Metal](metal.md) (the last two being [proprietary](proprietary.md) and therefore [shit](shit.md)) and higher level engines above them, e.g. [Godot](godot.md), [OpenSceneGraph](osg.md) etc. The methods seem to be evolving to some kind of rasterization/[pathtracing](pathtracing.md) hybrid, but rasterization is still the basis. +Nowadays "game 3D" means a [GPU](gpu.md) accelerated 3D [rasterization](rasterization.md) done with rendering [APIs](api.md) such as [OpenGL](opengl.md), [Vulkan](vulkan.md), [Direct3D](d3d.md) or [Metal](metal.md) (the last two being [proprietary](proprietary.md) and therefore [shit](shit.md)) and higher level engines above them, e.g. [Godot](godot.md), [OpenSceneGraph](osg.md) etc. The methods seem to be evolving to some kind of rasterization/[pathtracing](pathtracing.md) hybrid, but rasterization is still the basis. This mainstream rendering uses an [object order](object_order.md) approach (it blits 3D objects onto the screen rather than determining each pixel's color separately) and works on the principle of **triangle rasterization**, i.e. 
3D models are composed of triangles (or higher polygons which are however eventually broken down into triangles) and these triangles are projected onto the screen according to the position of the virtual camera and the laws of [perspective](perspective.md). Projecting the triangles means finding the 2D screen coordinates of each of the triangle's three vertices -- once we have these coordinates, we draw (rasterize) the triangle to the screen just as a "normal" 2D triangle (well, with some asterisks).
@@ -61,9 +209,9 @@ Furthermore things such as [z-buffering](z_buffer.md) (for determining correct o
 
 Additionally there's a lot of [bloat](bloat.md) added in, such as complex [screen space](screen_space.md) shaders, [pathtracing](pathtracing.md) (popularly known as *raytracing*), [megatexturing](megatexturing.md), [shadow rendering](shadow.md), [postprocessing](postprocessing.md), [compute shaders](compute_shader.md) etc. This may make it difficult to get into "modern" 3D rendering. Remember to [keep it simple](kiss.md).
 
-On PCs the whole rendering process is hardware-accelerated with a [GPU](gpu.md) (graphics card). GPU is a special hardware capable of performing many operations in [parallel](parallelism.md) (as opposed to a [CPU](cpu.md) which mostly computes sequentially with low level of parallelism) -- this is great for graphics because we can for example perform mapping and drawing of many triangles at once, greatly increasing the speed of rendering ([FPS](fps.md)). However this hugely increases the [complexity](complexity.md) of the whole rendering system, we have to have a special [API](api.md) and [drivers](driver.md) for communication with the GPU and we have to upload data (3D models, textures, ...) to the GPU before we want to render them. [Debugging](debugging.md) gets a lot more difficult.
+On PCs the whole rendering process is hardware-accelerated with a [GPU](gpu.md) (graphics card).
A GPU is special hardware capable of performing many operations in [parallel](parallelism.md) (as opposed to a [CPU](cpu.md), which mostly computes sequentially with a low level of parallelism) -- this is ideal for graphics because we can for example perform mapping and drawing of many triangles at once, greatly increasing the speed of rendering ([FPS](fps.md)). However this hugely increases the [complexity](complexity.md) of the whole rendering system: we have to have a special [API](api.md) and [drivers](driver.md) for communication with the GPU and we have to upload data (3D models, textures, ...) to the GPU before we want to render them. [Debugging](debugging.md) gets a lot more difficult. So again, this is [bloat](bloat.md), consider avoiding GPUs.
 
-GPU nowadays are kind of general devices that can be used for more than just 3D rendering (e.g. [crypto](crypto.md) mining) and can no longer perform 3D rendering by themselves -- for this they have to be programmed. I.e. if we want to use a GPU for rendering, not only do we need a GPU but also some extra code. This code is provided by "systems" such as [OpenGL](opengl.md) or [Vulkan](vulkan.md) which consist of an [API](api.md) (an [interface](interface.md) we use from a [programming language](programming_language.md)) and the underlying implementation in a form of a [driver](driver.md) (e.g. [Mesa3D](mesa3d.md)). Any such rendering system has its own architecture and details of how it works, so we have to study it a bit if we want to use it.
+GPUs nowadays no longer focus just on graphics; they are kind of general devices that can be used for more than just 3D rendering (e.g. [crypto](crypto.md) mining, training [AI](ai.md) etc.) and can no longer even perform 3D rendering completely by themselves -- for this they have to be programmed. I.e. if we want to use a GPU for rendering, not only do we need a GPU but also some extra code.
This code is provided by "systems" such as [OpenGL](opengl.md) or [Vulkan](vulkan.md) which consist of an [API](api.md) (an [interface](interface.md) we use from a [programming language](programming_language.md)) and the underlying implementation in the form of a [driver](driver.md) (e.g. [Mesa3D](mesa3d.md)). Any such rendering system has its own architecture and details of how it works, so we have to study it a bit if we want to use it.
+
+The important part of a system such as OpenGL is its **rendering [pipeline](pipeline.md)**. The pipeline is the "path" along which data flow through the rendering process. Each rendering system and potentially even each of its versions may have a slightly different pipeline (but generally all mainstream pipelines somehow achieve rasterizing triangles, the difference is in the details of how they achieve it). The pipeline consists of **stages** that follow one after another (e.g. the mentioned mapping of vertices and drawing of triangles constitute separate stages). A very important fact is that some (not all) of these stages are programmable with so-called **[shaders](shader.md)**. A shader is a program written in a special language (e.g. [GLSL](glsl.md) for OpenGL) running on the GPU that processes the data in some stage of the pipeline (therefore we distinguish different types of shaders based on where in the pipeline they reside). In early GPUs stages were not programmable but they later became so to give greater flexibility -- shaders allow us to implement all kinds of effects that would otherwise be impossible.
diff --git a/productivity_cult.md b/productivity_cult.md
index 061b26f..4a2d21a 100644
--- a/productivity_cult.md
+++ b/productivity_cult.md
@@ -1,5 +1,7 @@
 # Productivity Cult
 
+*"PRODUCE PRODUCE PRODUCE PRODUCE PRODUCE"* --[capitalism](capitalism.md)
+
 Productivity cult is one of the [modern](modern.md) [capitalist](capitalism.md) religions, one which praises human productivity above everything, even happiness, well-being, sanity etc. Kids nowadays are all about "how to be more motivated and productive", they make daily checklists, analyze tables of their weekly performance, count how much time they spend taking a shit on the toilet, give up sleep to study some useless bullshit required by the current market fluctuation. Productivity cult is all about voluntarily making oneself a robot, a slave to the system that worships capital. The name of the cult itself [says a lot about it](name_is_important.md). While a name such as *efficiency* would probably be better, as efficiency means doing less work with the same result and therefore having more free time, it is not a surprise that capitalism has chosen the word *productivity*, i.e. producing more, which means working more, e.g. at the price of free time and mental health.