# Computer Graphics
Computer graphics (CG or just graphics) is a field of [computer science](compsci.md) that deals with visual information. The field doesn't have strict boundaries and blends and overlaps with other topics such as physical simulation, multimedia and machine learning. It usually deals with creating or analyzing 2D and 3D images and as such CG is used in data visualization, [game](game.md) development, [virtual reality](vr.md), [optical character recognition](ocr.md) and even astrophysics or medicine.
We can divide computer graphics in different ways, traditionally e.g.:
- by direction:
- **[rendering](rendering.md)**: Creating images.
- **[computer vision](computer_vision.md)**: Extracting information from existing images.
- by basic elements:
- **raster**: Deals with images composed of a uniform grid of points called [pixels](pixel.md) (in 2D) or [voxels](voxel.md) (in 3D).
- **vector**: Deals with images composed of geometrical primitives such as curves or triangles.
- by dimension:
- **[2D](2d.md)**: Deals with images of a 2D plane.
- **[3D](3d.md)**: Deals with images that capture three dimensional space.
- by speed:
- **[real time](real_time.md)**: Trying to work with images in real time, e.g. being able to produce or analyze 60 frames per second.
- **offline**: Processes or creates images over longer time-spans, e.g. hours or days.
Since the 90s computers have been using dedicated hardware to accelerate graphics: so called [graphics processing units](gpu.md) (GPUs). These have allowed rendering of high quality images at high [FPS](fps.md), and thanks to the entertainment and media industry (especially gaming), GPUs have been pushed towards greater performance each year. Nowadays they are among the most consumerist pieces of [hardware](hardware.md), also due to the emergence of general purpose computations being moved to GPUs (GPGPU) and lately the mining of [cryptocurrencies](crypto.md). Most lazy programs dealing with graphics nowadays simply expect and require a GPU, which creates a bad [dependency](dependency.md). At [LRS](lrs.md) we prefer the [suckless](suckless.md) approach of **[software rendering](sw_rendering.md)**, i.e. rendering on the [CPU](cpu.md) without a GPU, or at least offering this as an option in case a GPU isn't available. This many times leads us towards the adventure of using old and forgotten algorithms from the times before GPUs.
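To illustrate what software rendering boils down to, here is a minimal sketch in C (all names such as `FB_WIDTH` and `drawPixel` are made up for this example, not any real library's API) that draws a gradient into an in-memory framebuffer and writes it out as a PPM image, with no GPU involved:

```c
#include <stdio.h>

#define FB_WIDTH  320   /* framebuffer resolution, chosen arbitrarily */
#define FB_HEIGHT 240

unsigned char framebuffer[FB_HEIGHT][FB_WIDTH][3]; /* RGB, 1 byte per channel */

void drawPixel(int x, int y, unsigned char r, unsigned char g, unsigned char b)
{
  framebuffer[y][x][0] = r;
  framebuffer[y][x][1] = g;
  framebuffer[y][x][2] = b;
}

int main(void)
{
  /* "render" something: here just a simple color gradient */
  for (int y = 0; y < FB_HEIGHT; ++y)
    for (int x = 0; x < FB_WIDTH; ++x)
      drawPixel(x,y,(255 * x) / FB_WIDTH,(255 * y) / FB_HEIGHT,128);

  /* dump the framebuffer to stdout as a binary PPM image */
  printf("P6 %d %d 255\n",FB_WIDTH,FB_HEIGHT);
  fwrite(framebuffer,1,sizeof(framebuffer),stdout);

  return 0;
}
```

A real software renderer works the same way at the bottom: everything eventually comes down to setting pixel values in a buffer that then gets displayed.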
## 3D Graphics
3D graphics is a big part of CG but is a lot more complicated than 2D. It tries to achieve **realism** through the use of [perspective](perspective.md), i.e. to look at least a bit like what we see in the real world. Due to this 3D can be thought of as **simulating the behavior of light**. There exists a so called *rendering equation* that describes how light ideally behaves, and 3D computer graphics is all about trying to approximate solutions of this equation.
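For reference, one common form of the rendering equation is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) +
  \int_\Omega f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i
```

Here L_o is the light leaving point x in direction omega_o, L_e is light emitted by the point itself, and the integral sums, over the hemisphere Omega above the surface, the incoming light L_i weighted by the surface's reflectance (f_r, the BRDF) and the angle of incidence (omega_i dot n, where n is the surface normal). The recursion hidden in L_i (incoming light is itself someone else's outgoing light) is what makes exact solutions impractical and approximation necessary.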
Because 3D is not very easy, there exist many **3D engines** and libraries that you'll probably want to use. These engines/libraries work on different levels of [abstraction](abstraction.md): the lowest ones, such as [OpenGL](opengl.md) and [Vulkan](vulkan.md), offer a portable API for communicating with the GPU that lets you quickly draw triangles and write small programs that run in parallel on the GPU -- so called [shaders](shader.md). The higher level ones, such as [OpenSceneGraph](osg.md), work with abstractions such as a **camera** and a **scene** into which we place specific 3D objects such as models and lights (the scene is many times represented as a hierarchical graph of objects that can be "attached" to other objects).
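To illustrate the scene graph idea, a minimal sketch in C might look like this (the structure and names are made up for illustration, this is not the API of any real engine):

```c
#define MAX_CHILDREN 8

/* one node of a hierarchical scene graph: its transform is relative
   to the parent's, so moving a parent moves everything attached
   below it */
typedef struct SceneNode
{
  float position[3];     /* translation relative to parent      */
  float rotation[3];     /* Euler rotation relative to parent   */
  struct SceneNode *children[MAX_CHILDREN];
  int childCount;
  void *object;          /* attached model, light, camera, ...  */
} SceneNode;

void sceneNodeAttach(SceneNode *parent, SceneNode *child)
{
  if (parent->childCount < MAX_CHILDREN)
    parent->children[parent->childCount++] = child;
}

/* rendering then typically walks the tree recursively, accumulating
   transforms along the way */
void renderNode(SceneNode *node /* , accumulated transform ... */)
{
  /* draw node->object with the accumulated transform here */

  for (int i = 0; i < node->childCount; ++i)
    renderNode(node->children[i]);
}
```

The key design point is that each node's transform is relative to its parent, so e.g. a weapon attached to a player node automatically moves with the player.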
There is a tiny [suckless](suckless.md)/[LRS](lrs.md) library for real-time 3D: [small3dlib](small3dlib.md). It uses software rendering (no GPU) and can be used for simple 3D programs that can run even on low-spec embedded devices. [TinyGL](tinygl.md) is a similar software-rendering library that implements a subset of [OpenGL](opengl.md).
**Real-time** 3D typically uses **object-order** rendering, i.e. iterating over objects in the scene and drawing them onto the screen (we draw object by object). This approach is fast but has disadvantages such as (usually) needing a memory inefficient [z-buffer](z_buffer.md) so that closer objects don't get overwritten by more distant ones. It is also pretty difficult to implement effects such as shadows or reflections in object-order rendering. The 3D models used in real-time 3D are practically always made of **triangles** (or other polygons) because the established GPU pipeline works on the principle of drawing polygons.
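The z-buffer itself is very simple at its core; a minimal sketch in C (names made up, assuming smaller depth values mean closer to the camera):

```c
#define SCREEN_W 320
#define SCREEN_H 240

unsigned char screen[SCREEN_H][SCREEN_W]; /* color buffer              */
float zBuffer[SCREEN_H][SCREEN_W];        /* one depth value per pixel */

/* called before rendering each frame */
void zBufferClear(void)
{
  for (int y = 0; y < SCREEN_H; ++y)
    for (int x = 0; x < SCREEN_W; ++x)
      zBuffer[y][x] = 1000000.0f; /* "infinitely" far */
}

/* called for every pixel of every rasterized triangle: the pixel is
   only drawn if nothing closer has been drawn there already */
void drawPixelDepthTested(int x, int y, unsigned char color, float depth)
{
  if (depth < zBuffer[y][x])
  {
    zBuffer[y][x] = depth;
    screen[y][x] = color;
  }
}
```

Note that the depth buffer above costs an extra 4 bytes for every screen pixel, which is exactly the memory inefficiency mentioned above.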
**Offline rendering** (non-real-time, e.g. 3D movies) on the other hand mostly uses **image-order** algorithms which go pixel by pixel and for each one determine what color the pixel should have. This is basically done by casting a ray from the camera's position through the "pixel" position and calculating which objects in the scene get hit by the ray; this then determines the color of the pixel. This more accurately models how rays of light behave in real life (even though in real life the rays travel the opposite way, from lights to the camera, simulating them that way would be extremely inefficient). The advantages of this process are much higher realism and the implementation simplicity of many effects like shadows, reflections and refractions, as well as the possibility of using other than polygonal 3D models (in fact smooth, mathematically described shapes are normally much easier to check ray intersections with). Algorithms in this category include [ray tracing](ray_tracing.md) and [path tracing](path_tracing.md). In recent years we've seen these methods brought, in a limited way, to real-time graphics on high end GPUs.
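The heart of image-order rendering fits in a few lines. The following toy sketch in C (a made up example, not any specific renderer) casts a ray for each character-pixel against a single sphere and prints the result as ASCII art:

```c
#include <stdio.h>

#define W 64
#define H 32

/* returns 1 if a ray from the origin in direction (dx,dy,dz) hits a
   unit sphere centered at (0,0,4), 0 otherwise */
int raySphereHit(float dx, float dy, float dz)
{
  float cz = 4.0f; /* sphere center z coordinate, radius is 1 */

  /* solve |t * d - c|^2 = r^2 for t and check the discriminant */
  float a = dx * dx + dy * dy + dz * dz;
  float b = -2.0f * dz * cz;
  float c = cz * cz - 1.0f;

  return b * b - 4.0f * a * c >= 0.0f;
}

int main(void)
{
  for (int y = 0; y < H; ++y)
  {
    for (int x = 0; x < W; ++x)
    {
      /* shoot a ray through the "pixel" on an image plane at z = 1 */
      float dx = (x - W / 2) / (float) W;
      float dy = (y - H / 2) / (float) H;

      putchar(raySphereHit(dx,dy,1.0f) ? '#' : '.');
    }

    putchar('\n');
  }

  return 0;
}
```

Note that checking the ray against the sphere reduces to solving a quadratic equation, which shows why smooth, mathematically described shapes are so friendly to ray based methods.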