# Normalization
Normalization is a term that can mean slightly different things depending on context, but generally it refers either to adjusting a set of values to some desired range by multiplying or dividing each of the values by some predetermined number, or to converting some data or expression into a unified format. The idea is to "tame" possibly wildly differing values that we can encounter "in the wild" into something more "normal" that we can better work with. The following are some specific meanings of the term depending on context:
- **[vector](vector.md) normalization**: Making given vector into a unit vector by dividing all its components by the length of the vector, i.e. we keep the direction of the vector the same but force its length to be exactly 1.
- **signal normalization**: Adjusting the range of the signal to a desired range, for example with audio or images in which samples can range from -1 to 1 we may want to divide all the samples by the maximum of absolute values of all the samples which will stretch the signal so that the peak exactly fits the range: this will fully utilize the range (e.g. increase contrast in images) without cutting the signal off.
- **[URI](uri.md) normalization**: Converting URI into a unified format (e.g. `HTTP://www.MYSITE.COM:80/index.html` to `http://www.mysite.com`).

## 2D Raycasting
{ We have an official [LRS](lrs.md) library for advanced 2D raycasting: [raycastlib](raycastlib.md)! And also a game built on top of it: [Anarch](anarch.md). ~drummyfish }
2D raycasting can be used to relatively easily render "3Dish" looking environments (commonly labeled "[pseudo 3D](pseudo3D.md)"), mostly some kind of right-angled labyrinth. There are limitations such as the inability of the camera to tilt up and down (which can nevertheless be faked with shearing). It used to be popular in very old games but can still be used nowadays for "retro" looking games, games for very weak hardware (e.g. [embedded](embedded.md)), in [demos](demoscene.md) etc. It is a pretty cool, very [suckless](suckless.md) rendering method.
```
................................................................................
/////////.......................................................................
///////////////.................................................................
//////////////////////..........................................................
//////////////////////..........................................................
//////////////////////..........................................................
//////////////////////..........................................................
//////////////////////..............................X//.........................
//////////////////////...........................XXXX//////////............XXXXX
//////////////////////.......................XXXXXXXX////////////////XXXXXXXXXXX
//////////////////////.......................XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////....XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////////XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////////XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////////XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////////XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////////XXXXXXXX////////////////XXXXXXXXXXX
/////////////////////////////////////////....XXXXXXXX////////////////XXXXXXXXXXX
//////////////////////.......................XXXXXXXX////////////////XXXXXXXXXXX
//////////////////////.......................XXXXXXXX////////////////XXXXXXXXXXX
//////////////////////...........................XXXX//////////............XXXXX
//////////////////////..............................X//.........................
//////////////////////..........................................................
//////////////////////..........................................................
//////////////////////..........................................................
//////////////////////..........................................................
///////////////.................................................................
/////////.......................................................................
................................................................................
................................................................................
```
*image rendered with [raycastlib](raycastlib.md)*
The method is called *2D* because even though the rendered picture looks like a 3D view, the representation of the world we are rendering is 2 dimensional (usually a grid, a top-down plan of the environment with cells of either empty space or walls) and the casting of the rays is performed in this 2D space -- unlike with 3D raycasting, which really does cast rays in fully 3D environments. Also unlike the 3D version, which casts one ray for each rendered pixel (x * y rays per frame), 2D raycasting only casts **one ray per rendered column** (x rays per frame), which drastically reduces the number of rays cast and makes this method **fast enough for [real time](real_time.md)** rendering even with [software rendering](sw_rendering.md) (without a GPU).
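A minimal sketch of this per-column loop in C (`castRayForColumn` is a hypothetical stub standing in for the real ray cast, and `UNIT` is an assumed [fixed point](fixed_point.md) unit -- this just shows the structure, it is NOT raycastlib's actual API):

```c
#define SCREEN_WIDTH 64
#define SCREEN_HEIGHT 48
#define UNIT 1024 /* fixed point: the number 1.0 is represented as 1024 */

/* Hypothetical stand-in for the real ray cast: returns the perpendicular
   distance (in fixed point) of the first wall hit by the ray belonging to
   screen column x. Here we just pretend every wall is 2.0 units away. */
int castRayForColumn(int x)
{
  (void) x;
  return 2 * UNIT;
}

/* One ray per rendered column (SCREEN_WIDTH rays per frame), NOT one per
   pixel: the hit distance of each ray determines the height of the wall
   slice drawn in that column (perspective: height ~ 1 / distance). */
int columnWallHeight(int x)
{
  int dist = castRayForColumn(x);

  return dist > 0 ? (SCREEN_HEIGHT * UNIT) / dist : SCREEN_HEIGHT;
}

void renderFrame(void)
{
  for (int x = 0; x < SCREEN_WIDTH; ++x)
  {
    int height = columnWallHeight(x);

    /* here we would draw a vertical wall slice of the computed height,
       centered in column x, with ceiling/floor above and below it */
    (void) height;
  }
}
```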
### Implementation
The core element to implement is the code for casting rays, i.e. given the square plan of the environment (e.g. a game level), in which each square is either empty or a wall (possibly of different types, to allow e.g. different textures), we want to write a function that for any ray (defined by its start position and direction) returns information about the first wall it hits. This information most importantly includes the distance of the hit, but can also include additional things such as the type of the wall, the texturing coordinate or the wall's direction (so that we can [shade](shading.md) differently facing walls with different brightness for better realism). The environment is normally represented as a 2 dimensional [array](array.md), but instead of explicit data we can also use e.g. a function that [procedurally](procgen.md) generates infinite levels (i.e. a function that for given square coordinates computes what kind of square lies there). As for the algorithm for tracing the ray through the grid, we may actually use some kind of line [rasterization](rasterization.md) algorithm, e.g. the **[DDA](dda.md) algorithm** (tracing a line through a grid is analogous to drawing a line in a pixel grid). This can all be implemented with [fixed point](fixed_point.md), i.e. integers only! No need for [floating point](float.md).
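The ray casting itself might be sketched like this -- a floating point version of grid DDA for clarity (the same can be done in fixed point), with a small assumed test map; all names here are illustrative, this is NOT raycastlib's API:

```c
#include <math.h>

#define MAP_W 8
#define MAP_H 8

/* small example map: 1 = wall, 0 = empty space */
static const int map[MAP_H][MAP_W] =
{
  {1,1,1,1,1,1,1,1},
  {1,0,0,0,0,0,0,1},
  {1,0,0,0,0,0,0,1},
  {1,0,0,1,0,0,0,1},
  {1,0,0,0,0,0,0,1},
  {1,0,0,0,0,0,0,1},
  {1,0,0,0,0,0,0,1},
  {1,1,1,1,1,1,1,1}
};

/* Cast a ray from (posX,posY) in direction (dirX,dirY) with the DDA
   algorithm: step square by square through the grid, always advancing to
   the nearest next grid line, until a wall square is hit. Returns the
   PERPENDICULAR distance of the hit (see the note on distortion below),
   or -1 if nothing was hit within the step limit. */
float castRay(float posX, float posY, float dirX, float dirY)
{
  int mapX = (int) posX, mapY = (int) posY;

  /* distance along the ray between two successive vertical (resp.
     horizontal) grid lines: */
  float deltaX = dirX == 0 ? 1e30f : fabsf(1.0f / dirX);
  float deltaY = dirY == 0 ? 1e30f : fabsf(1.0f / dirY);

  int stepX = dirX < 0 ? -1 : 1, stepY = dirY < 0 ? -1 : 1;

  /* distance along the ray to the first vertical (resp. horizontal)
     grid line: */
  float sideX = (dirX < 0 ? posX - mapX : mapX + 1 - posX) * deltaX;
  float sideY = (dirY < 0 ? posY - mapY : mapY + 1 - posY) * deltaY;

  int hitVertical = 0;

  for (int i = 0; i < MAP_W + MAP_H; ++i) /* limit steps, just in case */
  {
    if (sideX < sideY) /* next vertical grid line is closer */
    {
      sideX += deltaX;
      mapX += stepX;
      hitVertical = 1;
    }
    else
    {
      sideY += deltaY;
      mapY += stepY;
      hitVertical = 0;
    }

    if (map[mapY][mapX] != 0) /* wall hit: perpendicular dist, no sqrt */
      return hitVertical ? sideX - deltaX : sideY - deltaY;
  }

  return -1;
}
```

E.g. starting at (1.5,1.5) and casting along the positive x axis hits the right border wall at x = 7, i.e. at distance 5.5.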
**Note on distance calculation and distortion**: When computing the distance of a ray hit from the camera, we usually DO NOT want to use the [Euclidean](euclidean.md) distance of that point from the camera position (as is tempting) -- that would create a so called fish eye effect, i.e. looking straight at a perpendicular wall would make the wall look warped/bowed (as the part of the wall in the middle of the screen is actually closer to the camera position, so by perspective it would look bigger). For non-distorted rendering we have to compute the perpendicular distance of the hit point from the camera plane -- we can see the camera plane as a "canvas" onto which we project the scene; in 2D it is a line (unlike in 3D, where it really is a plane) in front of the camera at a constant distance (usually conveniently chosen to be 1) from the camera position, whose direction is perpendicular to the direction the camera is facing. The good news is that with a little trick this distance can be computed even more efficiently than the Euclidean distance, as we don't need to compute a square root! Instead we can utilize the similarity of triangles. Consider the following situation:
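Independently of the concrete geometry, the perpendicular distance can also be seen as the projection of the camera-to-hit vector onto the camera's (unit length) direction vector, i.e. a single dot product, again with no square root (a sketch with illustrative names; the camera direction is assumed normalized):

```c
/* Perpendicular distance of the hit point from the camera plane: project
   the vector from the camera position to the hit point onto the unit
   camera direction vector (dot product). No sqrt needed. */
float perpendicularDist(float camX, float camY,
  float camDirX, float camDirY, /* must be a unit vector */
  float hitX, float hitY)
{
  return (hitX - camX) * camDirX + (hitY - camY) * camDirY;
}
```

E.g. a camera at the origin facing along the x axis sees a hit at (3,4) -- Euclidean distance 5 -- at perpendicular distance just 3, which is what we want to use for the wall height.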
