Raycasting

In computer graphics raycasting refers to a rendering technique in which we determine what should be drawn on the screen by finding which parts of the scene are hit by rays cast from the camera. This is based on the idea that we can trace the rays of light that enter the camera backwards, i.e. starting from the camera and going towards the parts of the scene that reflected the light. The term raycasting has two main meanings:

  • 3D raycasting: Algorithm that works the same as raytracing but without recursion, i.e. raycasting is simpler than raytracing and only casts primary rays (those originating from the camera), hence, unlike in raytracing, there are no shadows, reflections or refractions. Raytracing is an extension of raycasting.
  • 2D raycasting: Technique for rendering so called "pseudo3D" (primitive 3D) graphics, probably best known from the old game Wolf3D (predecessor of Doom). The principle of casting the rays is the same but we limit ourselves to casting the rays within a single 2 dimensional plane and render the environment by columns (unlike the 3D variant that casts rays and renders by individual pixels).

2D Raycasting

We have an official LRS library for advanced 2D raycasting: raycastlib! And also a game built on top of it: Anarch.

2D raycasting can be used to relatively easily render "3Dish" looking environments (this is commonly labeled "pseudo3D"), mostly some kind of right-angled labyrinth. There are limitations such as the inability for the camera to tilt up and down (which can nevertheless be faked with shearing). It used to be popular in very old games but can still be used nowadays for "retro" looking games, games for very weak hardware (e.g. embedded), in demos etc. It is a pretty cool, very suckless rendering method.

TODO: image

The method is called 2D because even though the rendered picture looks like a 3D view, the representation of the world we are rendering is 2 dimensional (usually a grid, a top-down plan of the environment with cells of either empty space or walls) and the casting of the rays is performed in this 2D space -- unlike with 3D raycasting which really does cast rays in fully 3D environments. Also unlike the 3D version, which casts one ray per rendered pixel (x * y rays per frame), 2D raycasting only casts one ray per rendered column (x rays per frame), which drastically reduces the number of rays cast and makes this method fast enough for real time rendering even with software_rendering (without a GPU).
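
For example at the resolution of 320 x 200 (typical for the games of the era) the 3D variant would have to cast 320 * 200 = 64000 rays to render a single frame whereas the 2D variant only casts 320.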

The principle is the following: for each screen column we want to render we cast a ray from the camera and find out which wall in our 2D world it hits first and at what distance -- according to the distance we use perspective to calculate how tall the wall column should look from the camera's point of view, and we render the column. Tracing the ray through the 2D grid representing the environment can be done relatively efficiently with algorithms normally used for line rasterization. There is another advantage for weak-hardware computers: we can easily use 2D raycasting without a framebuffer (without double_buffering) because we can render each frame column by column, left to right, drawing each column top to bottom, without ever overwriting a pixel (we simply cast the rays from left to right and draw each resulting column top to bottom). And of course, it can be implemented using fixed point (integers only).
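
To illustrate the principle, here is a tiny self-contained sketch in C that renders one frame of such a view as ASCII art in the terminal. It is NOT taken from raycastlib, all the names are made up, and it uses floating point plus naive constant-step ray marching purely for brevity -- a real implementation would rather use the DDA traversal and fixed point mentioned below.

  #include <stdio.h>
  #include <math.h>

  #define MAP_W 8
  #define MAP_H 8
  #define SCREEN_W 60
  #define SCREEN_H 22

  const char map[MAP_H][MAP_W + 1] = {  /* top-down plan of the environment */
    "########",
    "#......#",
    "#..##..#",
    "#......#",
    "#......#",
    "#.##...#",
    "#......#",
    "########" };

  int main(void)
  {
    char screen[SCREEN_H][SCREEN_W + 1];

    double
      camX = 4.5, camY = 6.5, /* camera position within the grid */
      camA = -1.7,            /* camera angle (radians)          */
      fov = 1.0;              /* field of view (radians)         */

    for (int x = 0; x < SCREEN_W; ++x)  /* one ray per screen column */
    {
      double rayA = camA + fov * ((double) x / SCREEN_W - 0.5);
      double dist = 0;

      while (dist < 20)                 /* naive ray marching */
      {
        dist += 0.01;

        if (map[(int) (camY + sin(rayA) * dist)]
               [(int) (camX + cos(rayA) * dist)] == '#')
          break;                        /* wall hit */
      }

      dist *= cos(rayA - camA);  /* remove the fish eye effect (see below) */

      int height = (int) (SCREEN_H / dist);  /* perspective: farther = smaller */

      for (int y = 0; y < SCREEN_H; ++y)     /* draw the column top to bottom */
        screen[y][x] =
          (y >= (SCREEN_H - height) / 2 && y < (SCREEN_H + height) / 2) ?
            '@' : (y < SCREEN_H / 2 ? ' ' : '.');
    }

    for (int y = 0; y < SCREEN_H; ++y)
    {
      screen[y][SCREEN_W] = 0;
      puts(screen[y]);
    }

    return 0;
  }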

The classic version of 2D raycasting -- as seen in the early 90s games -- only renders walls with textures; floors and ceilings are untextured and have a solid color. The walls all have the same height and the floor and ceiling each keep a single height throughout the whole environment. In the walls there can be sliding doors. 2D sprites (billboards) can be used with raycasting to add items or characters to the environment -- for correct rendering here we usually need a 1 dimensional z-buffer in which we record the distance of the wall drawn in each column, so that we can correctly draw sprites that are e.g. partially behind a corner. However we can extend raycasting to allow levels with different heights of walls, floor and ceiling, we can add floor and ceiling texturing and, in theory, probably also use different level geometry than a square grid (at that point it would however be worth considering whether e.g. BSP rendering wouldn't be better).
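
A rough sketch of the mentioned 1 dimensional z-buffer (again just illustrative, the names are made up): while drawing the walls we record for every screen column the distance of the wall drawn there, then when drawing a sprite we simply skip the columns in which the recorded wall is closer than the sprite.

  #define SCREEN_W 320

  float zBuffer[SCREEN_W]; /* per-column wall distance, filled during wall rendering */

  void drawSpriteColumn(int x, float distance); /* hypothetical, defined elsewhere */

  /* Draws the part of a sprite spanning the given screen columns, skipping
     the columns in which a wall is closer than the sprite. */
  void drawSprite(int firstColumn, int lastColumn, float spriteDistance)
  {
    for (int x = firstColumn; x <= lastColumn; ++x)
      if (x >= 0 && x < SCREEN_W && spriteDistance < zBuffer[x])
        drawSpriteColumn(x, spriteDistance);
  }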

Implementation

The core element to implement is the code for casting rays, i.e. given the square plan of the environment (e.g. game level), in which each square is either empty or a wall (which can possibly be of different types, to allow e.g. different textures), we want to write a function that for any ray (defined by its start position and direction) returns information about the first wall it hits. This information most importantly includes the distance of the hit, but can also include additional things such as the type of the wall or its direction (so that we can shade differently facing walls with different brightness for better realism). The environment is normally represented as a 2 dimensional array, but we can also use e.g. a function that procedurally generates infinite levels. For tracing the ray through the grid we may actually use some kind of line rasterization algorithm, e.g. the DDA algorithm (tracing a line through a grid is analogous to drawing a line in a pixel grid). This can all be implemented with fixed point, i.e. integer only! No need for floating point.
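
Below is a sketch of such a function in C using the DDA style traversal; floating point is used only for readability, the interface and all names are made up (this is NOT raycastlib's API) and the distance itself is left to be computed from the returned hit point (see the note on distance below).

  #include <math.h>

  #define MAP_W 16
  #define MAP_H 16

  typedef struct
  {
    float hitX, hitY;   /* coordinates of the point where the wall was hit  */
    unsigned char type; /* type of the wall that was hit (0: nothing hit)   */
    int side;           /* 0: hit a vertical grid line, 1: a horizontal one */
  } HitResult;

  /* Casts a ray from (posX,posY) in direction (dirX,dirY) through the given
     map (0 = empty cell, anything else = wall type) by jumping from one grid
     line crossing to the next (DDA) and returns info about the first hit. */
  HitResult castRay(const unsigned char map[MAP_H][MAP_W],
    float posX, float posY, float dirX, float dirY)
  {
    int cellX = (int) posX, cellY = (int) posY,
      stepX = dirX < 0 ? -1 : 1,
      stepY = dirY < 0 ? -1 : 1;

    /* ray distance needed to move by one whole cell in X resp. Y: */
    float deltaX = dirX == 0 ? 1e30f : fabsf(1.0f / dirX),
          deltaY = dirY == 0 ? 1e30f : fabsf(1.0f / dirY);

    /* ray distance to the first vertical resp. horizontal grid line: */
    float sideX = (dirX < 0 ? posX - cellX : cellX + 1 - posX) * deltaX,
          sideY = (dirY < 0 ? posY - cellY : cellY + 1 - posY) * deltaY;

    HitResult result;

    while (1)
    {
      if (sideX < sideY) /* the next grid line crossed is a vertical one */
      {
        sideX += deltaX;
        cellX += stepX;
        result.side = 0;
      }
      else               /* the next grid line crossed is a horizontal one */
      {
        sideY += deltaY;
        cellY += stepY;
        result.side = 1;
      }

      if (cellX < 0 || cellY < 0 || cellX >= MAP_W || cellY >= MAP_H ||
        map[cellY][cellX] != 0)
        break; /* hit a wall or left the map */
    }

    result.type = (cellX >= 0 && cellY >= 0 && cellX < MAP_W && cellY < MAP_H) ?
      map[cellY][cellX] : 0;

    /* how far along (dirX,dirY) the hit lies, in multiples of its length: */
    float t = result.side == 0 ? sideX - deltaX : sideY - deltaY;

    result.hitX = posX + t * dirX;
    result.hitY = posY + t * dirY;

    return result;
  }

The rendering loop then calls such a function once per screen column with the appropriate ray direction and computes the wall distance from the returned hit point.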

Note on distance calculation and distortion: When computing the distance of the ray hit from the camera, we usually DO NOT want to use the Euclidean distance of that point from the camera position (as is tempting) -- that would create a so called fish eye effect, i.e. looking straight at a perpendicular wall would make the wall look warped/bowed (as the part of the wall in the middle of the screen is actually closer to the camera position and so would, by perspective, look bigger). For non-distorted rendering we have to compute the perpendicular distance of the hit point from the camera plane -- we can see the camera plane as a "canvas" onto which we project the scene; in 2D it is a line (unlike in 3D where it really is a plane) in front of the camera, at a constant distance (usually conveniently chosen to be 1) from the camera position, whose direction is perpendicular to the direction the camera is facing. The good news is that with a little trick this distance can be computed even more efficiently than the Euclidean distance, as we don't need to compute a square root! Instead we can utilize the similarity of triangles. Consider the following situation:

                d   
               /    .x
              /  r.'/|
      '-._   /  ,' / |
      P   '-c_.'  /  |
           /,'|-.e   |
          /'  |      |
         V----a------b
              
              

In the above V is the position of the camera (viewer), which is facing towards the point d; P is the camera plane, perpendicular to Vd at distance 1 from V. Ray r is cast from the camera and hits the point x. The length of r is the Euclidean distance, however what we want is the distance ex = cd, which is measured perpendicular to P. There are two similar triangles: Vca and Vdb; from this it follows that 1 / Va = (1 + cd) / Vb, from which we derive that cd = Vb / Va - 1. This can be used to calculate the perpendicular distance just from the ratio of the distances along the principal axes (i.e. along the X or Y axis). However watch out for division by zero when Va = Vb = 0! In such a case use the other principal axis (Y).

{ I hope the above is correct lol. ~drummyfish}
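
In code the perpendicular distance can be conveniently obtained by projecting the vector from the camera position to the hit point onto the camera's forward direction (a unit vector) with a dot product -- this likewise needs no square root and gives the distance measured from V along the view direction, i.e. 1 + cd in the picture above. A small sketch (the names are made up for illustration):

  #include <math.h>

  /* Euclidean distance of the hit: tempting, but needs a square root and
     creates the fish eye effect. */
  float distanceEuclidean(float camX, float camY, float hitX, float hitY)
  {
    float dx = hitX - camX, dy = hitY - camY;
    return sqrtf(dx * dx + dy * dy);
  }

  /* Perpendicular (fish eye free) distance: project the hit point onto the
     camera's forward direction (camDirX,camDirY), which must be a unit
     vector; no square root needed. */
  float distancePerpendicular(float camX, float camY,
    float camDirX, float camDirY, float hitX, float hitY)
  {
    return (hitX - camX) * camDirX + (hitY - camY) * camDirY;
  }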

TODO: code