# Raycasting
In [computer graphics](graphics.md) raycasting refers to a rendering technique in which we determine which parts of the scene should be drawn according to which parts of the scene are hit by rays cast from the camera. We may also say it's a simpler version of the popular algorithm called **[raytracing](raytracing.md)**, or perhaps that raycasting is the initial idea of rendering via casting rays, which subsequently inspires many improvements and extensions like recursive rays (raytracing), [Monte Carlo](monte_carlo.md) sampling (pathtracing), using cones instead of lines (conetracing) etc. The whole idea is based on the observation that we can trace rays of light that enter the camera by going BACKWARDS, i.e. instead of tracing light from light sources we rather start from the camera and go towards the parts of the scene that reflected the light (by which we ensure we are only considering the RELEVANT paths of light that actually end up hitting the camera) -- that is we are asking the question "in order for this screen pixel to light up, where would the light be coming from?", and then computing the answer to the question. A simplified way to quickly imagine what's going on is therefore to think of drawing the scene via "scanning" it with some kind of laser beam originating from the camera -- of course we do this [mathematically](math.md), using [analytic geometry](analytic_geometry.md), i.e. finding intersections of the rays with geometric shapes by solving algebraic equations. Despite perhaps sounding intimidating at first, raycasting is one of the [simplest](minimalism.md) rendering methods, and for that it is also quite elegant -- [we](lrs.md) definitely do recommend it.
Raycasting is an **image order** rendering method, meaning that we iterate over the pixels of the screen and for each determine its [color](color.md) (as opposed to object order methods that iterate over 3D objects that are then "pasted" to the screen). I.e. the image can be drawn in any order -- let's say from top left to bottom right -- and without drawing any pixel more than once or leaving out any. This is advantageous as we may leave out [double buffering](double_buffering.md) and save A LOT of memory on the extra frame buffer. We may also utilize for example [frameless rendering](frameless.md). All these attributes are why we consider raycasting so nice.
Now it's important to mention that among graphics programmers the term raycasting has come to have two meanings:
- **[3D](3d.md) raycasting**: Casts a ray for each pixel of the screen. I.e. this works the same as traditional [raytracing](raytracing.md) but without [recursion](recursion.md). I.e. raycasting is simpler than raytracing and only casts so called primary rays (those originating from the camera), hence, unlike in raytracing, there are no shadows, reflections and refractions. So raytracing is the extension of raycasting. Similarly to raytracing this kind of raycasting is considered slow as it requires casting screen width times screen height rays (one for each pixel) every frame, but it must be said this is largely due to the fact that [GPUs](gpu.md) are built for a different kind of rendering, so raycasting many times ends up being implemented in [software](sw_rendering.md). As an image order method there is actually a good potential for raycasting to be [parallelized](parallelism.md) (the rays may in theory be cast simultaneously), so with special hardware raycasting could be quite fast. With nice tricks and [optimizations](optimization.md) (hierarchical accelerating structures, subsampling, mathematically simple shapes, ...) however even software raycasting may be used for real time rendering.
- **[2D](2d.md) raycasting** (also called 1D raycasting): Technique for rendering so called "[pseudo3D](pseudo3d.md)" (primitive 3D) graphics, probably best known from the old [game](game.md) [Wolf3D](wolf3d.md) (predecessor of [Doom](doom.md)), also used in [Anarch](anarch.md). The principle of casting the rays is still the same but we only limit ourselves to casting them within a single 2 dimensional plane and render the environment by columns (unlike the 3D variant that casts rays and renders by individual pixels). I.e. we are [cheating](cheating.md) a bit -- instead of casting a ray for each screen pixel we only cast a ray for each screen COLUMN and so we reduce the full 3D raycasting into a fewer dimensional one and use clever tricks to still keep it looking "3D". Of course this introduces some limitations (like not being able to rotate the camera up and down) but makes the method MUCH faster -- so fast it was used for real time "3D" rendering on the early 90s computers. Additionally this method may be greatly improved to make it yet more awesome -- for example it's possible to add some verticality to the environments by casting a few extra rays for each column.
**If it's so awesome, why isn't it used in the [mainstream](mainstream.md)?** As always because mainstream is [shit](shit.md) -- it rather settled on the more ugly but business friendly rasterization of polygonal models -- probably the main reason is that with rasterization companies can split people into graphics programmers, who just program a rasterizer, and graphic artists, who can make whatever 3D model out of polygons without caring about technical details. Raycasting is more of an [art](art.md) requiring individual tweaks to each program and considering the technical side when making the 3D models. [Capitalism](capitalism.md) does ugly things to produce huge quantities of products to be [consumed](consumerism.md), it never cares about efficiency, elegance or beauty of art.
## 2D Raycasting
{ We have an official [LRS](lrs.md) library for advanced 2D raycasting: [raycastlib](raycastlib.md)! And also a game built on top of it: [Anarch](anarch.md). ~drummyfish }
{ Also there is a very cool oldschool book that goes through programming a whole raycasting game engine in C, called *Tricks of The Game Programming Gurus*, check it out! ~drummyfish }
2D raycasting can be used to relatively easily render "3Dish" looking environments (commonly labeled "[pseudo 3D](pseudo3d.md)"), mostly some kind of right-angled labyrinth. There are limitations such as the inability for the camera to tilt up and down (which can nevertheless be faked with shearing). It used to be popular in very old games but can still be used nowadays for "retro" looking games, games for very weak hardware (e.g. [embedded](embedded.md)), in [demos](demoscene.md) etc. It is a pretty cool and very [suckless](suckless.md) rendering method.
```
....................................................................................................
....................................................................................................
###.................................................................................................
#########...........................................................................................
#########...........................................................................................
#########...........................................................................................
#########...........................................................................................
#########.................///######..................................../#...........................
#########..............//////############.....................//////////###.........................
#########..............//////############............///////////////////####............////////////
#########......///#####//////############.........//////////////////////####////////////////////////
###############///#####//////############.........//////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############//#####////////////////////////####////////////////////////
###############///#####//////############.........//////////////////////####////////////////////////
#########......///#####//////############.........//////////////////////####////////////////////////
#########..............//////############............///////////////////####............////////////
#########..............//////############.....................//////////###.........................
#########.................///######..................................../#...........................
#########...........................................................................................
#########...........................................................................................
#########...........................................................................................
#########...........................................................................................
###.................................................................................................
....................................................................................................
....................................................................................................
```
*raycasted view, rendered by the example below*
The method is called *2D* because even though the rendered picture looks like a 3D view, the rays we are casting are 2 dimensional and the representation of the world we are rendering is also usually 2 dimensional (typically a grid, a top-down plan of the environment with cells of either empty space or walls) and the casting of the rays is performed in this 2D space -- unlike with the 3D raycasting which really does cast 3D rays and uses "fully 3D" environment representations. Also unlike with the 3D version which casts one ray per each rendered pixel (x * y rays per frame), 2D raycasting only casts **one ray per rendered column** (x rays per frame) which actually, compared to the 3D version, drastically reduces the number of rays cast and makes this method **fast enough for [real time](real_time.md)** rendering even using [software rendering](sw_rendering.md) (without a GPU).
SIDENOTE: The distinction between 2D and 3D raycasting may get [fuzzy](fuzzy.md) and the transition may be gradual. It is possible to have a "real 3D" world (with some limitations) but draw it using 2D raycasting -- [Anarch](anarch.md) does something like that: it uses 2D raycasting for rendering but the player and projectiles have full X, Y and Z coordinates. Also consider for example performing 2D raycasting but having 3 layers of the 2D world, allowing for 3 different height levels; now we've added an extra Z dimension to 2D raycasting, though this dimension is small (the Z coordinate of a world cell can only be 0, 1 or 2); however we will now be casting 3 rays for each column, getting closer to the full 3D raycasting. This is just to show that, as with everything, we can usually do "something in between".
Back to pure 2D raycasting; the principle is the following: for each column we want to render we cast a ray from the camera and find out which wall in our 2D world it hits first and at what distance -- according to the distance we use [perspective](perspective.md) to calculate how tall the wall columns should look from the camera's point of view, and we render the column. Tracing the ray through the 2D grid representing the environment can be done relatively efficiently with algorithms normally used for line rasterization. There is another advantage for weak-hardware computers: we can easily use 2D raycasting **without a [framebuffer](framebuffer.md)** (without [double buffering](double_buffering.md)) because we can render each frame top-to-bottom left-to-right without overwriting any pixels (as we simply cast the rays from left to right and then draw each column top-to-bottom). And of course, it can be implemented using [fixed point](fixed_point.md) (integers only).
The classic version of 2D raycasting -- as seen in the early 90s games -- only renders walls with [textures](texture.md); floors and ceilings are untextured and have a solid color. The walls all have the same height, the floor and ceiling also have the same height in the whole environment. In the walls there can be sliding doors. 2D sprites ([billboards](billboard.md)) can be used with raycasting to add items or characters in the environment -- for correct rendering here we usually need a 1 dimensional [z-buffer](z_buffer.md) in which we write distances to walls to correctly draw sprites that are e.g. partially behind a corner. However we can **extend** raycasting to allow levels with different heights of walls, floor and ceiling, we can add floor and ceiling texturing and also use different level geometry than a square grid (however at this point it would be worth considering if e.g. [BSP](bsp.md) or [portal rendering](portal_rendering.md) wouldn't be better). An idea that might spawn from this is for example using [signed distance field](sdf.md) function to define an environment and then use 2D [raymarching](raymarching.md) to iteratively find intersections of the ray with the environment in the same way we are stepping through cells in the 2D raycasting described above.
### Implementation
The core element to implement is the code for casting rays, i.e. given the square plan of the environment (e.g. game level), in which each square is either empty or a wall (which can possibly be of different types, to allow e.g. different textures), we want to write a function that for any ray (defined by its start position and direction) returns the information about the first wall it hits. This information most importantly includes the distance of the hit, but can also include additional things such as the type of the wall, texturing coordinate or its direction (so that we can [shade](shading.md) differently facing walls with different brightness for better realism). The environment is normally represented as a 2 dimensional [array](array.md), but instead of explicit data we can also use e.g. a function that [procedurally](procgen.md) generates infinite levels (i.e. we have a function that for given square coordinates computes what kind of square it is). As for the algorithm for tracing the ray in the grid we may actually use some kind of line [rasterization](rasterization.md) algorithm, e.g. the **[DDA](dda.md) algorithm** (tracing a line through a grid is analogous to drawing a line in a pixel grid). This can all be implemented with [fixed point](fixed_point.md), i.e. integer only! No need for [floating point](float.md).
**Note on distance calculation and distortion**: When computing the distance of ray hit from the camera, we usually DO NOT want to use the [Euclidean](euclidean.md) distance of that point from the camera position (as is tempting) -- that would create a so called fish eye effect, i.e. looking straight into a perpendicular wall would make the wall look warped/bowed (as the part of the wall in the middle of the screen is actually closer to the camera position than the wall sides off the center, so by the laws of perspective it would look bigger). For non-distorted rendering we have to compute a distance that's perpendicular to the camera projection plane -- we can see the camera plane as a "canvas" onto which we project the scene (it's actually the flat computer screen that determines that we shall use such a flat projection plane), in 2D "top-down view" it is a line segment (unlike in 3D where it really is a plane, a rectangle) at a certain distance from the camera (usually conveniently chosen to be e.g. 1) whose direction is perpendicular to the direction the camera is facing. The good news is that with a little trick this distance can be computed even more efficiently than Euclidean distance, as we don't need to compute a square root! Instead we can utilize the similarity of triangles. Consider the following situation:
```
I-_
/ '-X
/ r.'/|
'-._ / ,' / |
p '-C_.' / |
1/,'|-./ |
/' | / |
V-._-A/-----B
'J
```
In the above *V* is the position of the camera (viewer) which is facing towards the point *I*, *p* is the camera plane perpendicular to *VI* at the distance 1 from *V*. Ray *r* is cast from the camera and hits the point *X*. The length of the line *r* is the Euclidean distance, however we want to find out the distance *JX = VI*, which is perpendicular to *p*. There are two similar triangles: *VCA* and *VIB*; from this it follows that *1 / VA = VI / VB*, from which we derive that *JX = VB / VA*. We can therefore calculate the perpendicular distance just from the ratio of the distances along one principal axis (X or Y). However watch out for the case when *VA = VB = 0* to not divide by zero! In such case use the other principal axis (Y).
NOTE: Take your time to think about the above and how the projection works, it's how it also works in cameras and our own eyes (though our eyes actually have a curved projection plane -- the eyeball's retina; our brain is just used to seeing with the fish eye effect). Seeing this will make you get to the fundamental understanding of how 3D projection works. Try to observe for example the following: if we were using an ideally curved monitor (one that would be part of a sphere in whose center we sit), we would rather want to use the Euclidean distance for rendering, i.e. a curved projection plane that would create the fish eye effect, because the effect would then be canceled out by the curvature of the monitor in real life (the walls at the sides would be rendered smaller but then they would be made bigger again to our eyes by the curved monitor that on the sides gets closer). Ultimately our goal is to create a program that TOGETHER with the rendering device (which we suppose to be a flat monitor) will shoot rays of light in a way that real life environment would. That's basically it. That's also why using curved monitors for 3D games that assume flat monitor is capitalist retardation.
Here is a complete **[C](c.md) example** that uses only [fixed point](fixed_point.md) with the exception of the stdlib [sin](sin.md)/[cos](cos.md) functions, for simplicity's sake (these can easily be replaced by custom fixed point implementation):
```
#include <stdio.h>
#include <math.h> // for simplicity we'll use float sin, cos from stdlib

#define U 1024        // fixed point unit
#define LEVEL_SIZE 16 // level resolution
#define SCREEN_W 100
#define SCREEN_H 31

int wallHeight[SCREEN_W];
int wallDir[SCREEN_W];

int perspective(int distance)
{
  if (distance <= 0)
    distance = 1;

  return (SCREEN_H * U) / distance;
}

unsigned char level[LEVEL_SIZE * LEVEL_SIZE] =
{
#define E 1, // wall
#define l 0, // floor
  l l l l E l l l l l l l l l E E
  l E l l E E E l l l l l E l l E
  l l l l l l l l l l l l l l l l
  l E l l E l E l E l E l E l l l
  l l l l E l l l l l l l l l E l
  l l l l E l l l l l l l l l E l
  l E E l E l l l l l l l l l l l
  l E E l E l l l l l l l l l l l
  l E l l l l l l l l l l l l l E
  l E l l E l l l l l l l l E l l
  l E l l E l l l l l l l l E l l
  l E l l l l E E E l l l l l l l
  l E E l E l l l l l E E E l l E
  l E E l E l l l l l E l l l E E
  l l l l l l E E E E E l l E E E
  l l E l l l l l l l l l E E E E
#undef E
#undef l
};

unsigned char getTile(int x, int y)
{
  if (x < 0 || y < 0 || x >= LEVEL_SIZE || y >= LEVEL_SIZE)
    return 1;

  return level[y * LEVEL_SIZE + x];
}

// returns perpend. distance to hit and wall direction (0 or 1) in dir
int castRay(int rayX, int rayY, int rayDx, int rayDy, int *dir)
{
  int tileX = rayX / U,
      tileY = rayY / U,
      addX = 1, addY = 1;

  // we'll convert all cases to tracing in +x, +y direction
  *dir = 0;

  if (rayDx == 0)
    rayDx = 1;
  else if (rayDx < 0)
  {
    rayDx *= -1;
    addX = -1;
    rayX = (tileX + 1) * U - rayX % U;
  }

  if (rayDy == 0)
    rayDy = 1;
  else if (rayDy < 0)
  {
    rayDy *= -1;
    addY = -1;
    rayY = (tileY + 1) * U - rayY % U;
  }

  int origX = rayX,
      origY = rayY;

  for (int i = 0; i < 20; ++i) // trace at most 20 squares
  {
    int px = rayX % U, // x pos. within current square
        py = rayY % U,
        tmp;

    if (py > ((rayDy * (px - U)) / rayDx) + U)
    {
      tileY += addY; // step up
      rayY = ((rayY / U) + 1) * U;
      tmp = rayX / U;
      rayX += (rayDx * (U - py)) / rayDy;

      if (rayX / U != tmp) // don't cross the border due to round. error
        rayX = (tmp + 1) * U - 1;

      *dir = 0;
    }
    else
    {
      tileX += addX; // step right
      rayX = ((rayX / U) + 1) * U;
      tmp = rayY / U;
      rayY += (rayDy * (U - px)) / rayDx;

      if (rayY / U != tmp)
        rayY = (tmp + 1) * U - 1;

      *dir = 1;
    }

    if (getTile(tileX,tileY)) // hit?
    {
      px = rayX - origX;
      py = rayY - origY;

      // get the perpend dist. to camera plane:
      return (px > py) ? ((px * U) / rayDx) : ((py * U) / rayDy);

      // the following would give the fish eye effect instead
      // return sqrt(px * px + py * py);
    }
  }

  return 100 * U; // no hit found
}

void drawScreen(void)
{
  for (int y = 0; y < SCREEN_H; ++y)
  {
    int lineY = y - SCREEN_H / 2;

    lineY = lineY >= 0 ? lineY : (-1 * lineY);

    for (int x = 0; x < SCREEN_W; ++x)
      putchar((lineY >= wallHeight[x]) ? '.' : (wallDir[x] ? '/' : '#'));

    putchar('\n');
  }
}

int main(void)
{
  int camX = 10 * U + U / 4,
      camY = 9 * U + U / 2,
      camAngle = 600, // U => full angle (2 * pi)
      quit = 0;

  while (!quit)
  {
    int forwX = cos((2 * 3.14 * camAngle) / U) * U, // convert angle to radians
        forwY = sin((2 * 3.14 * camAngle) / U) * U,
        vecFromX = forwX + forwY, // leftmost ray
        vecFromY = forwY - forwX,
        vecToX = forwX - forwY,   // rightmost ray
        vecToY = forwY + forwX;

    for (int i = 0; i < SCREEN_W; ++i) // process each screen column
    {
      // interpolate rays between vecFrom and vecTo
      int rayDx = (SCREEN_W - 1 - i) * vecFromX / SCREEN_W + (vecToX * i) / SCREEN_W,
          rayDy = (SCREEN_W - 1 - i) * vecFromY / SCREEN_W + (vecToY * i) / SCREEN_W,
          dir,
          dist = castRay(camX,camY,rayDx,rayDy,&dir);

      wallHeight[i] = perspective(dist);
      wallDir[i] = dir;
    }

    for (int i = 0; i < 10; ++i)
      putchar('\n');

    drawScreen();

    char c = getchar();

    switch (c) // movement
    {
      case 'a': camAngle += 30; break;
      case 'd': camAngle -= 30; break;
      case 'w': camX += forwX / 2; camY += forwY / 2; break;
      case 's': camX -= forwX / 2; camY -= forwY / 2; break;
      case 'q': quit = 1; break;
      default: break;
    }
  }

  return 0;
}
```
How to make this more advanced? Here are some hints and tips:
- **textured walls**: This is pretty simple: the ray hit basically gives us a horizontal texturing coordinate, and we simply stretch the texture vertically to fit the wall. I.e. when the ray hits a wall, we take the hit coordinate along the principal axis of the wall (e.g. for vertical hit we take the Y coordinate) and [mod](mod.md) it by the fixed point unit, which will give us the texturing coordinate. This coordinate tells us the column of the texture that the rendered column shall have; we read this texture column and render it stretched vertically to fit the column height given by the perspective. Note that for [cache](cache.md) friendliness ([optimization](optimization.md)) textures should be stored column-wise in memory as during rendering we'll be reading the texture by columns (row-wise stored textures would make us jump wide distances in memory, which CPU caches don't like).
- **textured floor/ceiling**: Something akin to [mode7](mode7.md) rendering can be used.
- **sliding door**: TODO
- **jumping**: Camera can easily be shifted up and down. If we are to place the camera e.g. one fixed point unit above its original position, then for each rendered column we compute the vertical screen-space offset of the wall by applying perspective to this one fixed point unit (the same way we determine the column size on the screen), and we render that wall column that many pixels lower.
- **looking up/down**: Correct view of a camera that's slightly tilted up/down can't be achieved (at least not in a reasonably simple way), but there's a simple trick for faking it -- camera shearing. Shearing literally just shifts the rendered view vertically, i.e. if we're to look a bit up, we render the same way as usual but start higher up on the screen (in the part of the rendered image that's normally above the screen and not visible), so that the vertical center of the screen will be shifted downwards. For smaller angles this looks [good enough](good_enough.md).
- **multilevel floor/ceiling**: This is a bit more difficult but it can be done e.g. like this: for each level square we store its floor and ceiling height. When casting a ray, we will consider any change in ceiling and/or floor height a hit, AND we'll need to return multiple of those hits (not just the first one). When we cast a ray and get a set of such hits, from each hit we'll know there are tiny walls on the floor and/or ceiling, and we'll know their distances. This can be used to correctly render everything.
- **different level geometry**: In theory the level doesn't have to be a square grid but may use some kind of other representation, or we may keep it a square grid but allow placement of additional shapes in it such as cylinders etc. Here you simply have to figure out how to trace the rays so as to find the first thing they hit.
- **adding [billboards](billboard.md) (sprites)**: TODO
- **reflections**: We can make our 2D raycaster a 2D [raytracer](raytracing.md), i.e. when we cast a camera ray and it hits a reflective wall (a mirror), we cast another, secondary reflected ray and trace it to see which wall it hits, i.e. which wall will get reflected in the reflective wall.
- **partly transparent walls**: We can make some walls partially transparent, whether via [alpha blending](alpha.md) or via textures with transparent pixels. In both cases we'll have to look not just for the first hit of the ray, but also for the next ones.
## 3D Raycasting
Here is a simple example of 3D raycasting in [C](c.md). To show it's possible, it's written using [fixed point](fixed_point.md) only. We only have two kinds of shapes in the scene: [spheres](sphere.md) and [planes](plane.md). For simplicity we also don't do many [optimizations](optimization.md). Here's the code:
```
#include <stdio.h>

#define RES_X 64       // picture width
#define RES_Y 30       // picture height
#define U 1024         // fixed point unit
#define INF 0x00ffffff // infinity value

char palette[] = "WM0KXkxocl;:,'. "; // ASCII shading palette

typedef struct { int e[3]; } V3;      // 3D vector
typedef struct { V3 p0; V3 p1; } Ray; // ray

#define V(vec,el) (vec).e[el] // short for accessing vector elements

unsigned int sqrtInt(unsigned int x) // simple square root
{
  unsigned int a = 1, b = x;

  while (b > a) { a <<= 1; b >>= 1; }

  a = (a + b) >> 1;

  while (a * a > x) a--;

  return a;
}

int squared(int x) { return x * x; }

V3 v3(int x, int y, int z) // creates a vector
{
  V3 v; V(v,0) = x; V(v,1) = y; V(v,2) = z; return v;
}

int v3Len(V3 v) // vector length
{
  return sqrtInt(V(v,0) * V(v,0) + V(v,1) * V(v,1) + V(v,2) * V(v,2));
}

V3 v3Minus(V3 a, V3 b) // vector subtraction
{
  return v3(V(a,0) - V(b,0),V(a,1) - V(b,1),V(a,2) - V(b,2));
}

int v3Dot(V3 a, V3 b) // dot product
{
  return (V(a,0) * V(b,0) + V(a,1) * V(b,1) + V(a,2) * V(b,2)) / U;
}

V3 v3Interpolate(V3 a, V3 b, int t)
{
  return v3(V(a,0) + (t * (V(b,0) - V(a,0))) / U,
    V(a,1) + (t * (V(b,1) - V(a,1))) / U,V(a,2) + (t * (V(b,2) - V(a,2))) / U);
}

V3 v3Normalize(V3 v)
{
  int l = v3Len(v);

  return l != 0 ? v3((V(v,0) * U) / l,(V(v,1) * U) / l,
    (V(v,2) * U) / l) : v3(U,0,0);
}

V3 rayVsPlane(Ray r, int coord, int n, V3 *normal)
{
  int t = V(r.p1,coord) - V(r.p0,coord);

  if (t == 0) // prevent division by zero
    return v3(INF,INF,INF);

  t = ((n - V(r.p0,coord)) * U) / t;

  if (t < 0)
    return v3(INF,INF,INF);

  if (normal)
  {
    V3 dir = v3Minus(r.p1,r.p0);
    *normal = v3(0,0,0);
    V(*normal,coord) = V(dir,coord);
    *normal = v3Normalize(*normal);
  }

  return v3Interpolate(r.p0,r.p1,t);
}

V3 rayVsSphere(Ray ray, V3 center, int radius, V3 *normal)
{
  V3 dir = v3Normalize(v3Minus(ray.p1,ray.p0)); // normalized direction
  int tmp = v3Dot(dir,v3Minus(ray.p0,center));
  int diff = // for solving quadratic equation
    squared(tmp) - (squared(v3Len(v3Minus(ray.p0,center))) - squared(radius));

  if (diff < 0) // no solution to quadratic equation
    return v3(INF,INF,INF);

  diff = sqrtInt(diff);
  tmp *= -1;
  tmp += (tmp + diff > tmp - diff) ? -1 * diff : diff;

  if (tmp < 0) // hit behind camera?
    return v3(INF,INF,INF);

  V3 result =
    v3((tmp * V(dir,0)) / U,(tmp * V(dir,1)) / U,(tmp * V(dir,2)) / U);

  if (normal)
    *normal = v3Normalize(v3Minus(result,center));

  return result;
}

int main(void)
{
  V3 viewport = v3(U,(3 * U) / 4,U);       // bottom right point of viewport
  V3 light = v3Normalize(v3(-U,U/2,-U/2)); // light direction
  Ray r; r.p0 = v3(0,0,0); r.p1 = viewport;

  for (int y = 0; y < RES_Y; ++y) // draw rows
  {
    for (int x = 0; x < RES_X; ++x) // draw columns
    {
      V(r.p1,0) = -1 * V(viewport,0) + (x * 2 * V(viewport,0)) / (RES_X - 1);
      V(r.p1,1) = -1 * V(viewport,1) + (y * 2 * V(viewport,1)) / (RES_Y - 1);

      int closestDist = INF;
      int object = 0;
      V3 hit = v3(INF,INF,INF); // intersection point
      V3 n;                     // hit normal
      char pixel = ' ';

      while (object >= 0) // cast ray against each object
      {
        switch (object) // here we define scene objects
        {
          case 0: hit = rayVsSphere(r,v3(2 * U/3,-U/3,3 * U/2),U/2,&n); break;
          case 1: hit = rayVsSphere(r,v3(-U,1,6 * U),3 * U,&n); break;
          case 2: hit = rayVsSphere(r,v3(-3 * U/2,U,3 * U),3 * U/2,&n); break;
          case 3: hit = rayVsPlane(r,1,U,&n); break;
          case 4: hit = rayVsPlane(r,0,-2 * U,&n); break;
          case 5: hit = rayVsPlane(r,2,6 * U,&n); break;
          default: object = -123; break; // this stops further checking
        }

        int dist = V(hit,2); // Z component as perpendicular dist.

        if (dist < closestDist)
        {
          pixel = V(hit,0) == INF ? 15 :
            8 + (8 * v3Dot(n,light)) / U // shade by angle of light
            + object; // this gives each object a bit different color
          pixel = palette[pixel < 0 ? 0 : (pixel > 15 ? 15 : pixel)];
          closestDist = dist;
        }

        object++;
      }

      putchar(pixel);
    }

    putchar('\n');
  }

  return 0;
}
```
And voila, this is what we get:
```
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;ccccccco;;;;;;;;;;;
;lllllllccco;;;;;;;;;;;;:::;;;;llccco;;;;;;;;
::;;;;lllllllcccoox;;;;:,,,,,:::::;;llcccx;;;;;;
::::;;;;;lllllllcccoxx;,''''',,,,:::;;;llcco;;;;;
,,::::;;;;;lllllllcccoo''..''''',,,,:::;;llccx;;;;
,,,:::::;;;;lllllllccc'.......'''',,,:::;;lcco;;;;
,,,,,:::::;;;;lllllllcc. ......''',,,::;;llcc;;;;
',,,,,:::::;;;;lllllll. ....'''',,:::;llcc;;;;
'',,,,,::::;;;;;llllll. ....''',,,::;;lc;;;;;
;;;llllcccooxxx:::;;;;;llllll ....'',,,::;llc;;;;;
:;;;;;;;lllccccoooxxkk:;;;;;lllll ....'',,,::;lc;;;;;;
:::;;;;;;;llllccccoooxxkkX;;;;llllll ...''',,::;lc;;;;;;;
,:::;;;;;;;;llllccccoooxxkkXX;;;lllllll ....'',,::;l;;;;;;;;;
,::::;;;;;;;;llllcccooooxxkkXX;;;llllllcc....''',,:;;;;;;;;;;;;;
,::::;;;;;;;;;llllcccoooxxxkkXK;;llllllccoo;;;;;;;;;;;;;;;;;;;;;
,::::;;;;;;;;;llllccccoooxxxkkX;;;llllllcox.....................
,:::::;;;;;;;;;llllccccoooxxxkkX;;llllllc.......................
,,:::::;;;;;;;;;llllccccoooxxkkX;;;.............................
,,:::::;;;;;;;;;;llllccccoooxxkk................................
,,,:::::;;;;;;;;;lllllcccoooxxx.................................
,,,,:::::;;;;;;;;;llllccccooo...................................
,,,,:::::;;;;;;;;;;llllcccc.....................................
,,,,,:::::;;;;;;;;;;lll.........................................
',,,,,:::::;;;;;;...............................................
'',,,,,::.......................................................
................................................................
................................................................
```
## See Also
- [raytracing](raytracing.md)
- [pathtracing](pathtracing.md)
- [conetracing](conetracing.md)
- [raymarching](raymarching.md)
- [3D rendering](3d_rendering.md)