Update tuto

This commit is contained in:
Miloslav Ciz 2022-04-02 22:19:38 +02:00
parent d4d83bf6c1
commit decd2bd9d5
6 changed files with 296 additions and 27 deletions

# Fixed Point
Fixed point arithmetic is a simple and often [good enough](good_enough.md) way of dealing with fractional (non-integer) numbers, as opposed to [floating point](float.md) which is from our point of view considered a bad solution for most programs.
Probably in 99% of cases when you think you need floating point, fixed point is actually what you need.
Fixed point has at least these advantages over floating point:
- It is **easier to understand and more predictable**, less tricky, [KISS](kiss.md), [suckless](suckless.md). (Float's IEEE 754 standard is 58 pages long, the paper *What Every Computer Scientist Should Know About Floating-Point Arithmetic* has 48 pages.)
- **Doesn't require a special hardware coprocessor** and so doesn't introduce a [dependency](dependency.md). Programs using floating point will run extremely slowly on systems without float hardware support as they have to emulate the complex hardware in software, while fixed point will run just as fast as integer arithmetic.
## How It Works
Fixed point uses some fixed (hence the name) number of digits (bits in binary) for the fractional part (whereas floating point's fractional part varies in length).
So for example when working with 16 bit numbers, we may choose to use 12 bits for the integer part and the remaining 4 for the fractional part. This puts an imaginary radix point after the first (highest) 12 bits of the number in binary representation, like this:
```
abcdefghijkl mnop
------------.----
```
The whole number `abcdefghijkl` can take on values from 0 to 4095 (or -2048 to 2047 when using [two's complement](twos_complement.md) -- yes, fixed point works as signed type too). The fraction can take 16 (2^(4)) values, meaning we subdivide every whole number into 16 parts (we say our *scaling factor* is 1/16).
You can also imagine it like this: when you e.g. write a 3D game and measure your dimensions in meters, fixed point just means considering a denser grid; so you just say your variables will count centimeters rather than meters and everything will just work. (Practically, however, you wouldn't use centimeters but power of 2 subdivisions, e.g. 256ths of a meter.)