## Boolean Algebra ("True/False Logic")
In binary we start by working with single [bits](bit.md) -- each bit can hold two values, 1 and 0. At this point we may see bits like simple [numbers](number.md) and we'll want to start performing "operations" with them just like we are used to with ordinary numbers (what would numbers be good for if we couldn't add them, subtract them etc.?), but it will still hold that bits can only ever hold one of the two values, 0 or 1, so it's naturally going to be a bit different. Though we can interpret what the 0 and 1 values mean in any way -- e.g. in electronics as high vs low [voltage](voltage.md) -- in mathematics we traditionally go along with [logic](logic.md) and interpret them as *true* (1) and *false* (0). This interpretation is nice because math has already gathered a lot of knowledge about the laws of logic and this will transfer nicely to what we're doing now, so for example we'll be able to use various formulas that are already there and proven to work.
Next we want to define these "operations" with bits -- for this we use so called **[Boolean](bool.md) algebra**, which is originally a type of abstract algebra that works with [sets](set.md) and operations such as conjunction, disjunction etc. Boolean algebra can be viewed as a sort of simplified version of what we do in "normal" elementary school algebra -- just as we can add or multiply numbers, we can do similar things with individual bits, we just have somewhat different kinds of operations such as logical [AND](and.md) (similar to multiplication), logical [OR](or.md) (similar to addition) and so on. Generally Boolean algebra can operate with more than just two values (0 and 1), however that's more interesting to mathematicians; for us all we need now is a binary Boolean algebra -- that's what programmers have adopted. It is the case that in the context of computers and programming we implicitly assume Boolean algebra to be the one working with 1s and 0s, i.e. the binary version, so the word **Boolean** is essentially used synonymously with "binary". Many [programming languages](programming_language.md) have a [data type](data_type.md) called `boolean` or `bool` that represents just these two values (*true* and *false*).
The very basic operations, or, as we would now rather say, Boolean [functions](function.md), are:
- **NOT** (negation, `!`): Performed on a single bit, turns 1 into 0 and vice versa.
- **[AND](and.md)** (conjunction, `/\`): Performed on two bits, yields 1 only if both input bits are 1, otherwise yields 0. This is similar to multiplication (1 * 1 = 1, 1 * 0 = 0, 0 * 1 = 0, 0 * 0 = 0).
- **[OR](or.md)** (disjunction, `\/`): Performed on two bits, yields 1 if at least one of the input bits is 1, otherwise yields 0. This is similar to addition (1 + 1 = 1, 1 + 0 = 1, 0 + 1 = 1, 0 + 0 = 0).
There are also other functions such as [XOR](xor.md) (exclusive OR, is 1 exactly when the inputs differ) and negated versions of AND and OR (NAND and NOR, which give the opposite outputs of the respective non-negated functions). The functions are summed up in the following table (we call these kinds of tables **truth tables**):
| x | y | NOT x | x AND y | x OR y | x XOR y | x NAND y | x NOR y | x XNOR y |
| - | - | ----- | ------- | ------ | ------- | -------- | ------- | -------- |
| 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 |
| 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 |
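For illustration, here is a tiny [C](c.md) program that prints this same truth table -- on the values 0 and 1 the C operators `!`, `&`, `|` and `^` behave exactly as NOT, AND, OR and XOR:

```
#include <stdio.h>

int main(void)
{
  for (int x = 0; x <= 1; ++x)
    for (int y = 0; y <= 1; ++y)
      printf("x=%d y=%d: NOT(x)=%d AND=%d OR=%d XOR=%d NAND=%d NOR=%d XNOR=%d\n",
        x, y, !x, x & y, x | y, x ^ y, !(x & y), !(x | y), !(x ^ y));

  return 0;
}
```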
In fact there exist more functions with two inputs and one output (16 in total, computing this is left as an exercise :]). However not all have commonly established names -- we only use special names for the commonly used ones, mostly the ones in the table above.
An interesting thing is that we may only need one or two of these functions to be able to create all the other functions (this is called *functional completeness*); for example it is enough to only have the *AND* and *NOT* functions together to be able to construct all other functions. Functions *NAND* and *NOR* are each enough by themselves to make all the other functions! For example *NOT x = x NAND x*, *x AND y = NOT (x NAND y) = (x NAND y) NAND (x NAND y)*, *x OR y = (x NAND x) NAND (y NAND y)* etc.
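To get a feel for this, the following C sketch builds NOT, AND and OR purely out of a NAND function, using the formulas above (the names `bNAND`, `bNOT` etc. are just made up for this example):

```
#include <stdio.h>

int bNAND(int x, int y) { return !(x && y); }

// all the other functions constructed only from NAND:
int bNOT(int x)        { return bNAND(x,x); }
int bAND(int x, int y) { return bNAND(bNAND(x,y),bNAND(x,y)); }
int bOR(int x, int y)  { return bNAND(bNAND(x,x),bNAND(y,y)); }

int main(void)
{
  for (int x = 0; x <= 1; ++x)
    for (int y = 0; y <= 1; ++y)
      printf("x=%d y=%d: NOT(x)=%d, x AND y=%d, x OR y=%d\n",
        x, y, bNOT(x), bAND(x,y), bOR(x,y));

  return 0;
}
```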
Boolean algebra further tells us some basic laws we can use to simplify our expressions, for example:
- NOT (x OR y) = NOT(x) AND NOT(y)
- ...
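Such laws are easily verified by simply trying out all the possible input combinations, e.g. this quick C check of the above listed law (known as De Morgan's law) prints 1, meaning "equal", for each of the four cases:

```
#include <stdio.h>

int main(void)
{
  // check NOT (x OR y) = NOT(x) AND NOT(y) for all possible inputs
  for (int x = 0; x <= 1; ++x)
    for (int y = 0; y <= 1; ++y)
      printf("x=%d y=%d: %d\n", x, y, !(x || y) == (!x && !y));

  return 0;
}
```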
By combining all of these simple functions it is possible to go on and construct not only operations with whole numbers and the traditional algebra we know from school, but also a whole computer that renders 3D graphics and sends multimedia over the Internet. This is done by grouping multiple bits together to create a base-2 numeral system (described below), i.e. we'll go from working with single bits to working with GROUPS of bits -- single bits only allow us to represent two values, but a group of bits will allow us to store more. For example a group of 8 bits ([byte](byte.md)) lets us represent 256 distinct values, which we may interpret as whole numbers: 0 to 255. Now using the elementary functions shown above we can implement all the traditional operators for addition, subtraction, multiplication, division, ... and that's not all; we can go yet further and implement negative numbers, fractions, later on strings of text, and we can go on and on until we have a very powerful system for computation. For more detail see [logic gates](logic_gate.md) and [logic circuits](logic_circuit.md).
## Base-2 Numeral System
While we may use a single bit to represent two values, we can group more bits together and so gain the ability to represent more values; the more bits we group together, the more values we'll be able to represent as possible combinations of the values of individual bits. The number of bits, or "places" we have for writing a binary number, is simply called the number of bits or **bit width**. A bit width *N* allows for storing 2^*N* values -- e.g. with 2 bits we can store 2^2 = 4 values: 0, 1, 2 and 3, in binary 00, 01, 10 and 11. With 3 bits we can store 2^3 = 8 values: 0 to 7, in binary 000, 001, 010, 011, 100, 101, 110, 111. And so on.
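Since in C shifting 1 to the left by *N* places computes 2^*N*, we can quickly tabulate this rule with a small sketch like the following:

```
#include <stdio.h>

int main(void)
{
  // for each bit width print how many values it can represent (2^N)
  for (int n = 1; n <= 8; ++n)
    printf("%d bits: %u values (0 to %u)\n", n, 1u << n, (1u << n) - 1);

  return 0;
}
```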
At the basic level binary works just like the [decimal](decimal.md) (base 10) system we're used to. While the decimal system uses powers of 10, binary uses powers of 2. Here is a table showing a few numbers in decimal and binary (with 4 bits):
| decimal | binary |
| ------- | ------ |
| 0 | 0000 |
| 1 | 0001 |
| 2 | 0010 |
| 3 | 0011 |
| 4 | 0100 |
| 5 | 0101 |
| 6 | 0110 |
| 7 | 0111 |
| 8 | 1000 |
| ... | ... |
**Conversion to decimal**: let's see an example demonstrating things mentioned above. Let's have a number that's written as 10135 in decimal. The first digit from the right (5) says the number of 10^(0)s (1s) in the number, the second digit (3) says the number of 10^(1)s (10s), the third digit (1) says the number of 10^(2)s (100s) etc. Similarly if we now have a number **100101** in binary, the first digit from the right (1) says the number of 2^(0)s (1s), the second digit (0) says the number of 2^(1)s (2s), the third digit (1) says the number of 2^(2)s (4s) etc. Therefore this binary number can be converted to decimal by simply computing 1 * 2^0 + 0 * 2^1 + 1 * 2^2 + 0 * 2^3 + 0 * 2^4 + 1 * 2^5 = 1 + 4 + 32 = **37**.
```
100101 = 1 + 4 + 32 = 37
```
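In code this conversion may be sketched e.g. like this (the name `binaryToDecimal` is just illustrative; the function takes the number written as a string of 0s and 1s). Instead of computing each power of 2 separately it keeps multiplying the running result by 2, which ends up giving the same sum:

```
#include <stdio.h>

// convert a string of binary digits to its value, e.g. "100101" -> 37
unsigned int binaryToDecimal(const char *s)
{
  unsigned int result = 0;

  while (*s == '0' || *s == '1')
  {
    result = 2 * result + (*s - '0'); // shift digits left, add the new one
    s++;
  }

  return result;
}

int main(void)
{
  printf("%u\n", binaryToDecimal("100101")); // prints 37
  return 0;
}
```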
All of these arithmetic operations (addition, subtraction, multiplication, ...) can be implemented just using the basic boolean functions described in the section above -- see [logic circuits](logic_circuit.md) and [CPUs](cpu.md).
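As a little taste of how that's done, here is a C sketch of adding two 8 bit numbers using only the elementary bit functions AND, OR and XOR -- it imitates what a so called ripple carry adder circuit does (`add8` is just an illustrative name; the shifts merely pick out and put back individual bits):

```
#include <stdio.h>

// add two 8 bit numbers using only AND, OR and XOR on individual bits
unsigned char add8(unsigned char a, unsigned char b)
{
  unsigned char result = 0, carry = 0;

  for (int i = 0; i < 8; ++i)
  {
    unsigned char x = (a >> i) & 1, y = (b >> i) & 1; // i-th bits of a and b
    unsigned char sum = x ^ y ^ carry;                // the result bit
    carry = (x & y) | (x & carry) | (y & carry);      // carry to the next position
    result |= sum << i;
  }

  return result;
}

int main(void)
{
  printf("%d\n", add8(37, 100)); // prints 137
  return 0;
}
```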
In binary it is very simple and fast to divide and multiply by powers of 2 (1, 2, 4, 8, 16, ...), just as it is simple to divide and multiply by powers of 10 (1, 10, 100, 1000, ...) in decimal (we just shift the radix point, e.g. the binary number 1011 multiplied by 4 is 101100, we just added two zeros at the end). This is why as a programmer **you should prefer working with powers of two** (your programs can be faster if the computer can perform basic operations faster).
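In C (and most other languages) these fast multiplications and divisions are written with the shift operators `<<` and `>>`, for example:

```
#include <stdio.h>

int main(void)
{
  unsigned int x = 11;     // 1011 in binary

  printf("%u\n", x << 2);  // fast multiplication by 4: prints 44 (101100)
  printf("%u\n", x >> 1);  // fast (whole) division by 2: prints 5 (101)

  return 0;
}
```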
As anything can be represented with numbers, binary can be used to store any kind of information.

## See Also
- [nullary](nullary.md)
- [unary](unary.md)
- [ternary](ternary.md)
- [logic gate](logic_gate.md)
- [logic circuit](logic_circuit.md)
- [bit](bit.md)
- [hexadecimal](hexadecimal.md)