Logic Circuit

Logic circuits are circuits made of logic gates that implement Boolean functions, i.e. they are "graphical schematics for processing 1s and 0s". They are used to design computers on quite a low level. Logic circuits are a bit similar to electronic circuits but are a level of abstraction higher: they don't work with continuous voltages but rather with discrete binary logic values: 1s and 0s. This abstraction makes logic circuits kind of "portable" circuit descriptions independent of any specific transistor technology, or even of electronics itself (as logic circuits may in theory be realized even mechanically, with fluids or in other similarly wild ways). Logic circuits can be designed, simulated and synthesized to actual hardware descriptions with specialized software and languages such as VHDL.

  0       ___ 1     ___       1
 x ------|NOT|-----|AND|------- a
         |___|  .--|___|
  1            /
 y -------.---'
           \    ___ 1   ___   0
  0         '--|OR |---|NOT|--- b
 z ------------|___|   |___|

Example of a logic circuit with three inputs (x, y, z) and two outputs (a, b), with example input values (0, 1, 0) transformed to output values (1, 0).

Generally a logic circuit can be seen as a "black box" that has N input bits and M output bits. Then we divide logic circuits into two main categories:

  • combinational: The output values only depend on the input values, i.e. the circuit implements a pure mathematical function. Behavior of such a circuit can be described with a truth table, i.e. a table that for every combination of input values lists the corresponding output values. Examples of combinational circuits are the most basic logic circuits, the AND and OR gates.
  • sequential: Extension of the former: here the output values generally depend on the input values AND additionally on the internal state of the circuit, i.e. the circuit has a kind of memory (it can be seen as a finite state machine). The internal state is normally implemented with so called flip-flops (logic gates that take their own output as input). Normal truth tables can't describe these circuits (unless we include the internal state in them). These circuits also often work with clock synchronization, i.e. they have a special input called clock that periodically switches between 1 and 0 and drives the circuit's operation (this is where clock frequency and overclocking in CPUs come from).
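
As a concrete sketch, the behavior of the example circuit drawn above can be simulated as a pure (combinational) function and its whole truth table printed; Python is used here just for illustration and the function name is ours:

```python
# Simulate the example circuit from the diagram above as a pure
# (combinational) function: a = !x & y, b = !(y | z).
def circuit(x, y, z):
    a = (not x) and y      # NOT gate feeding an AND gate
    b = not (y or z)       # OR gate feeding a NOT gate
    return int(a), int(b)

# Being combinational, the circuit is fully described by its truth
# table: 3 input bits -> 8 rows.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            print(x, y, z, "->", circuit(x, y, z))

print(circuit(0, 1, 0))  # the example values from the diagram: (1, 0)
```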

Logic circuits can be drawn simply as "boxes" (which on the base level are the basic logic gates such as AND, OR etc.) connected with lines ("wires", but again not really electronic wires, as such a wire can only carry a 1 or a 0). But as mentioned, their behavior can also be described with a truth table (which however says nothing about the internals of the circuit) or a boolean expression, i.e. an algebraic expression that for each of the circuit's outputs defines how it is computed from the inputs, for example a = !x & y and b = !(y | z) for the above drawn example circuit. Each of these representations has its potential advantages -- for example the graphical representation is very human-friendly while the algebraic specification allows for optimization of the circuits using algebraic methods. Many hardware design languages therefore allow using and combining different methods of describing logic circuits (some even offer more options such as describing the circuit behavior in a programming language such as C).

With combinational logic circuits it is possible to implement any boolean function (i.e. "a function only with values 1 and 0"); undecidability doesn't apply here as we're not dealing with Turing machine computations: the input and output always have a finite, fixed number of bits, and the computation can't end up in an infinite loop as there are no repeating steps, just a straightforward propagation of input values to the output. It is always possible to implement any such function at least as a look up table (which can be created with a multiplexer). Sequential logic circuits on the other hand can be used to make traditional computers that work in steps and can therefore get stuck in a loop and so on.

Once we've designed a logic circuit, we can optimize it which usually means making it use fewer logic gates, i.e. make it cheaper to manufacture (but optimization can also aim for other things, e.g. shortening the maximum length from input to output, i.e. minimizing the circuit's delay).

Some common logic circuits include (note that many of these can be implemented both as a combinational or sequential circuit):

  • adder: Performs addition. It has many parameters such as the bit width, optional carry output etc.
  • multiplier: Performs multiplication.
  • multiplexer (mux): Has M address input bits plus another 2^M data input bits. The output of the circuit is the value of the Nth data bit, where N is the number encoded by the address bits. I.e. the circuit selects one of its inputs and sends it to the output. This can be used to implement e.g. memory, look up tables, bus arbiters and many more things.
  • demultiplexer (demux): Does the opposite of a multiplexer, i.e. it has M address inputs, 1 data input and 2^M outputs. Depending on the given address, the data input is redirected to the Nth output (while all other outputs are 0).
  • RS flip-flop: Possibly the simplest flip-flop (a sequential circuit) with two inputs, R (reset) and S (set), which can remember 1 bit of information (this bit can be set to 1 or 0 using the inputs). It can be implemented with two NOR gates.
  • decoder: Has M inputs and 2^M outputs. It sets Nth output to 1 (others are 0) where N is the binary number on the input. I.e. decoder converts a binary number into one specific signal. It can be implemented as a demultiplexer whose data input is always 1.
  • encoder: Does the opposite of a decoder, i.e. it has 2^M inputs and M outputs; it expects exactly one of the inputs to be 1 and the rest to be 0, and the output is the binary number representing which input is 1.
  • ALU (arithmetic logic unit): A more complex circuit capable of performing a number of logic and arithmetic operations. It is a part of a CPU.
  • ...
  • TODO: flip-flops, more
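
To make the multiplexer and decoder more concrete, here is a minimal sketch simulating both as pure functions (the names mux and decoder are ours, not any standard API); it also shows the mux-as-look-up-table trick mentioned above:

```python
# Hypothetical sketches of a 2^M-to-1 multiplexer and an M-to-2^M
# decoder, working on lists of 0/1 values.
def mux(address_bits, data_bits):
    # interpret the address bits (MSB first) as a number N,
    # output the Nth data bit
    n = int("".join(str(b) for b in address_bits), 2)
    return data_bits[n]

def decoder(address_bits):
    # set the Nth output to 1, all others to 0
    n = int("".join(str(b) for b in address_bits), 2)
    out = [0] * (2 ** len(address_bits))
    out[n] = 1
    return out

# A mux used as a look up table: hardwiring the data inputs to the
# truth table of AND makes the mux compute x AND y.
AND_TABLE = [0, 0, 0, 1]        # outputs for inputs 00, 01, 10, 11
print(mux([1, 1], AND_TABLE))   # 1
print(decoder([1, 0]))          # [0, 0, 1, 0]
```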

Minimization/Transformation Of Logic Circuits

Minimization (or optimization) is a crucial part of designing logic circuits -- it means finding a logically equivalent circuit (i.e. one that behaves the same with regard to its inputs/outputs, that is one whose truth table stays the same) that's smaller (composed of fewer gates); the motivation, of course, being saving resources (money, space, ...) and potentially even making the circuit faster. We may also perform other transformations depending on what we need; for example we may wish to minimize the delay (the longest path from input to output) or transform the circuit to only use NAND gates (because some hardware manufacturing technologies greatly prefer NAND gates). All in all when designing a logic circuit, we basically always perform these two main steps:

  1. Design the circuit to do what we want.
  2. Minimize (and/or otherwise transform) it so as to optimize it.

Some basic methods of minimization include:

  • algebraic methods: We use known formulas to simplify the logic expression representing our circuit. This is basically the same as simplifying fractions and similar mathematical expressions, just in the realm of boolean algebra. Some common formulas we use:
    • De Morgan's laws: !x & !y = !(x | y), !x | !y = !(x & y)
    • distributivity: x | (y & z) = (x | y) & (x | z), x & (y | z) = (x & y) | (x & z)
    • x | !x = 1, x & !x = 0, x | x = x, x & x = x
    • x | (!x & y) = x | y, x & (!x | y) = x & y
    • ...
  • Karnaugh maps: One of the most basic methods, simple algorithm using a table.
  • Quine McCluskey: A bit more advanced method.
  • ...
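
Since each variable can only take the values 0 and 1, the algebraic formulas above are easy to verify by brute force -- a sketch of such a "proof by truth table" (the helper names are ours):

```python
from itertools import product

# Brute-force check of the algebraic identities above over all
# possible 0/1 assignments of the variables.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

for x, y, z in product((0, 1), repeat=3):
    # De Morgan's laws
    assert AND(NOT(x), NOT(y)) == NOT(OR(x, y))
    assert OR(NOT(x), NOT(y)) == NOT(AND(x, y))
    # distributivity
    assert OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))
    assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))
    # absorption-like laws
    assert OR(x, AND(NOT(x), y)) == OR(x, y)
    assert AND(x, OR(NOT(x), y)) == AND(x, y)

print("all identities hold")
```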

An example of minimization will follow in the example section.

Example

One of the simplest logic circuits is the half adder, which takes two input bits, x and y, and outputs their sum s and carry c (the carry will become important when chaining more such adders together). Let us write the truth table of this circuit (note that adding in binary behaves basically the same as adding individual digits in decimal):

x y s c
0 0 0 0
1 0 1 0
0 1 1 0
1 1 0 1

Notice that this circuit is combinational -- its output (s and c) only depends on the input values x and y and nothing else, which is why we can write such a nice table.

OK, so now we have the circuit behavior specified by a truth table, let's continue by designing the actual circuit that implements this behavior. Let us start by writing a logic expression for each output (& = AND, | = OR, ! = NOT, ^ = XOR):

s = x ^ y

c = x & y
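
As a quick sanity check, the two expressions can be simulated and verified against the truth table above (half_adder is our own illustrative name; Python's ^ and & conveniently act as XOR and AND on 0/1 values):

```python
# Half adder from the expressions above: s = x XOR y, c = x AND y.
def half_adder(x, y):
    s = x ^ y
    c = x & y
    return s, c

# Verify against the truth table: the pair (c, s) read as a 2 bit
# binary number must equal the arithmetic sum x + y.
for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        assert x + y == 2 * c + s

print("half adder OK")
```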

We see the expressions are quite simple, let us now draw the actual circuit made of the basic logic gates:

             ___
 x --.------|XOR|--- s
      \  .--|___|
       \/
       /\    ___
      /  '--|AND|--- c
 y --'------|___|
 

And that's it -- this circuit is so simple we can't simplify it further, so it's our actual result (as an exercise you may try to imagine we don't have a XOR gate available and try to replace it by AND, OR and NOT gates).

Next we can expand our half adder into a full adder -- a full adder takes one more input z, the carry from a previous adder, which is important when chaining adders together. Let's see the truth table of a full adder:

x y z s c
0 0 0 0 0
1 0 0 1 0
0 1 0 1 0
1 1 0 0 1
0 0 1 1 0
1 0 1 0 1
0 1 1 0 1
1 1 1 1 1

Let's try to make boolean expressions for both outputs now. We may notice c is 1 exactly when at least two of the inputs are 1, which we may write as

c = (x & y) | (x & z) | (y & z)

However, using distributivity, (x & z) | (y & z) = (x | y) & z, and noticing that in the case x = y = 1 the whole expression is already 1 thanks to the term x & y (so x | y may safely be replaced by x ^ y), we can simplify (minimize) this to an expression that uses one fewer gate (notice there is one fewer operator)

c = (x & y) | ((x ^ y) & z)

The expression for s is not so clear though -- here we can use a method that always works: we simply look at all the lines in the truth table that result in s = 1 and write them in "ORed" form as

s = (x & !y & !z) | (!x & y & !z) | (!x & !y & z) | (x & y & z)

Which we can also actually minimize (as an exercise try to figure out the formulas we used :p)

s = ((x ^ y) & !z) | (!(x ^ y) & z)

s = (x ^ y) ^ z
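
Again we can sanity check the minimized expressions by simulation, verifying that the pair (c, s) always equals the 2 bit sum of the three input bits (full_adder is our own illustrative name):

```python
# Full adder using the minimized expressions derived above:
# s = (x ^ y) ^ z, c = (x & y) | ((x ^ y) & z).
def full_adder(x, y, z):
    s = (x ^ y) ^ z
    c = (x & y) | ((x ^ y) & z)
    return s, c

# Verify against arithmetic: (c, s) as a 2 bit number must equal
# the sum of the three input bits.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            s, c = full_adder(x, y, z)
            assert x + y + z == 2 * c + s

print("full adder OK")
```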

Now finally we can draw the full adder circuit

             ___
x ---.------|AND|--------------.    ___ 
      \ .---|___|           ___ '--|OR |--- c
       /              .----|AND|---|___|
y --.-' \            / .---|___|
     \   \    ___   / /
      \   '--|XOR|-'----.
       '-----|___|  /    \   ___
                   /      '-|XOR|---------- s
z ----------------'---------|___|

Now let us spectacularly combine one half adder (HA) and three full adders (FA) into one magnificent 4 bit adder. It will add two 4 bit numbers, a (composed of bits a0 to a3) and b (composed of bits b0 to b3). Also notice how the carry outputs of the lower adders are connected to the carry inputs of the higher full adders -- this is the same principle we use when adding numbers manually with pen and paper. The resulting sum s is composed of bits s0 to s3. Also keep in mind the circuit is still combinational, i.e. it has no memory, no clock input, and adds the numbers in a "single run".

         ___
a3 -----|FA |-- c3
b3 -----|   |------- s3
     .--|___|
     '--------.
         ___  | c2
a2 -----|FA |-'
b2 -----|   |------- s2
     .--|___|
     '--------.
         ___  | c1
a1 -----|FA |-'
b1 -----|   |------- s1
     .--|___|
     '--------.
         ___  | c0
a0 -----|HA |-'
b0 -----|___|------- s0
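
The chaining above can be sketched in code too: a hypothetical adder4 function wiring one half adder and three full adders together, verified against ordinary integer addition for all 256 input pairs (bit lists are LSB first, i.e. index 0 holds a0):

```python
# Gates from the previous examples.
def half_adder(x, y):
    return x ^ y, x & y                        # (sum, carry)

def full_adder(x, y, z):
    return (x ^ y) ^ z, (x & y) | ((x ^ y) & z)

# 4 bit ripple carry adder as drawn above: the carry of each stage
# feeds the carry input of the next higher stage.
def adder4(a, b):
    s0, c0 = half_adder(a[0], b[0])
    s1, c1 = full_adder(a[1], b[1], c0)
    s2, c2 = full_adder(a[2], b[2], c1)
    s3, c3 = full_adder(a[3], b[3], c2)
    return [s0, s1, s2, s3], c3                # 4 sum bits plus carry out

# Verify against ordinary integer addition for all 16 * 16 input pairs.
for m in range(16):
    for n in range(16):
        a = [(m >> i) & 1 for i in range(4)]
        b = [(n >> i) & 1 for i in range(4)]
        s, c = adder4(a, b)
        assert m + n == sum(bit << i for i, bit in enumerate(s)) + (c << 4)

print("4 bit adder OK")
```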

TODO: sequential one?