# Dynamic Programming

Dynamic programming is a [programming](programming.md) technique that allows us to increase the efficiency of certain types of [algorithms](algorithm.md) (efficiency usually meaning faster execution). It can be seen as an [optimization](optimization.md) technique that works on the principle of repeatedly breaking a given problem down into smaller subproblems, then solving them one by one from the simplest, remembering already calculated results so that they can be reused later.

It is frequently contrasted with the *[divide and conquer](divide_and_conquer.md)* (DAC) method, which at first sight looks similar but is in fact quite different. DAC also subdivides the main problem into subproblems, but then solves them [recursively](recursion.md) and separately, i.e. it is a top-down method. DAC also doesn't remember already solved subproblems and may end up solving the same subproblem multiple times, wasting computation. Dynamic programming on the other hand starts solving the subproblems from the simplest ones -- i.e. it is a **bottom-up** method -- and remembers solutions to already solved subproblems in some kind of a [table](lut.md), which enables quickly reusing the results should the same subproblem be encountered again. The order of solving the subproblems should be chosen so as to maximize the efficiency of this approach.
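
For example, binomial coefficients obey Pascal's rule *C(n,k) = C(n-1,k-1) + C(n-1,k)*, so each row of Pascal's triangle can be derived from the row above it. The following is a minimal sketch of this bottom-up, table-driven style (the names and the size limit are just illustrative):

```
#include <stdio.h>

#define MAX_N 32 // highest supported n, chosen arbitrarily for this sketch

unsigned long binomial(int n, int k)
{
  static unsigned long table[MAX_N + 1][MAX_N + 1]; // remembered results

  for (int i = 0; i <= n; ++i)   // solve subproblems from the simplest row up
    for (int j = 0; j <= i; ++j)
      table[i][j] = (j == 0 || j == i) ?
        1 :                                    // edges of the triangle
        table[i - 1][j - 1] + table[i - 1][j]; // Pascal's rule, reusing results

  return table[n][k];
}

int main(void)
{
  printf("%lu\n", binomial(10, 4)); // prints 210
  return 0;
}
```

Each table entry is computed exactly once; a naive recursive version would recompute the same coefficients over and over.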

It is NOT the case that dynamic programming always beats DAC; it all depends on the situation. Dynamic programming is effective **when the subproblems overlap** and thus the same subproblems WILL be encountered multiple times -- this is the fact that dynamic programming exploits. Should this not be the case -- i.e. if we are solving a problem that doesn't exhibit this property -- DAC should be used instead.

## Example

For starters let's view a case in which divide and conquer is preferable: this is true for instance of many [sorting](sorting.md) algorithms, including [quicksort](quicksort.md). Quicksort [recursively](recursion.md) partitions the array into two parts around a pivot and sorts each one separately: sorting each part is a different subproblem, given the parts (at least generally) differ in size, elements and their order. The subproblems therefore don't overlap, so applying dynamic programming makes little sense.
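
A minimal quicksort sketch (taking the last element as pivot, one of several common choices) shows this: the two recursive calls always work on disjoint parts of the array, so there simply are no shared results worth remembering.

```
void quicksort(int *a, int lo, int hi)
{
  if (lo >= hi) // 0 or 1 elements: already sorted
    return;

  int pivot = a[hi], i = lo;

  for (int j = lo; j < hi; ++j) // partition: smaller elements to the left
    if (a[j] < pivot)
    {
      int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
      ++i;
    }

  int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // put the pivot in its place

  quicksort(a, lo, i - 1); // sort the left part: one subproblem
  quicksort(a, i + 1, hi); // sort the right part: a completely different one
}
```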

But if we tackle a problem such as computing the *N*th [Fibonacci number](fibonacci_number.md), the situation changes. Considering the definition of the *N*th Fibonacci number as the *"sum of the (N-1)th and (N-2)th Fibonacci numbers"*, we might naively try to apply the divide and conquer method:

```
int fib(int n)
{
  return (n < 2) ?
    n :                      // start the sequence with 0, 1
    fib(n - 1) + fib(n - 2); // else add two previous
}
```

However we soon observe that this is painfully slow: calling `fib(n - 2)` computes all over again the values already computed by the call to `fib(n - 1)`, and the same inefficiency repeats recursively inside these calls, resulting in an exponential number of function calls in total. Applying dynamic programming we get better code:

```
int fib(int n)
{
  if (n < 2)
    return n; // start the sequence with 0, 1

  int current = 1, prev = 0; // the only remembered results we need

  for (int i = 2; i <= n; ++i) // bottom-up: from the simplest case towards n
  {
    int tmp = current;
    current += prev; // the next number is the sum of the previous two
    prev = tmp;
  }

  return current;
}
```

Now the code is longer, but much faster: it makes a single pass from the simplest case up to *N* (i.e. runs in linear time) instead of making an exponential number of recursive calls. In this specific case we only need to remember the two previously computed Fibonacci numbers (in practice we may need much more memory for remembering the partial results).
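
For comparison, the same speedup can also be achieved while keeping the top-down recursive shape of the naive version, simply by caching every computed value in a table -- this closely related technique is known as memoization. A minimal sketch (the name `fibMemo` and the size limit are just illustrative):

```
#define MAX_N 46 // fib(47) already overflows a 32 bit int

int fibMemo(int n)
{
  static int table[MAX_N + 1]; // zero means "not computed yet"

  if (n < 2)
    return n; // start the sequence with 0, 1

  if (table[n] == 0) // not remembered yet: compute and store the result
    table[n] = fibMemo(n - 1) + fibMemo(n - 2);

  return table[n];
}
```

Unlike the loop above, this keeps a whole array of partial results in memory, but like the loop it computes each value at most once.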