From 19f373bda7e73c0ae2bfe551ae250852281a00f7 Mon Sep 17 00:00:00 2001
From: Miloslav Ciz
Date: Thu, 31 Mar 2022 21:12:10 +0200
Subject: [PATCH] Add dyn prog

---
 dynamic_programming.md | 76 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)
 create mode 100644 dynamic_programming.md

diff --git a/dynamic_programming.md b/dynamic_programming.md
new file mode 100644
index 0000000..b43a494
--- /dev/null
+++ b/dynamic_programming.md
@@ -0,0 +1,76 @@
+# Dynamic Programming
+
+Dynamic programming is a programming technique that can be used to make many algorithms more efficient (faster). It works on the principle of breaking a given problem down into smaller subproblems, solving them one by one starting from the simplest, and remembering already computed results so that they can be reused later.
+
+It is usually contrasted with the *[divide and conquer](divide_and_conquer.md)* (DAC) technique, which at first sight looks similar but is in fact quite different. DAC also subdivides the main problem into subproblems, but then solves them [recursively](recursive.md), i.e. it is a top-down method. DAC furthermore doesn't remember already solved subproblems and may end up solving the same subproblem multiple times, wasting computational time. Dynamic programming on the other hand starts solving the subproblems from the simplest ones -- i.e. it is a **bottom-up** method -- and remembers solutions to already solved subproblems in some kind of a [table](lut.md), which makes it possible to quickly reuse the results if such a subproblem is encountered again. The order of solving the subproblems should be chosen so as to maximize the efficiency of the algorithm.
+
+It's not the case that dynamic programming is always better than DAC; it depends on the situation. Dynamic programming is effective **when the subproblems overlap** and so the same subproblems WILL be encountered multiple times. If this is not the case, DAC can be used instead and the memory for the look-up tables will be saved.
+
+## Example
+
+Let's first take a look at a case in which divide and conquer is preferable. This is for instance the case with many [sorting](sorting.md) algorithms such as [quicksort](quicksort.md). Quicksort recursively partitions the array into two parts and sorts each of them: sorting each of these parts is a different subproblem, as these parts (at least mostly) differ in size, elements and their order. The subproblems therefore don't overlap and applying dynamic programming makes little sense.
+
+But if we tackle a problem such as computing the *N*th [Fibonacci number](fibonacci_number.md), the situation changes. Considering the definition of the *N*th Fibonacci number as the *sum of the (N-1)th and (N-2)th Fibonacci numbers*, we might naively try to apply the divide and conquer method:
+
+```
+int fib(int n)
+{
+  return (n == 0 || n == 1) ?
+    n : // start the sequence with 0, 1
+    fib(n - 1) + fib(n - 2); // else sum the two previous numbers
+}
+```
+
+But we can see this is painfully slow: calling `fib(n - 2)` computes all the values already computed by the call to `fib(n - 1)` all over again, and this inefficiency additionally appears inside these calls recursively, so the run time grows exponentially with *n*. Applying dynamic programming we get better code:
+
+```
+int fib(int n)
+{
+  if (n < 2)
+    return n; // fib(0) = 0, fib(1) = 1
+
+  int current = 1, prev = 0;
+
+  for (int i = 2; i <= n; ++i) // iterate bottom-up, remembering the last two values
+  {
+    int tmp = current;
+    current += prev; // next number = sum of the previous two
+    prev = tmp;
+  }
+
+  return current;
+}
+```
+
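+As a quick sanity check: `fib(10)` now needs just 9 loop iterations, whereas the naive recursive version above results in 177 calls of `fib` in total to arrive at the same result (55).
+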
+We can see the code is longer, but it is faster. In this case we only need to remember the two previously computed Fibonacci numbers (in practice we may need much more memory for remembering the partial results).
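+
+The remembering of results in a table can also be bolted directly onto the recursive version -- this is called **memoization** and is often seen as the top-down flavor of dynamic programming. The following is just a minimal sketch of the idea (the `memo` table, its `MAX_N` size limit and the `fibMemo` name are made up for this example):
+
+```
+#include <stdio.h>
+
+#define MAX_N 64 // arbitrary limit on n, chosen just for this sketch
+
+long long memo[MAX_N]; // result table; 0 means "not computed yet"
+
+long long fibMemo(int n)
+{
+  if (n < 2)
+    return n; // fib(0) = 0, fib(1) = 1, no table needed here
+
+  if (memo[n] == 0) // not remembered yet? compute and store it
+    memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
+
+  return memo[n]; // reuse the remembered result
+}
+
+int main(void)
+{
+  printf("%lld\n", fibMemo(50)); // prints 12586269025
+  return 0;
+}
+```
+
+Here each Fibonacci number is still computed recursively, but at most once: any repeated request is answered straight from the `memo` table, trading a little memory for avoiding the exponential blowup of the naive version.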