Dynamic Programming

Recursion uses a top-down approach to solve a problem: it begins with the core (main) problem, breaks it into sub-problems, and solves those sub-problems in the same manner. In this approach, the same sub-problem can occur multiple times, consuming extra CPU cycles and increasing the time complexity. In Dynamic Programming (also called DP), by contrast, each sub-problem is solved only once and its result is stored in memory. When the same sub-problem occurs again, the result is fetched directly from memory instead of being recomputed. This improves the time complexity because redundant computations are eliminated.
The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, a concept that finds application in many real-life situations.



To see how Dynamic Programming can provide an optimized solution to a problem, consider a recursive algorithm for calculating the nth Fibonacci number.


For calculating the nth Fibonacci number recursively, we use the recurrence relation fib(n) = fib(n-1) + fib(n-2), with base cases fib(0) = 0 and fib(1) = 1. Here fib(n-1) and fib(n-2) are sub-problems of the original problem of calculating fib(n).
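A direct recursive implementation of this recurrence (a minimal sketch in Python; the function name is illustrative) takes exponential time, because the same sub-problems are recomputed over and over:

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Each call spawns two further calls, so the call tree grows
    # exponentially and the same sub-problems repeat many times.
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```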

A tree depicting the function calls for calculating fib(5) is shown below:



In the above figure, the nodes marked in RED are the sub-problems that occur multiple times while solving for fib(5), and they are recalculated each time they occur. Dynamic Programming gets rid of such redundant calculations by storing the result of each sub-problem in memory and retrieving it from there whenever the same sub-problem appears again.


When to use Dynamic Programming?

If the given problem can be broken into smaller sub-problems, and these sub-problems can in turn be divided into still smaller ones, and in this process you observe some overlapping (repeating) sub-problems, that is a big hint for DP. In addition, the optimal solutions to the sub-problems must contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).

Overlapping sub-problems and optimal substructure are the two major properties that suggest a problem can be solved using Dynamic Programming.

There are two approaches for solving a problem using dynamic programming:


1. Top-down approach: Here, we start by breaking the problem down from the top. The main problem is broken into sub-problems, which are solved and their results stored in memory. The sub-problems are in turn broken into smaller sub-problems, and the same process repeats. If a sub-problem is encountered whose result has already been computed, the result is fetched from memory. This approach is usually easy to think of and very intuitive. It is called Memoization.
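The top-down approach can be sketched for Fibonacci as follows (a minimal sketch; the dictionary `memo` and the function name are illustrative choices, not part of the original text):

```python
memo = {}  # cache: n -> fib(n)

def fib_memo(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Solve each sub-problem only once; later calls with the
    # same n fetch the stored result instead of recursing again.
    if n not in memo:
        memo[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return memo[n]

print(fib_memo(50))  # 12586269025
```

In Python, the standard library's `functools.lru_cache` decorator achieves the same effect without managing the cache by hand.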


2. Bottom-up approach: Analyze the problem, determine the order in which the sub-problems should be solved, and start from the trivial sub-problem, working up towards the given problem. This guarantees that every sub-problem is solved before it is needed. This approach is called Tabulation.
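The bottom-up approach for Fibonacci fills a table from the trivial cases upward (a minimal sketch; the table layout and function name are illustrative):

```python
def fib_tab(n):
    # Trivial sub-problems: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # table[i] will hold fib(i); fill it in increasing order of i,
    # so each entry's sub-problems are already solved when needed.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))  # 12586269025
```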

Some of the common problems in computer science to begin with dynamic programming are:


0-1 Knapsack Problem, Longest Common Subsequence, Longest Increasing Subsequence, Coin Change Problem, Rod-Cutting Problem, Matrix-Chain Multiplication. There are many more, but the problems listed above are the ones a beginner will find easiest to start with.
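As a taste of one of these, here is a minimal bottom-up sketch of the Coin Change problem (minimum number of coins needed to make a given amount), assuming an unlimited supply of each coin denomination; the function name and table layout are illustrative:

```python
def min_coins(coins, amount):
    INF = float('inf')
    # dp[a] = minimum number of coins needed to make amount a;
    # INF marks amounts that cannot be made with the given coins.
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # If coin c fits, try extending the solution for a - c.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```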


Note:


1. In bottom-up Dynamic Programming (tabulation), all sub-problems are solved, even those not needed for the final answer, whereas plain recursion solves only the required sub-problems. A DP solution should therefore be framed carefully to minimize this overhead.


2. When the sub-problems do not overlap, dynamic programming and plain recursion perform almost identically. For such problems, other approaches like "divide and conquer" are a better fit.



References for further study:


Tabulation and Memoization:  https://goo.gl/SYiTKV

Standard DP Problems:  https://goo.gl/8bCA97
