# Dynamic Programming - Lifeline of Technical Interviews

Dynamic programming problems are widely asked in technical interviews, especially when hiring freshers. These problems generally don't require prior knowledge of any specific algorithm, and the solutions are usually easy to code but hard to arrive at.

In this post we will look at what "Dynamic Programming" is, and then we will discuss some dynamic programming problems asked in the technical interviews of tech giants like Google, Facebook, Amazon, and Microsoft.

**Dynamic Programming** is the technique of breaking a problem down into smaller sub-problems and storing the results of those sub-problems to ensure that each sub-problem is solved only once.

**What are sub-problems?** Sub-problems are smaller versions of the original problem. If formulated correctly, sub-problems build on each other to obtain the solution to the original problem.

Let's understand this with the help of the Fibonacci sequence. Say we are required to find the 5th Fibonacci number. We know:

**Step 1.**

$fib(5) = fib(4) + fib(3)$

Here fib(4) and fib(3) are sub-problems of the original problem fib(5).

**Step 2.**

$fib(4) = fib(3) + fib(2)$

Similarly, $fib(3) = fib(2) + fib(1)$, and so on. Continuing this way, we can expand the whole computation into a recursion tree.

In the evaluation of **fib(5)** we calculate **fib(3)** twice, which in turn causes **fib(2)** to be calculated three times. So instead of calculating **fib(2)** and **fib(3)** again and again, we can store their values the first time we compute them and reuse them whenever they are needed in the future.

Now let's write the two solutions for this.

First, the naive recursive solution with overlapping sub-problems:

```
def fib(n):
    # Naive recursion: recomputes the same sub-problems many times
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
```

Now let's take a look at the dynamic programming solution.

```
def fib(n):
    if n <= 1:
        return n
    # store[i] will hold the i-th Fibonacci number
    store = [0] * (n + 1)
    store[1] = 1
    for i in range(2, n + 1):
        store[i] = store[i-1] + store[i-2]
    return store[n]
```

Now let's compare the time taken by the two solutions: the naive version makes an exponential number of recursive calls (roughly $O(2^n)$), while the dynamic programming version runs in $O(n)$.
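As a rough sketch of that comparison (my addition, not a benchmark from the original post), we can count how much work each approach does for n = 20; `naive_calls` and `dp_calls` are hypothetical helper names:

```python
# Count the work done by each approach (sketch; names are my own).

def naive_calls(n, counter):
    # Same recursion as the naive fib, but tallies every call made
    counter[0] += 1
    if n <= 1:
        return n
    return naive_calls(n - 1, counter) + naive_calls(n - 2, counter)

def dp_calls(n):
    # The bottom-up loop runs once per value of i in range(2, n + 1)
    return max(n - 1, 0)

counter = [0]
naive_calls(20, counter)
print(counter[0])    # 21891 calls -- grows exponentially with n
print(dp_calls(20))  # 19 loop iterations -- grows linearly with n
```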

**Sub-problems vs Overlapping Sub-problems**

When sub-problems are needed again and again, they are categorized as overlapping sub-problems. Storing the values of overlapping sub-problems optimizes our solution, as we avoid the computation needed to solve the same sub-problem repeatedly.

In the Fibonacci example above, the sub-problem **fib(2)** is computed 3 times.

We gain nothing by storing results for non-overlapping sub-problems. In binary_search, for example, there is no sense in storing the result for a sub-array, since that sub-problem will never be needed again.
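To make the contrast concrete, here is a minimal binary search sketch (my addition): every recursive call works on a disjoint half of the array, so no sub-problem ever repeats and caching would gain nothing.

```python
def binary_search(arr, target, lo, hi):
    # Each call narrows to one half; the halves never overlap,
    # so no sub-problem (lo, hi) is ever solved twice.
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid - 1)

print(binary_search([1, 3, 5, 7, 9], 7, 0, 4))  # 3
```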

Dynamic programming is used to solve problems which have overlapping subproblems.

**Example Problem**

Throughout the rest of this post we will use this problem as a reference.

Given an **MxN** board, we need to find the number of ways to reach cell **B** starting from cell **A**. From cell (x,y) you can move to cell (x+1,y) or cell (x,y+1) (as long as you don't cross the board's boundaries).


For example, on a 3x3 board there are 6 ways to go from the top-left cell to the bottom-right cell.
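As an aside (not part of the original post), the count can be cross-checked with the closed-form formula for monotone lattice paths, $\binom{(m-1)+(n-1)}{m-1}$; `ways` is a hypothetical helper name:

```python
from math import comb

def ways(m, n):
    # Number of right/down paths across an m x n board:
    # choose which (m - 1) of the (m - 1) + (n - 1) moves go down
    return comb((m - 1) + (n - 1), m - 1)

print(ways(3, 3))  # 6
```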

**Identify a Dynamic Programming Problem**

Most of the time, dynamic programming problems fall into two categories.

**1. Optimization problems**, where we minimize or maximize some quantity.

**2. Combinatorial problems**, where we count the number of ways something can be done.

Another hint is that the solution to the original problem depends on smaller sub-problems, and those sub-problems overlap.

Our example grid problem is a combinatorial problem.

Let's examine the example problem for overlapping sub-problems, taking a 3x3 grid as an example.

Let's define a function

func(x,y) = number of ways to reach cell (x,y).

Now, the only ways to reach cell (x,y) are from cell (x-1,y) or cell (x,y-1). Hence the number of ways to reach cell (x,y) is the sum of the number of ways to reach cells (x,y-1) and (x-1,y):

$func(x,y) = func(x,y-1) + func(x-1,y)$

$func(x,y-1) = func(x,y-2) + func(x-1,y-1)$

$func(x-1,y) = func(x-2,y) + func(x-1,y-1)$

We can clearly see that the sub-problem func(x-1,y-1) is repeated, and we would discover more and more repeated terms if we kept expanding. Since our problem does have overlapping sub-problems, it is a candidate for dynamic programming.
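We can also verify the overlap empirically. This sketch (my addition, not from the original post) runs the plain recursion on the 3x3 grid and counts how often each sub-problem is evaluated:

```python
from collections import Counter

calls = Counter()

def func(x, y):
    # Plain recursion, no caching; tally every sub-problem we touch
    calls[(x, y)] += 1
    if x < 0 or y < 0:
        return 0
    if x == 0 and y == 0:
        return 1
    return func(x, y - 1) + func(x - 1, y)

total = func(2, 2)    # destination cell of the 3x3 grid
print(total)          # 6 ways
print(calls[(1, 1)])  # the sub-problem func(1,1) is evaluated 2 times
```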

**Solving a Dynamic Programming Problem**

A DP "state" is the minimal information that uniquely defines a sub-problem of the original problem.

In our problem, the pair (x,y) is the DP state, as it uniquely identifies a sub-problem, which neither x nor y alone can do.

Every Dynamic Programming problem has a schema to be followed:

1. Break the problem down into optimal sub-problems and find the DP state.

2. Express the problem in terms of smaller sub-problems.

3. Compute the optimal solution in a bottom-up fashion.

4. Construct an optimal solution from the computed information.

**Top Down with Memoization:**


You assume you have already computed all subproblems. You have no idea about the optimal evaluation order. You typically perform a recursive call (or some iterative equivalent) from the root, and either hope you will get close to the optimal evaluation order, or you have a proof that you will get the optimal evaluation order. You ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-trees are not recomputed.

```
def count(x, y, store):
    # Check if sub-problem (x, y) is already calculated
    if (x, y) in store:
        return store[(x, y)]
    # Outside the board
    if x < 0 or y < 0:
        return 0
    # Base case: the starting cell
    if x == 0 and y == 0:
        return 1
    # Express the answer in terms of sub-problems
    ans = count(x, y-1, store) + count(x-1, y, store)
    # Store the result for future calls
    store[(x, y)] = ans
    return ans

# usage: count(2, 2, {}) returns 6 for a 3x3 board
```
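As a side note (my addition, not from the original post), Python's `functools.lru_cache` can do the caching for us; `count_ways` is a hypothetical name chosen to avoid clashing with `count` above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_ways(x, y):
    # Same top-down recursion; the decorator stores each result
    if x < 0 or y < 0:     # outside the board
        return 0
    if x == 0 and y == 0:  # starting cell
        return 1
    return count_ways(x, y - 1) + count_ways(x - 1, y)

print(count_ways(2, 2))  # 6 ways on a 3x3 board
```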

**Bottom up:**

You can also think of it as a "table-filling" algorithm (though usually multidimensional, this 'table' may have non-Euclidean geometry in very rare cases). This is like memoization but more active, and involves one additional step: You must pick, ahead of time, exactly in which order you will do your computations (this is not to imply the order must be static, but that you have much more flexibility than memoization).

```
int count(int m, int n)
{
    int dp[m][n];
    // fill first column
    for (int i = 0; i < m; i++)
        dp[i][0] = 1;
    // fill first row
    for (int j = 0; j < n; j++)
        dp[0][j] = 1;
    // dp transitions
    for (int i = 1; i < m; i++)
    {
        for (int j = 1; j < n; j++)
            dp[i][j] = dp[i-1][j] + dp[i][j-1];
    }
    return dp[m-1][n-1];
}
```
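For comparison, here is the same bottom-up idea in Python (my sketch, not from the original post). Since row i only needs row i-1, one reusable row of the table suffices, bringing the extra space down to O(n); `count_paths` is a hypothetical name:

```python
def count_paths(m, n):
    # row[j] holds dp[i][j] for the row currently being filled
    row = [1] * n                 # first row: one way to reach each cell
    for _ in range(1, m):
        for j in range(1, n):
            row[j] += row[j - 1]  # dp[i][j] = dp[i-1][j] + dp[i][j-1]
    return row[n - 1]

print(count_paths(3, 3))  # 6, matching the C version
```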

Stay tuned for the next post, in which we will solve dynamic programming problems asked in Google interviews.