DP — Introduction to Dynamic Programming: Part 1

Sethuram.S.V
5 min read · Jul 12, 2021

Dynamic programming is an algorithmic approach for solving an optimization problem by breaking it down into simpler subproblems and utilizing the fact that the optimal solution to the overall problem depends upon the optimal solutions of its subproblems. And trust me, that definition is supposed to sound like gibberish when you're new to dynamic programming. So, instead of going through all the definitions and methods in DP in one go, let's start with an example.

Let’s take the classic example for learning DP, which is the Fibonacci sequence. The Fibonacci sequence is a sequence in which each number is the sum of the two preceding ones, starting from 0 and 1.

Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21 …

Note

Over here I have taken the Fibonacci series as the example to explain the concepts, but it can be solved more efficiently by an iterative algorithm than by a recursive one.

So, here is a simple recursive code to generate the nth term of the Fibonacci series. As you can see, we have defined our base case, which applies when n is either 0 or 1, for which we just return the number itself. In the recursive case, we return the sum of the previous two Fibonacci terms by making recursive calls to the function with parameters n - 1 and n - 2. And please do me a favor: try running this logic in the programming language of your choice, so that you can understand the essence of what I’m trying to convey in the upcoming paragraphs.
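For instance, in Python, a minimal sketch of that recursive logic looks like this:

```python
def fib(n):
    # Base case: fib(0) = 0 and fib(1) = 1, so just return n itself.
    if n <= 1:
        return n
    # Recursive case: the sum of the two preceding Fibonacci terms.
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Try it with a few small values of n first, and then with something like n = 100.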

Well, if you ran the program, you might be wondering, ‘I got my answers for the first two values of n, whereas the answer for the third test case doesn’t seem to be printed, maybe the code is wrong!’ No, it’s just that the program hasn’t completed the roughly 2¹⁰⁰ steps it takes to compute the value for n = 100. Yeah, for real, I’m not kidding: the time complexity of the above code is O(2ⁿ).

So, you may wonder: if the time complexity is exponential, why would I ever need this brute-force algorithm, since it wouldn’t be feasible for it to compute answers for any substantially large value of n? Yeah, that’s true, but let’s not focus on the algorithm itself; let’s focus on the recursion for a minute or two.

Well, here is the recursion tree for the above logic with n = 5 as an example. Take a look at it, and observe the pattern in the recursive calls.

recursion tree for the above program with n = 5

Well, did you notice it? Many recursive calls are repeated across branches. Yeah, that’s the point I was hoping to convey through the brute-force recursive algorithm: it’s the overlapping subproblems characteristic of a dynamic programming problem. You could take another look at the recursion tree, now that you have an idea of what overlapping subproblems means.

recursion tree highlighting the overlapping subproblems
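If you’d like to see the repetition without drawing the tree yourself, you can instrument the function to count how often each argument is hit. A quick sketch in Python (the counter is just for illustration, it isn’t part of the algorithm):

```python
from collections import Counter

calls = Counter()  # how many times fib is called with each argument

def fib(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
# For n = 5, fib(1) alone is computed 5 times and fib(2) three times,
# for 15 calls in total, exactly the repeated branches in the tree above.
print(dict(calls))
```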

Since we have discussed one of the characteristics of DP, I would like to introduce the second characteristic of a DP-based problem, which is the optimal substructure property. Think of a problem whose optimal solution can be constructed from the optimal solutions of its subproblems. To put it in context, in our Fibonacci sequence we used the property that each term in the sequence is the sum of the preceding two terms.

fib(n) = fib(n - 1) + fib(n - 2)

So, as you can see, we’re getting the nth Fibonacci term by breaking the problem down into subproblems, namely the (n - 1)th and (n - 2)th terms of the Fibonacci sequence, and summing their results. This is called the optimal substructure property of DP.

Okay, let's take a break from the terminology of DP and return to the problem at hand. As we saw, there are many subproblems where the recursive calls are being repeated. Now we can capitalize on our knowledge: the ith term of the Fibonacci sequence never changes, no matter how many times it’s calculated. So, why not store the answers to the subproblems somewhere, so we can provide the answer to the recursive calls that are about to be repeated?

Well, there’s this thing called a HashMap or dictionary (a hash table data structure), so we can store the result for each subproblem in the hash table once it’s computed, and before making a recursive call we can check the hash table to see whether the result has already been computed.

So, bear with me, and please try running the logic given below in the language of your choice.
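A Python sketch of that memoized logic, using a plain dictionary as the hash table:

```python
def fib(n, memo=None):
    # memo caches the Fibonacci values that have already been computed.
    if memo is None:
        memo = {}
    # If the subproblem was solved before, reuse the cached answer.
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(100))  # finishes instantly now
```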

Did you see the difference? Yeah, of course you did; this time around fib(100) would have been calculated. But did the hash map really help in improving the algorithm? Yes, it did: the time complexity of the above code is O(n). Crazy, right? Introducing one single entity makes a huge difference in the time complexity. Well, you can see it for yourself in the recursion tree given below.

As you can see now, only about 2n calls are made, as the results of previously calculated subproblems are retrieved from the hash table (highlighted in blue). So, the effective time complexity is O(n).

Well, this method of solving the problem is called memoization. To put it in context, before making the recursive call to a subproblem, we check whether its solution is in the hash map, where we have cached the results of the subproblems solved earlier. So, we let the program know, just like a memo: ‘Hey, I have this solution with me, don’t burden yourself by doing the same work again and again!’

Memoization is one of the methods available in DP to solve a problem. It is called a top-down approach, as we try to solve the bigger problem by recursively finding the solutions to smaller subproblems.
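As an aside, if you’re working in Python, the standard library already ships this caching pattern: functools.lru_cache memoizes a function with a one-line decorator, so you don’t even have to manage the dictionary yourself.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, keyed by the argument n
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
```

The recursive body is identical to the brute-force version; only the decorator changes, which is a nice illustration that memoization wraps around the recursion rather than altering it.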

Quick Recap

  1. We saw the characteristics of dynamic programming, a) overlapping subproblems and b) optimal substructure, with the help of the recursive Fibonacci program.
  2. We saw the top-down approach of dynamic programming with memoization, where we cached our solutions to subproblems to avoid repeated recursive calls for the same subproblems.

So, we have come to the end of this post, and thank you so much 😊 if you have made it this far. I will see you in my next post with the bottom-up approach, the tabulation strategy of DP. Fun fact: if you look closely, this blog was written in a bottom-up approach; we addressed the subproblems first and then moved to the bigger picture by relating the ideas to dynamic programming.

