DP — Optimization Problem using Memoization

Sethuram.S.V
5 min read · Nov 1, 2021

In the previous blogs, we saw how to solve decision and combinatorics problems using DP. This time around, we're going to look at solving an optimization problem using DP.

What is an optimization problem?

In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. For example, what is the best way to obtain the sum 10 using the array of elements {1, 2, 5}? The answer is {5, 5}, as it uses the fewest numbers among the feasible solutions, such as {2, 2, 2, 2, 2} or {2, 2, 1, 5}.

DP vs Greedy?

You may wonder why you should go with DP when the above example could also be solved greedily. Yep, the above example can be solved using a greedy approach. But the greedy approach won't necessarily end up with the best possible solution. For example, let's take the target sum to be 30, to be formed from the numbers in the array {1, 15, 25}. The greedy algorithm chooses 25 greedily and then five 1's, arriving at a solution of six numbers, whereas the best solution is taking just two 15's.

This is because, in a greedy algorithm, the strategy is to make whatever choice seems best at the moment, in the hope that it will lead to the global optimum. In DP, on the other hand, we make a decision at each step by considering the current problem together with the solutions to previously solved subproblems, to arrive at the globally optimal solution.
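To see greedy fall short concretely, here is a quick sketch of a largest-first greedy strategy (the function name `greedy_min_coins` is mine, not from the blog):

```python
def greedy_min_coins(candidates, target):
    """Greedy: always take the largest candidate that still fits."""
    chosen = []
    for c in sorted(candidates, reverse=True):
        while target >= c:
            target -= c
            chosen.append(c)
    return chosen if target == 0 else None

# With target 30 and candidates {1, 15, 25}, greedy commits to 25 first
# and is then forced to pad with five 1's (six numbers in total),
# while the true optimum is just two 15's.
print(greedy_min_coins([1, 15, 25], 30))  # [25, 1, 1, 1, 1, 1]
```

Because greedy never revisits the choice of 25, it can't discover that skipping it leads to a shorter combination.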

Note

If you're new to DP or need a quick refresher on the basic terminology and concepts, especially memoization, check out my blog over here or any other resource, as moving on requires some familiarity with the basics of DP. The problem I have chosen is an expansion of the very same problem I discussed in my last blog as a combinatorics problem.

Now, let's take an example of an optimization problem and try to solve it. Given an array of distinct integers, candidates, and a target sum, return the best combination (the minimal number of candidates) whose chosen numbers sum to target. Given that,

  1. the same number may be chosen from candidates any number of times
  2. the target sum and numbers in the array are all positive integers.

Well, a lucid approach will be a basic brute-force recursive solution:

  1. We can take a number from the array and subtract it from the target sum. Now, we recursively call the function with the new target sum.
  2. The base case can be made to check whether the target sum is equal to zero (the target sum is achievable) or less than zero (the target sum is not achievable).
  3. If the sum is equal to zero, we return an empty list, to which we'll add the elements used to get there as each recursive call returns.
  4. Else, we return null.
  5. We compare the current list with the shortest list we previously had and if the current list is smaller, we update the shortest list to be the current list.
  6. We continue this process until we exhaust all possibilities.
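The steps above can be sketched in code roughly as follows (the function name `best_sum` and the sample inputs are my own choices for illustration):

```python
def best_sum(target, candidates):
    """Return the shortest combination of candidates summing to target,
    or None if no combination exists. Brute-force recursion."""
    if target == 0:
        return []    # base case: target achieved; start an empty list
    if target < 0:
        return None  # base case: overshot; no solution down this branch
    shortest = None
    for c in candidates:
        result = best_sum(target - c, candidates)
        if result is not None:
            combination = result + [c]  # record the element used on the way back
            # keep the current list only if it is smaller than the shortest so far
            if shortest is None or len(combination) < len(shortest):
                shortest = combination
    return shortest

print(best_sum(8, [1, 4, 8]))  # [8]
```

Every candidate is tried at every level, so all possibilities are exhausted before the shortest list is returned.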
pictorial representation of the brute recursive algorithm

As you can see in the image above, the algorithm arrives at a feasible solution, [4, 4] (the 2nd branch), yet it repeats the process once more with 8 as a candidate and then discards the previous solution (shortest = [4, 4]) because the current solution (combination = [8]) is shorter.

Well, the time complexity of the above algorithm is O(m*nᵐ), where n is the number of candidates and m is the target sum. So the brute-force algorithm is impractical for real-world problems.

But the purpose of the recursive algorithm is to build the basic intuition for solving the problem, expose the shortcomings of a naive brute-force recursion, and show how they can be overcome with the help of memoization.

Little exercise

I would like you guys to trace out the recursion tree for an example of your liking and try to observe the overlapping subproblems and optimal substructure property of our brute force algorithm.

If you’re stuck somewhere or need a little brush-up, you could read my introduction to dynamic programming with memoization:

So the idea is that if recursive calls with the same arguments are made repeatedly, the inefficient recursive algorithm can be memoized by saving these subproblem solutions in a memo object (a hashmap, array, or dictionary, depending on your language preference) so they do not have to be recomputed.
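Applied to the brute-force recursion above, memoization only needs a dictionary keyed by the remaining target (again, `best_sum` and the sample inputs are illustrative names of my own):

```python
def best_sum(target, candidates, memo=None):
    """Memoized version: cache the shortest combination for each
    remaining target so each subproblem is solved only once."""
    if memo is None:
        memo = {}
    if target in memo:
        return memo[target]  # reuse a previously solved subproblem
    if target == 0:
        return []
    if target < 0:
        return None
    shortest = None
    for c in candidates:
        result = best_sum(target - c, candidates, memo)
        if result is not None:
            combination = result + [c]
            if shortest is None or len(combination) < len(shortest):
                shortest = combination
    memo[target] = shortest  # save the answer before returning
    return shortest

# A target this large would be hopeless for the brute-force version,
# but the memoized one handles it instantly.
print(best_sum(100, [1, 2, 5, 25]))  # [25, 25, 25, 25]
```

Each distinct remaining target is computed exactly once; every later call with the same argument is answered straight from the memo.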

The time complexity of the memoized code is O(n*m²), which is efficient enough for real-world problems. I would like you all to try sketching the recursion tree for the memoized code, to understand how memoization achieves such a huge cut in the algorithm's time complexity.

Quick Recap

  1. We saw what an optimization problem is, along with an example problem to go with it.
  2. We compared the greedy and DP approaches.
  3. We laid out a brute-force algorithm to solve the problem.
  4. Then, we memoized the code to make the brute-force algorithm much more efficient.

What’s next?

I would like you all to try out the coin change problem, as it is very similar in concept to the problem we have discussed. If you're feeling confident with the top-down approach, then move on to solving the same or similar problems with the bottom-up approach.
