Dynamic Programming

Overview

Dynamic programming is a powerful problem-solving technique (an algorithmic paradigm) that simplifies complex problems by breaking them into smaller, manageable subproblems. It solves each subproblem just once and stores its solution in a data structure such as an array or map. Each subproblem’s solution is indexed by its input parameters, enabling easy retrieval. When the same subproblem is encountered again, the previously computed solution is simply retrieved rather than recalculated, trading a usually modest amount of extra storage for substantial savings in computation time.
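To make this concrete, here is a minimal Python sketch (the function and variable names are illustrative, not from the text) of a Fibonacci computation that stores each subproblem’s solution in a dictionary keyed by its input parameter:

```python
def fib(n, cache=None):
    """Return the nth Fibonacci number, solving each subproblem only once."""
    if cache is None:
        cache = {}
    if n in cache:                 # subproblem seen before: retrieve, don't recompute
        return cache[n]
    result = n if n < 2 else fib(n - 1, cache) + fib(n - 2, cache)
    cache[n] = result              # index the solution by its input parameter
    return result

print(fib(40))  # 102334155; only 41 distinct subproblems are ever solved
```

Without the cache, the same call would trigger over three hundred million redundant recursive calls.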

Characteristics

The first two characteristics are the properties a problem must exhibit to justify the use of dynamic programming.

  1. Optimal Substructure: Dynamic programming breaks a problem down into smaller, interconnected subproblems. An optimal solution to the original problem can be constructed by combining optimal solutions to those subproblems.

  2. Overlapping Subproblems: Dynamic programming identifies that the same subproblems are often encountered multiple times while solving a problem. To avoid redundant calculations, it stores and reuses the solutions to these subproblems. (A counting sketch of this overlap follows the list.)

  3. Memoization: To store the solutions to subproblems, dynamic programming caches results in data structures such as arrays or maps. This allows for quick retrieval of previously computed results when needed.

  4. Recursive Formulation: Dynamic programming often involves a recursive formulation, where the solution to a larger problem depends on the solutions to smaller subproblems. These recursive relationships are used to derive the optimal solution.

  5. Bottom-Up or Top-Down Approach: Dynamic programming can be implemented using either a bottom-up approach, where solutions to smaller subproblems are computed first and combined to solve larger ones, or a top-down approach, which starts with the original problem and recursively solves its subproblems.

  6. Efficiency: While dynamic programming saves computational time by avoiding redundant calculations, it may require additional storage space to store intermediate results. The trade-off between time and space efficiency depends on the problem and the chosen implementation.
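To see the overlap from characteristic 2 concretely, the following Python sketch (names are illustrative) counts how many calls a naive recursion for the binomial coefficient \(C(n, k)\) makes, versus how few distinct subproblems a memoized version actually solves:

```python
from functools import lru_cache

calls = 0

def binom_naive(n, k):
    """C(n, k) via Pascal's rule; the same (n, k) pairs recur many times."""
    global calls
    calls += 1
    if k == 0 or k == n:
        return 1
    return binom_naive(n - 1, k - 1) + binom_naive(n - 1, k)

@lru_cache(maxsize=None)
def binom_memo(n, k):
    """Same recurrence, but each distinct (n, k) subproblem is solved once."""
    if k == 0 or k == n:
        return 1
    return binom_memo(n - 1, k - 1) + binom_memo(n - 1, k)

binom_naive(20, 10)
print(calls)                    # 369511 calls for the naive recursion
binom_memo(20, 10)
print(binom_memo.cache_info())  # far fewer distinct subproblems are cached
```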

Applications

Dynamic programming can be employed to solve a wide range of well-known computational problems including:

  • Fibonacci numbers: computing the nth Fibonacci number.

  • Knapsack problem: choosing items to maximize total value without exceeding the knapsack’s capacity.

  • Longest common subsequence: finding the longest subsequence common to two sequences.

  • Integer partition: counting the ways to partition an integer into sums of positive integers.

  • Coin change: finding the optimal way to make change for a given amount from a set of coin denominations (a sketch follows this list).

  • Rod cutting: finding the optimal way to cut a rod into smaller pieces and maximize the profit.

  • Matrix chain multiplication: finding the optimal way to multiply a chain of matrices.
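As a concrete instance, here is a bottom-up Python sketch of the coin change problem mentioned above (minimizing the number of coins; all names are illustrative assumptions):

```python
def min_coins(amount, denominations):
    """Fewest coins from `denominations` summing to `amount`, or None."""
    INF = float("inf")
    dp = [0] + [INF] * amount              # dp[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else None

print(min_coins(63, [1, 5, 10, 25]))  # 6, e.g. 25 + 25 + 10 + 1 + 1 + 1
```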

Quick Exercises:

  • Can you categorize the above problems by their problem types?

  • Why is dynamic programming not a good fit for bin packing problems?

Benefits

  • Optimal Solutions: For problems with optimal substructure, dynamic programming guarantees finding an optimal solution.

  • Efficiency: It reduces redundant calculations by storing and reusing solutions to subproblems, making it more efficient than brute force approaches.

  • Versatility: Dynamic programming can be applied to a wide range of problems across various disciplines.

  • Clarity: It often leads to clean and structured algorithms that are easier to understand and implement.

Challenges

  • Memory Usage: Storing solutions to subproblems may require substantial memory, especially for problems with many subproblems.

  • Identifying Subproblems: Determining the right subproblems and their relationships can be complex in some cases.

  • Algorithm Design: Designing a dynamic programming algorithm can be challenging and may require a deep understanding of the problem domain.

  • Time Complexity: While dynamic programming reduces time complexity in many cases, as an exact algorithmic paradigm it can still be expensive; the Held-Karp dynamic program for the traveling salesman problem, for example, runs in \(O(2^n n^2)\) time.

Approaches

  • Top-down approach: This approach is also known as memoization. It starts with the original problem and recursively breaks it down into smaller subproblems. It then stores and reuses the solutions to these subproblems to solve the original problem.

  • Bottom-up approach: This approach is also known as tabulation. It starts with the smallest subproblems and iteratively combines their solutions to solve larger subproblems until the original problem is solved. Both approaches are sketched below on the rod cutting problem.
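The following Python sketch contrasts the two approaches on the rod cutting problem; the price table and function names are illustrative assumptions, not fixed by the text:

```python
from functools import lru_cache

PRICES = {1: 1, 2: 5, 3: 8, 4: 9, 5: 10, 6: 17, 7: 17, 8: 20}  # price by length

@lru_cache(maxsize=None)
def cut_rod_top_down(n):
    """Top-down (memoization): recurse from n, caching each result."""
    if n == 0:
        return 0
    return max(PRICES[i] + cut_rod_top_down(n - i)
               for i in range(1, n + 1) if i in PRICES)

def cut_rod_bottom_up(n):
    """Bottom-up (tabulation): fill dp[0..n] from the smallest length up."""
    dp = [0] * (n + 1)
    for length in range(1, n + 1):
        dp[length] = max(PRICES[i] + dp[length - i]
                         for i in range(1, length + 1) if i in PRICES)
    return dp[n]

print(cut_rod_top_down(8), cut_rod_bottom_up(8))  # 22 22
```

Both versions solve the same \(n + 1\) distinct subproblems; the top-down version touches only subproblems reachable from the original call, while the bottom-up version avoids recursion overhead.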

Complexity Analysis

Time complexity

Although dynamic programming can be defined in terms of a recursive algorithm, its complexity is not analyzed the same way as that of a divide-and-conquer algorithm. The reason is that solving a subproblem may not always take the same amount of time: a subproblem that has been solved before costs only a lookup. Therefore, the time complexity of a dynamic programming algorithm is not simply the number of subproblem invocations multiplied by the time to solve each one. Instead, it is the number of distinct subproblems times the time to solve each subproblem.

For example, the time complexity of the memoized Fibonacci algorithm is \(\Theta(n)\), as there are \(n\) distinct subproblems and each subproblem takes \(\Theta(1)\) time to solve.

Depending on the problem, either the top-down or the bottom-up approach can be more efficient: top-down memoization evaluates only the subproblems actually reachable from the original problem, while bottom-up tabulation fills the entire table but avoids recursion overhead.

Space complexity

As dynamic programming algorithms store the solutions to subproblems, they require additional memory. The space complexity of a dynamic programming algorithm is therefore typically the size of the table used to store those solutions, although it can sometimes be reduced when each subproblem reads only a few recent table entries, as shown below.
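For instance, the Fibonacci recurrence needs only the two most recent table entries, so a sketch like the following (names are illustrative) reduces the space from \(\Theta(n)\) to \(\Theta(1)\) while keeping the time at \(\Theta(n)\):

```python
def fib_constant_space(n):
    """Tabulated Fibonacci keeping only the two most recent entries."""
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr  # slide the two-entry window forward
    return prev

print(fib_constant_space(40))  # 102334155
```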