What Is Dynamic Programming and How To Use It
Channel: CS Dojo
Full video link: https://www.youtube.com/watch?v=vYquumk4nWw
Algorithm • hard
Dynamic programming solves problems by reusing answers to overlapping subproblems.
The workflow is: define state, define recurrence, define base case, compute in dependency-safe order.
It often converts exponential brute force into polynomial runtime.
Typical Complexity Baseline
| Metric | Value |
|---|---|
| Time | Varies by problem; often reduced from exponential to polynomial |
| Space | Proportional to the number of states; often reducible via state compression |
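To make the exponential-to-polynomial claim concrete, here is a minimal TypeScript sketch contrasting naive recursion with a memoized version. The function names `fibNaive` and `fibMemo` are illustrative, not from the video.

```typescript
// Naive recursion recomputes the same subproblems: roughly O(2^n) calls.
function fibNaive(n: number): number {
  if (n <= 1) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

// Memoized version caches each answer, so each state is computed once: O(n).
function fibMemo(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  const cached = memo.get(n);
  if (cached !== undefined) return cached;
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}
```

Both return the same answers; only the amount of repeated work differs.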
Prefer video learning? This explainer gives a quick visual walkthrough of the core idea before you dive into the detailed sections below.
See subproblem reuse by filling a memo/table instead of recomputing recursively.
Define state and base cases first.
| Problem | State | Base Cases | Transition |
|---|---|---|---|
| Fibonacci(6) | dp[i] = Fibonacci(i) | dp[0]=0, dp[1]=1 | dp[i] = dp[i-1] + dp[i-2] |
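The worked example above can be sketched as a bottom-up table fill in TypeScript (the name `fibonacciTable` is illustrative):

```typescript
// Bottom-up tabulation: dp[i] = Fibonacci(i).
function fibonacciTable(n: number): number {
  const dp: number[] = new Array(n + 1).fill(0); // dp[0] = 0 is already set
  dp[1] = 1; // second base case
  for (let i = 2; i <= n; i += 1) {
    dp[i] = dp[i - 1] + dp[i - 2]; // transition
  }
  return dp[n];
}
```

For n = 6 the table fills to [0, 1, 1, 2, 3, 5, 8], so the answer is 8.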
Learn the core building blocks and terminology in one place before the comparisons below, so the mechanics are clear.
State
What it is: The minimal set of variables that uniquely identifies a subproblem.
Why it matters: Too little state makes the recurrence ambiguous; too much state inflates the table and the runtime.
Transition
What it is: The formula that builds the current state from smaller states.
Why it matters: The transition encodes the problem's structure; an incorrect recurrence propagates wrong answers through every later state.
Base case
What it is: Known trivial answers that anchor the recurrence.
Why it matters: Without correct base cases, top-down recursion never terminates and bottom-up tables start from wrong values.
Memo table
What it is: A cache for the top-down approach, or a table for the bottom-up approach.
Why it matters: Storing each answer once is what turns repeated exponential work into a single pass over the states.
Iteration order
What it is: An order that guarantees dependencies are already computed.
Why it matters: Bottom-up code that reads a state before it is filled silently produces wrong results, so the order must follow the dependency direction.
Overlapping subproblems
What it is: The same subproblem appears many times in the recursion tree.
Why it matters: Overlap is what makes caching pay off; without it, DP adds memory cost with no speedup.
Optimal substructure
What it is: An optimal solution can be built from optimal sub-solutions.
Why it matters: This property is what justifies the recurrence; if it fails, combining optimal sub-answers can miss the global optimum.
Memoization
What it is: Top-down recursion plus a cache.
Why it matters: It computes only the states actually reachable from the query, which can beat tabulation when the state space is sparse.
Tabulation
What it is: Bottom-up iterative table filling.
Why it matters: It avoids recursion depth limits and usually has lower constant-factor overhead than memoized recursion.
State compression
What it is: Reducing memory by storing only the recent rows/columns the transition still needs.
Why it matters: Many DP tables depend only on the previous row or the previous two values, so O(m*n) or O(n) space can drop to O(n) or O(1).
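State compression can be illustrated with a Unique Paths sketch in TypeScript: the full grid table would be O(m*n), but each row depends only on the previous one, so a single 1D row suffices. The function name `uniquePaths` is illustrative.

```typescript
// Count paths from the top-left to bottom-right of a grid moving only
// right or down. paths(r, c) = paths(r-1, c) + paths(r, c-1).
function uniquePaths(rows: number, cols: number): number {
  const row: number[] = new Array(cols).fill(1); // first row: one path to each cell
  for (let r = 1; r < rows; r += 1) {
    for (let c = 1; c < cols; c += 1) {
      row[c] += row[c - 1]; // row[c] held paths(r-1, c); row[c-1] is paths(r, c-1)
    }
  }
  return row[cols - 1];
}
```

Memory drops from O(m*n) to O(n) because the in-place update reuses the previous row's values exactly when they are still needed.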
This walkthrough connects the core concepts of Dynamic Programming into one end-to-end execution flow, using Fibonacci(6) as the running example.
Step 1: Define the state
Choose the minimal variables that uniquely represent a subproblem.
Before
Just the question: what is Fibonacci(6)?
After
dp[i] = Fibonacci(i), so the final answer is dp[6].
Why this step matters: A well-chosen state keeps the table small and the recurrence unambiguous.
Step 2: Define the transition
Write the formula that builds the current state from smaller states.
Before
Isolated states with no way to combine them.
After
dp[i] = dp[i-1] + dp[i-2], letting a partial table grow left to right.
Why this step matters: The transition encodes the problem's logic; everything else is bookkeeping around it.
Step 3: Define the base cases
Write down the known trivial answers that anchor the recurrence.
Before
The recurrence has nothing to bottom out on, but the final answer is needed at dp[6].
After
dp[0] = 0 and dp[1] = 1 give the recurrence a starting point.
Why this step matters: Base cases terminate the recursion and seed the bottom-up table.
Step 4: Fill the memo/table in dependency order
Compute states so that every dependency is ready before it is read.
Before
Base cases only: [0, 1, ?, ?, ?, ?, ?].
After
Full table [0, 1, 1, 2, 3, 5, 8]; dp[6] = 8 is the answer.
Why this step matters: A dependency-safe order guarantees each state is computed exactly once from finished values.
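The same four steps carry over to other 1D problems. As one more end-to-end sketch in TypeScript, here is House Robber (the function name `rob` and the step comments are illustrative):

```typescript
// House Robber: maximize loot without robbing two adjacent houses.
// State:      best[i] = max loot using houses 0..i.
// Transition: best[i] = max(best[i-1], best[i-2] + values[i]).
// Base cases: best[0] = values[0]; best[1] = max(values[0], values[1]).
function rob(values: number[]): number {
  if (values.length === 0) return 0;
  if (values.length === 1) return values[0];
  const best: number[] = new Array(values.length).fill(0);
  best[0] = values[0];
  best[1] = Math.max(values[0], values[1]);
  for (let i = 2; i < values.length; i += 1) {
    // Either skip house i, or rob it and add the best up to house i-2.
    best[i] = Math.max(best[i - 1], best[i - 2] + values[i]);
  }
  return best[values.length - 1];
}
```

For [2, 7, 9, 3, 1] the answer is 12 (rob houses with values 2, 9, and 1).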
vs Greedy
When to choose this: Choose DP when local best choice does not guarantee global optimum.
Tradeoff: Greedy is simpler and faster when valid.
vs Backtracking
When to choose this: Choose DP when subproblems overlap significantly.
Tradeoff: Backtracking may be simpler for constraint search without overlap.
vs Divide and conquer
When to choose this: Choose DP when branches reuse same states.
Tradeoff: Divide-and-conquer fits independent subproblems better.
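A concrete case where greedy fails but DP succeeds: making change for 6 from coins [1, 3, 4]. Greedy (largest coin first) picks 4+1+1 = 3 coins, while the optimum is 3+3 = 2 coins. A minimal TypeScript sketch of the DP side (the function name `minCoins` is illustrative):

```typescript
// Minimum number of coins summing to `amount`, or -1 if impossible.
// State: dp[a] = fewest coins that make amount a.
function minCoins(coins: number[], amount: number): number {
  const dp: number[] = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // base case: zero coins make amount 0
  for (let a = 1; a <= amount; a += 1) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] + 1 < dp[a]) {
        dp[a] = dp[a - coin] + 1; // use this coin on top of the best for a-coin
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
```

Because dp[a] considers every coin, it catches combinations the greedy choice rules out too early.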
Speech and sequence models
Decoding and alignment in speech and sequence models have historically relied on DP-style methods.
Takeaway: DP remains practical in sequence optimization pipelines.
Bioinformatics platforms
DNA/protein alignment uses classic DP matrices at scale.
Takeaway: DP powers core scientific compute workflows.
Fintech analytics
Portfolio and allocation optimization often combines DP-style dynamic constraints.
Takeaway: DP is valuable when decisions depend on prior cumulative state.
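The alignment matrices mentioned above share their shape with the classic edit-distance DP. A minimal TypeScript sketch of Levenshtein distance (the function name `editDistance` is illustrative; real alignment tools use match/mismatch/gap scores instead of unit costs):

```typescript
// Levenshtein edit distance between strings a and b.
// State: dp[i][j] = edits to turn a[0..i) into b[0..j).
function editDistance(a: string, b: string): number {
  const m = a.length;
  const n = b.length;
  const dp: number[][] = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i += 1) dp[i][0] = i; // delete all of a[0..i)
  for (let j = 0; j <= n; j += 1) dp[0][j] = j; // insert all of b[0..j)
  for (let i = 1; i <= m; i += 1) {
    for (let j = 1; j <= n; j += 1) {
      const substCost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,            // delete a[i-1]
        dp[i][j - 1] + 1,            // insert b[j-1]
        dp[i - 1][j - 1] + substCost // match or substitute
      );
    }
  }
  return dp[m][n];
}
```

The full O(m*n) matrix is kept here for clarity; production aligners apply the state-compression idea described earlier.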
Classic 1D DP where state[i] depends on previous two states.
Complexity: Time O(n), Space O(1) with compression
Climbing stairs DP
```typescript
// Number of distinct ways to climb stepCount stairs taking 1 or 2 steps at a time.
// ways(i) = ways(i-1) + ways(i-2); only the last two values are kept (state compression).
function climbStairs(stepCount: number): number {
  if (stepCount <= 2) return stepCount; // base cases: 1 way for 1 step, 2 ways for 2
  let oneStepBefore = 2;  // ways(2)
  let twoStepsBefore = 1; // ways(1)
  for (let stepIndex = 3; stepIndex <= stepCount; stepIndex += 1) {
    const currentWays = oneStepBefore + twoStepsBefore;
    twoStepsBefore = oneStepBefore;
    oneStepBefore = currentWays;
  }
  return oneStepBefore;
}
```
Work through these classic problems, ordered by difficulty, to practice recognizing when DP is the right fit before implementation.
| # | Problem | Difficulty | Typical Complexity |
|---|---|---|---|
| 1 | Climbing Stairs | Easy | O(n) |
| 2 | Min Cost Climbing Stairs | Easy | O(n) |
| 3 | House Robber | Medium | O(n) |
| 4 | Coin Change | Medium | O(amount * coins) |
| 5 | Longest Increasing Subsequence | Medium | O(n log n) |
| 6 | Unique Paths | Medium | O(m*n) |
| 7 | Partition Equal Subset Sum | Medium | O(n*sum) |
| 8 | Edit Distance | Hard | O(m*n) |
| 9 | Burst Balloons | Hard | O(n^3) |
| 10 | Distinct Subsequences | Hard | O(m*n) |
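The O(n log n) bound listed for Longest Increasing Subsequence comes from replacing the quadratic DP with a binary search over "smallest tails". A minimal TypeScript sketch (the function name `lengthOfLIS` is illustrative):

```typescript
// Longest Increasing Subsequence in O(n log n).
// tails[k] = smallest possible tail value of an increasing subsequence of length k+1.
function lengthOfLIS(nums: number[]): number {
  const tails: number[] = [];
  for (const x of nums) {
    // Binary search for the first tail >= x.
    let lo = 0;
    let hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (tails[mid] < x) lo = mid + 1;
      else hi = mid;
    }
    tails[lo] = x; // extend (lo === tails.length) or tighten an existing tail
  }
  return tails.length;
}
```

Note that `tails` is not itself a valid subsequence; only its length is meaningful.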