
Algorithm · Hard

Dynamic Programming Complete Guide

Dynamic programming solves problems by reusing answers to overlapping subproblems.

The workflow is: define state, define recurrence, define base case, compute in dependency-safe order.

It often converts exponential brute force into polynomial runtime.

Typical Complexity Baseline

  • Time and space vary with the number of states and the cost per transition.
  • DP often reduces exponential brute force to polynomial runtime.


Dynamic Programming Visualizer

See subproblem reuse by filling a memo/table instead of recomputing recursively.

Step 1 / 3: Define state and base cases first.

Problem

Fibonacci(6)

State

dp[i] = Fibonacci(i)

Base Cases

dp[0]=0, dp[1]=1

Transition

dp[i] = dp[i-1] + dp[i-2]
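The visualizer's state, base cases, and transition map directly onto a short top-down memoized function. This is a minimal sketch; the `fib` name and the Map-based memo are illustrative choices, not part of the guide.

```typescript
// Top-down version of the recurrence above: dp[i] = dp[i-1] + dp[i-2].
// The memo caches each subproblem so it is computed only once.
function fib(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n // base cases: dp[0] = 0, dp[1] = 1

  const cached = memo.get(n)
  if (cached !== undefined) return cached // reuse, don't recompute

  const result = fib(n - 1, memo) + fib(n - 2, memo)
  memo.set(n, result)
  return result
}

console.log(fib(6)) // 8
```

With the memo, each state 0..6 is computed exactly once, so the call tree collapses from exponential to linear size.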

Core Concepts

The core building blocks and terminology, gathered in one place so the mechanics are clear before we compare DP with neighboring techniques.

State

What it is: Minimal variables that uniquely represent a subproblem.

Why it matters: A state that is too large blows up time and memory; a state that is too small cannot distinguish subproblems and produces wrong answers.

Transition

What it is: Formula to build current state from smaller states.

Why it matters: The transition encodes the recurrence itself; a transition that misses a valid predecessor state silently gives wrong results.

Base case

What it is: Known trivial answers anchoring recurrence.

Why it matters: Base cases anchor the recurrence; without them the transition has nothing to build on and table entries are undefined.

Memo table

What it is: Cache for top-down or table for bottom-up approach.

Why it matters: The cache or table is what turns an exponential recursion tree into one computation per distinct state.

Iteration order

What it is: Order that ensures dependencies are already computed.

Why it matters: Reading a state before its dependencies are filled yields garbage; the fill order must respect the dependency graph.

Overlapping subproblems

What it is: Same subproblem appears many times in recursion tree.

Why it matters: Without overlap there is nothing to cache and DP degenerates into plain recursion; overlap is what makes memoization pay off.

Optimal substructure

What it is: Optimal solution can be built from optimal sub-solutions.

Why it matters: If optimal answers cannot be composed from optimal sub-answers, the recurrence is invalid and DP returns a wrong optimum.

Memoization

What it is: Top-down recursion + cache.

Why it matters: Memoization keeps the natural recursive structure and only computes states that are actually reachable from the original problem.

Tabulation

What it is: Bottom-up iterative table filling.

Why it matters: Tabulation avoids recursion depth limits and makes the iteration order, and therefore space compression, explicit.
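To make the memoization/tabulation contrast concrete, here is the same Fibonacci recurrence filled bottom-up. A minimal sketch; the `fibTable` name is an illustrative choice.

```typescript
// Bottom-up tabulation: fill dp[0..n] in index order so each entry's
// dependencies (dp[i-1], dp[i-2]) are already computed when it is read.
function fibTable(n: number): number {
  if (n <= 1) return n

  const dp = new Array<number>(n + 1)
  dp[0] = 0 // base cases anchor the table
  dp[1] = 1
  for (let i = 2; i <= n; i += 1) {
    dp[i] = dp[i - 1] + dp[i - 2] // transition
  }
  return dp[n]
}
```

The loop index is the iteration order: strictly increasing `i` guarantees every dependency exists before it is used.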

State compression

What it is: Reduce memory by storing only needed recent rows/columns.

Why it matters: When a transition only reads the last one or two rows, memory can drop from O(n·m) to O(m) with no change in correctness.

Putting It All Together

This walkthrough connects the core concepts of Dynamic Programming into one end-to-end execution flow.

Step 1

State

Minimal variables that uniquely represent a subproblem.

Before

  • Problem: climb stairs, n=5
  • Define dp[i] = ways to reach step i

After

  • State definition and recurrence fixed
  • Ready to fill table

Transition

Base cases: dp[0]=1, dp[1]=1
Transition: dp[i]=dp[i-1]+dp[i-2]

Why this step matters: A precise state definition (dp[i] = ways to reach step i) determines everything that follows; get it wrong and no correct recurrence can be stated.

Step 2

Transition

Formula to build current state from smaller states.

Before

  • DP table initialized
  • i=2 next

After

  • Partial table: dp = [1, 1, 2, 3, ...]
  • Subproblems reused

Transition

Compute dp[2]=2
Compute dp[3]=3

Why this step matters: Each transition reads already-computed entries, so every subproblem is solved exactly once instead of once per recursion branch.

Step 3

Base case

Known trivial answers anchoring recurrence.

Before

  • Need final answer at dp[5]
  • Continue filling i = 4..5

After

  • Answer = 8
  • No exponential recomputation

Transition

dp[4]=5
dp[5]=8

Why this step matters: The base cases dp[0]=1 and dp[1]=1 seeded the table, so the fill reaches dp[5]=8 with no exponential recomputation.

Step 4

Memo table

Cache for top-down or table for bottom-up approach.

Before

  • Space optimization opportunity
  • Only previous 2 states needed

After

  • Same correctness
  • Space reduced from O(n) to O(1)

Transition

Keep rolling variables prev2, prev1
Update per iteration

Why this step matters: Because dp[i] reads only dp[i-1] and dp[i-2], two rolling variables replace the whole table without changing the answer.

How It Compares

vs Greedy

When to choose this: Choose DP when local best choice does not guarantee global optimum.

Tradeoff: Greedy is simpler and faster when valid.

vs Backtracking

When to choose this: Choose DP when subproblems overlap significantly.

Tradeoff: Backtracking may be simpler for constraint search without overlap.

vs Divide and conquer

When to choose this: Choose DP when branches reuse same states.

Tradeoff: Divide-and-conquer fits independent subproblems better.

Real-World Stories and Company Examples

Google

Speech and sequence models historically relied on DP-style decoding and alignment methods.

Takeaway: DP remains practical in sequence optimization pipelines.

Bioinformatics platforms

DNA/protein alignment uses classic DP matrices at scale.

Takeaway: DP powers core scientific compute workflows.

Fintech analytics

Portfolio and allocation optimization often combines DP-style dynamic constraints.

Takeaway: DP is valuable when decisions depend on prior cumulative state.

Implementation Guide

Classic 1D DP where state[i] depends on previous two states.

Complexity: Time O(n), Space O(1) with compression

Climbing stairs DP

function climbStairs(stepCount: number): number {
  if (stepCount <= 2) return stepCount // base cases: 1 way for n=1, 2 ways for n=2

  let oneStepBefore = 2  // ways to reach step 2
  let twoStepsBefore = 1 // ways to reach step 1

  for (let stepIndex = 3; stepIndex <= stepCount; stepIndex += 1) {
    const currentWays = oneStepBefore + twoStepsBefore // dp[i] = dp[i-1] + dp[i-2]
    twoStepsBefore = oneStepBefore
    oneStepBefore = currentWays
  }

  return oneStepBefore
}

Common Problems and Failure Modes

  • State definition too large causing memory/time explosion.
  • Wrong transition missing valid predecessor states.
  • Forgetting base case leading to undefined values.
  • Computing states in wrong order.
  • Not testing small hand-computable examples.

Tips and Tricks

  • Define state in one sentence before writing any transition logic.
  • Start with recursion + memoization, then convert to tabulation if needed.
  • Base cases should be explicit and tested in isolation.
  • If DP table is too large, look for rolling-array/state-compression opportunities.

When to Use

Use these signals to decide whether DP is the right fit before you start implementing.

Real-system usage signals

  • Eliminates repeated computation across overlapping subproblems.
  • Captures global optimum via local transitions and memoized state.
  • Critical for sequence alignment, knapsack-style optimization, and path counting.

LeetCode-specific tips (including pattern-identification signals)

  • Identification signal: brute-force recursion repeats the same subproblems across branches.
  • Identification signal: objective asks min/max/count/ways under constraints where previous states determine next state.
  • If you can define state + transition + base case cleanly, DP is likely the right approach.
  • For Dynamic Programming questions, start by naming the core invariant before writing code.
  • Use the constraint section to set time/space target first, then pick the data structure/algorithm.
  • Solve one tiny example by hand and map each transition to your variables before implementing.
  • Run adversarial cases: empty input, duplicates, max-size input, and sorted/reverse patterns when relevant.
  • During interviews, explain why your approach is the right pattern for this prompt, not just why the code works.

LeetCode Progression (Easy to Hard)

1. Climbing Stairs (Easy): O(n)
2. Min Cost Climbing Stairs (Easy): O(n)
3. House Robber (Medium): O(n)
4. Coin Change (Medium): O(amount * coins)
5. Longest Increasing Subsequence (Medium): O(n log n)
6. Unique Paths (Medium): O(m*n)
7. Partition Equal Subset Sum (Medium): O(n*sum)
8. Edit Distance (Hard): O(m*n)
9. Burst Balloons (Hard): O(n^3)
10. Distinct Subsequences (Hard): O(m*n)
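As one hand-checkable example from the progression, here is a possible bottom-up sketch of Coin Change; the `coinChange` name follows the usual problem statement, and the details are illustrative rather than a reference solution.

```typescript
// State: dp[a] = minimum coins needed to make amount a.
// Transition: dp[a] = min over coins of dp[a - coin] + 1; Infinity = unreachable.
function coinChange(coins: number[], amount: number): number {
  const dp = new Array<number>(amount + 1).fill(Infinity)
  dp[0] = 0 // base case: zero coins make amount 0

  for (let a = 1; a <= amount; a += 1) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] + 1 < dp[a]) {
        dp[a] = dp[a - coin] + 1
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount]
}
```

This is the O(amount * coins) shape from the table: one pass over amounts, one inner pass over coin denominations.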