
Data Structure · Hard

Heap / Priority Queue Complete Guide

A priority queue (usually implemented as a binary heap) always exposes the minimum or maximum element quickly.

It is ideal when you repeatedly need the next-best candidate rather than a full sort at every step.

Many scheduling and shortest-path algorithms depend on priority queues for practical speed.

Typical Complexity Baseline

Metric   | Value
Push/pop | O(log n)
Peek     | O(1)

Core Concepts

Learn the core building blocks and terminology in one place before the comparisons below, so the mechanics are clear when they reappear.

Heap array

What it is: Compact tree representation in a contiguous array.

Why it matters: Parent and child positions are computed arithmetically (the parent of index i is ⌊(i-1)/2⌋; its children are 2i+1 and 2i+2), so no pointers are stored and traversal is cache-friendly.

Comparator

What it is: Defines min-heap or max-heap ordering.

Why it matters: Every sift decision goes through the comparator, so flipping it converts a min-heap into a max-heap without touching any other code.

Sift-up

What it is: Restores the heap property after an insert.

Why it matters: A newly appended element only needs to be compared against its ancestors, so each push costs O(log n) instead of a rebuild.

Sift-down

What it is: Restores the heap property after a pop or replace.

Why it matters: Moving the last element to the root and sifting it down repairs the invariant in O(log n), which is what keeps pop cheap.

Heap invariant

What it is: Parent priority dominates children.

Why it matters: Because the invariant holds at every node, the root is always the global extremum, which is what makes O(1) peek possible.

Min-heap

What it is: Smallest key stored at the root/top.

Why it matters: The natural fit when you repeatedly need the cheapest candidate, as in Dijkstra's frontier expansion or bounded top-k.

Max-heap

What it is: Largest key stored at the root/top.

Why it matters: The natural fit when you repeatedly need the highest-scoring candidate; in languages with only a min-heap, negating keys achieves the same ordering.

Heapify

What it is: Builds a heap from an unsorted array in O(n).

Why it matters: Bottom-up heapify beats n individual pushes (O(n log n)), which matters when you start from a full dataset rather than a stream.

Top-k

What it is: Returns the k best elements under a ranking rule.

Why it matters: A min-heap bounded to size k answers top-k in O(n log k) time and O(k) memory, avoiding a full sort.

Decrease-key

What it is: A priority-update operation in some heap variants.

Why it matters: Plain binary heaps do not support it directly; practical code usually pushes a duplicate entry and skips stale ones on pop (lazy deletion), or uses an indexed heap.
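The O(n) heapify described above can be sketched as a bottom-up build. This is a minimal illustration for this guide (the standalone `heapify` function is written here, not taken from a library):

```typescript
// Bottom-up heapify: sift down every internal node, starting at the
// last parent and moving toward the root. Total work is O(n) because
// most nodes sit near the leaves and sift only a short distance.
function heapify(values: number[]): number[] {
  const siftDown = (start: number): void => {
    let parent = start
    while (true) {
      const left = parent * 2 + 1
      const right = parent * 2 + 2
      let smallest = parent
      if (left < values.length && values[left] < values[smallest]) smallest = left
      if (right < values.length && values[right] < values[smallest]) smallest = right
      if (smallest === parent) break
      ;[values[parent], values[smallest]] = [values[smallest], values[parent]]
      parent = smallest
    }
  }
  // The parent of the last element is the last internal node.
  for (let i = Math.floor(values.length / 2) - 1; i >= 0; i--) {
    siftDown(i)
  }
  return values
}

// heapify([8, 5, 2, 11]) yields [2, 5, 8, 11] — a valid min-heap
```

Starting from the last parent means leaves are skipped entirely, which is where the O(n) bound comes from.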

Putting It All Together

This walkthrough connects the core concepts of Heap / Priority Queue into one end-to-end execution flow.

Step 1

Heap array + sift-up (push)

Insert into the compact array representation, then restore the heap property by bubbling up.

Before

Heap array: empty
  • Push priorities 5, 2, 8

After

Heap array: [2, 5, 8]
  • Peek gives the smallest in O(1)

Transition

Insert at the end
Bubble up while the parent is larger (min-heap)

Why this step matters: Appending plus sift-up makes each push O(log n) while keeping the smallest element at index 0 for O(1) peek.
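The push path in this step can be sketched as a standalone function (a minimal sketch; `pushMin` is an illustrative name, not a library call):

```typescript
// Push values into a min-heap array: append, then bubble up while
// the parent is larger than the new element.
function pushMin(heap: number[], value: number): number[] {
  heap.push(value)
  let child = heap.length - 1
  while (child > 0) {
    const parent = Math.floor((child - 1) / 2)
    if (heap[parent] <= heap[child]) break
    ;[heap[parent], heap[child]] = [heap[child], heap[parent]]
    child = parent
  }
  return heap
}

const heap: number[] = []
for (const v of [5, 2, 8]) pushMin(heap, v)
// heap is now [2, 5, 8]; peek (heap[0]) is the smallest in O(1)
```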

Step 2

Sift-down (pop)

Remove the root, then restore the heap property by sifting the replacement down.

Before

Heap array: [2, 5, 8, 11]
  • Operation: pop()

After

Heap array: [5, 11, 8]
  • Popped: 2

Transition

Remove the root
Move the last element to the root and sift it down

Why this step matters: Sift-down repairs the invariant in O(log n), so repeated pops stay cheap as the heap grows.
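The pop shown in this step, as a standalone sketch (`popMin` is an illustrative name written for this guide):

```typescript
// Pop the min: take the root, move the last element to the root,
// then sift down until the min-heap invariant holds again.
function popMin(heap: number[]): number | null {
  if (heap.length === 0) return null
  const root = heap[0]
  const last = heap.pop()!
  if (heap.length > 0) {
    heap[0] = last
    let parent = 0
    while (true) {
      const left = parent * 2 + 1
      const right = parent * 2 + 2
      let smallest = parent
      if (left < heap.length && heap[left] < heap[smallest]) smallest = left
      if (right < heap.length && heap[right] < heap[smallest]) smallest = right
      if (smallest === parent) break
      ;[heap[parent], heap[smallest]] = [heap[smallest], heap[parent]]
      parent = smallest
    }
  }
  return root
}

const state = [2, 5, 8, 11]
const popped = popMin(state)
// popped is 2; state is now [5, 11, 8], matching the Before/After above
```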

Step 3

Top-k with a bounded min-heap

Track the k largest elements of a stream while storing only k candidates.

Before

  • Stream of values
  • Need the top 3 largest

After

  • Heap stores the best 3 candidates
  • Efficient O(n log k)

Transition

Keep a min-heap of size 3
Pop whenever the size exceeds 3

Why this step matters: Bounding the heap to k elements turns a full-sort problem into O(n log k) time with O(k) memory.
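The bounded-heap pattern from this step, as a hedged sketch (`topK` is an illustrative helper; the sift logic is inlined so the example stands alone):

```typescript
// Keep the k largest values from a stream using a min-heap of size k:
// the heap root is the weakest of the current best k candidates.
function topK(stream: number[], k: number): number[] {
  const heap: number[] = []
  const swap = (i: number, j: number) => { const t = heap[i]; heap[i] = heap[j]; heap[j] = t }
  const siftUp = (i: number) => {
    while (i > 0) {
      const p = Math.floor((i - 1) / 2)
      if (heap[p] <= heap[i]) break
      swap(p, i)
      i = p
    }
  }
  const siftDown = (i: number) => {
    while (true) {
      const l = 2 * i + 1
      const r = 2 * i + 2
      let s = i
      if (l < heap.length && heap[l] < heap[s]) s = l
      if (r < heap.length && heap[r] < heap[s]) s = r
      if (s === i) break
      swap(s, i)
      i = s
    }
  }
  for (const value of stream) {
    heap.push(value)
    siftUp(heap.length - 1)
    if (heap.length > k) {
      // Evict the smallest of the k+1 candidates: it cannot be in the top k.
      swap(0, heap.length - 1)
      heap.pop()
      siftDown(0)
    }
  }
  return [...heap].sort((a, b) => b - a) // largest first, O(k log k)
}

// topK([5, 1, 9, 3, 7, 2], 3) yields [9, 7, 5]
```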

Step 4

Priority-queue frontier (Dijkstra)

Expand the cheapest frontier node first, skipping entries whose recorded distance is out of date.

Before

  • Dijkstra frontier entries
  • Each node has a current distance

After

  • The next relaxation starts from the cheapest node
  • Shortest-path invariant maintained

Transition

Always pop the smallest distance
Skip stale entries

Why this step matters: Popping the cheapest node first is what lets Dijkstra finalize distances in the correct order; skipping stale entries stands in for decrease-key on a plain binary heap.
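The stale-entry idea from this step, sketched end to end. This is a sketch, not a tuned implementation: extract-min is a linear scan here for brevity, where a real version would use a binary heap; the `Edge` type and `dijkstra` name are coined for this example.

```typescript
// Dijkstra with lazy deletion: instead of decrease-key, push a fresh
// (distance, node) entry and skip any popped entry whose recorded
// distance no longer matches the best known distance.
type Edge = { to: number; weight: number }

function dijkstra(graph: Edge[][], source: number): number[] {
  const dist = new Array<number>(graph.length).fill(Infinity)
  dist[source] = 0
  // Frontier of [distance, node] pairs.
  const frontier: Array<[number, number]> = [[0, source]]
  while (frontier.length > 0) {
    // Extract-min (linear scan stand-in for a heap pop).
    let best = 0
    for (let i = 1; i < frontier.length; i++) {
      if (frontier[i][0] < frontier[best][0]) best = i
    }
    const [d, node] = frontier.splice(best, 1)[0]
    if (d > dist[node]) continue // stale entry: skip it
    for (const { to, weight } of graph[node]) {
      if (d + weight < dist[to]) {
        dist[to] = d + weight
        frontier.push([dist[to], to]) // duplicate entries are fine
      }
    }
  }
  return dist
}
```

The key line is the staleness check: because duplicates are allowed in, every popped entry must be validated against the current best distance before it is expanded.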

How It Compares

vs Sorted array

When to choose this: Choose heap for interleaved insert + extract-top operations.

Tradeoff: Sorted array is simpler if data is static and query-only.

vs Hash map

When to choose this: Choose heap for ordered-by-priority retrieval.

Tradeoff: Hash map cannot return min/max efficiently.

vs Balanced tree

When to choose this: Choose heap for minimal overhead top extraction.

Tradeoff: Tree supports ordered iteration and arbitrary deletions better.

Real-World Stories and Company Examples

Kubernetes

Scheduler-style systems rank candidate nodes/tasks and repeatedly choose the highest-priority next action.

Takeaway: Priority queues formalize best-next selection under changing load.

Google Maps

Shortest path variants use min-priority queue to expand nearest frontier first.

Takeaway: Heap-driven frontier selection keeps path search efficient.

Apache Kafka ecosystem

Some stream processing jobs use priority queues for watermark/event-time ordering.

Takeaway: Priority structures help enforce processing order constraints.

Implementation Guide

Implement the heap array and sift-up/sift-down operations yourself so enqueue/dequeue behavior is explicit.

Complexity: Push O(log n), Pop O(log n), Peek O(1)

Min-heap priority queue from scratch

class MinHeap {
  private values: number[] = []

  // Append the value, then restore the invariant along the root path.
  push(value: number): void {
    this.values.push(value)
    this.siftUp(this.values.length - 1)
  }

  // Remove and return the smallest value, or null when the heap is empty.
  pop(): number | null {
    if (this.values.length === 0) {
      return null
    }

    const rootValue = this.values[0]
    const lastValue = this.values.pop()!
    if (this.values.length > 0) {
      // Move the last leaf to the root, then sift it down into place.
      this.values[0] = lastValue
      this.siftDown(0)
    }

    return rootValue
  }

  // Read the smallest value in O(1) without removing it.
  peek(): number | null {
    return this.values.length > 0 ? this.values[0] : null
  }

  // Bubble a value toward the root while its parent is larger.
  private siftUp(startIndex: number): void {
    let childIndex = startIndex
    while (childIndex > 0) {
      const parentIndex = Math.floor((childIndex - 1) / 2)
      if (this.values[parentIndex] <= this.values[childIndex]) {
        break
      }
      ;[this.values[parentIndex], this.values[childIndex]] = [this.values[childIndex], this.values[parentIndex]]
      childIndex = parentIndex
    }
  }

  // Push a value toward the leaves while either child is smaller.
  private siftDown(startIndex: number): void {
    let parentIndex = startIndex

    while (true) {
      const leftChildIndex = parentIndex * 2 + 1
      const rightChildIndex = parentIndex * 2 + 2
      let smallestIndex = parentIndex

      if (leftChildIndex < this.values.length && this.values[leftChildIndex] < this.values[smallestIndex]) {
        smallestIndex = leftChildIndex
      }
      if (rightChildIndex < this.values.length && this.values[rightChildIndex] < this.values[smallestIndex]) {
        smallestIndex = rightChildIndex
      }
      if (smallestIndex === parentIndex) {
        break
      }

      ;[this.values[parentIndex], this.values[smallestIndex]] = [this.values[smallestIndex], this.values[parentIndex]]
      parentIndex = smallestIndex
    }
  }
}
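The number-only class above can be generalized by injecting the comparator, which also shows how flipping it yields a max-heap. An illustrative sketch (`BinaryHeap` is a name coined here, not a library type):

```typescript
// Same sift logic as MinHeap, but ordering is delegated to a comparator:
// before(a, b) returns true when a should sit closer to the root.
class BinaryHeap<T> {
  private items: T[] = []

  constructor(private before: (a: T, b: T) => boolean) {}

  get size(): number { return this.items.length }

  push(item: T): void {
    this.items.push(item)
    let child = this.items.length - 1
    while (child > 0) {
      const parent = Math.floor((child - 1) / 2)
      if (!this.before(this.items[child], this.items[parent])) break
      ;[this.items[parent], this.items[child]] = [this.items[child], this.items[parent]]
      child = parent
    }
  }

  pop(): T | null {
    if (this.items.length === 0) return null
    const root = this.items[0]
    const last = this.items.pop()!
    if (this.items.length > 0) {
      this.items[0] = last
      let parent = 0
      while (true) {
        const left = 2 * parent + 1
        const right = 2 * parent + 2
        let best = parent
        if (left < this.items.length && this.before(this.items[left], this.items[best])) best = left
        if (right < this.items.length && this.before(this.items[right], this.items[best])) best = right
        if (best === parent) break
        ;[this.items[parent], this.items[best]] = [this.items[best], this.items[parent]]
        parent = best
      }
    }
    return root
  }
}

// Max-heap: flip the comparator instead of rewriting any sift logic.
const maxHeap = new BinaryHeap<number>((a, b) => a > b)
for (const v of [3, 10, 1]) maxHeap.push(v)
// maxHeap.pop() yields 10, then 3, then 1
```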

Common Problems and Failure Modes

  • Using wrong comparator direction (min vs max).
  • Forgetting stale entries when priorities update externally.
  • Assuming heap provides full sorted traversal.
  • Inefficient repeated heap rebuilds instead of incremental push/pop.
  • Not bounding heap growth in stream systems.

Tips and Tricks

  • Use heap when you repeatedly need the next-best element, not full sorting.
  • For top-k, min-heap of size k is usually the most memory-efficient pattern.
  • When priorities can update, handle stale entries instead of mutating in place.
  • Pick min-heap vs max-heap explicitly before implementing comparisons.

When to Use

Use these signals to decide if this data structure/algorithm is the right fit before implementation.

Real-system usage signals

  • Fast top element access O(1) with insert/remove O(log n).
  • Efficient for top-k, stream merging, and task scheduling.
  • Reduces repeated full-sort overhead in iterative workflows.

LeetCode-specific tips (including pattern-identification signals)

  • Identification signal: repeatedly choose the current smallest/largest or next-best candidate while new items arrive.
  • Identification signal: top-k or streaming ranking appears, where full re-sort each step is too expensive.
  • Scheduling, shortest-path frontier expansion, and merge-k workflows usually point to heap/priority-queue.
  • For Heap / Priority Queue questions, start by naming the core invariant before writing code.
  • Use the constraint section to set time/space target first, then pick the data structure/algorithm.
  • Solve one tiny example by hand and map each transition to your variables before implementing.
  • Run adversarial cases: empty input, duplicates, max-size input, and sorted/reverse patterns when relevant.
  • During interviews, explain why your approach is the right pattern for this prompt, not just why the code works.
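The merge-k workflow flagged in the identification signals above can be sketched with one cursor per list in a min-heap (an illustrative sketch; `mergeK` and `Cursor` are names invented here):

```typescript
// Merge k sorted arrays by keeping one cursor per array in a min-heap
// keyed on the cursor's current value: O(n log k) total for n items.
type Cursor = { value: number; list: number; index: number }

function mergeK(lists: number[][]): number[] {
  const heap: Cursor[] = []
  const less = (a: Cursor, b: Cursor) => a.value < b.value
  const push = (c: Cursor) => {
    heap.push(c)
    let i = heap.length - 1
    while (i > 0) {
      const p = Math.floor((i - 1) / 2)
      if (!less(heap[i], heap[p])) break
      ;[heap[p], heap[i]] = [heap[i], heap[p]]
      i = p
    }
  }
  const pop = (): Cursor => {
    const root = heap[0]
    const last = heap.pop()!
    if (heap.length > 0) {
      heap[0] = last
      let i = 0
      while (true) {
        const l = 2 * i + 1
        const r = 2 * i + 2
        let s = i
        if (l < heap.length && less(heap[l], heap[s])) s = l
        if (r < heap.length && less(heap[r], heap[s])) s = r
        if (s === i) break
        ;[heap[i], heap[s]] = [heap[s], heap[i]]
        i = s
      }
    }
    return root
  }
  // Seed the heap with the head of each non-empty list.
  for (let list = 0; list < lists.length; list++) {
    if (lists[list].length > 0) push({ value: lists[list][0], list, index: 0 })
  }
  const out: number[] = []
  while (heap.length > 0) {
    const { value, list, index } = pop()
    out.push(value)
    // Advance the cursor of the list the value came from.
    if (index + 1 < lists[list].length) {
      push({ value: lists[list][index + 1], list, index: index + 1 })
    }
  }
  return out
}

// mergeK([[1, 4, 7], [2, 5], [3, 6]]) yields [1, 2, 3, 4, 5, 6, 7]
```

The heap never holds more than k cursors, which is why each of the n pops costs O(log k) rather than O(log n).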

LeetCode Progression (Easy to Hard)

#  | Problem                           | Difficulty | Typical Complexity
1  | Kth Largest Element in a Stream   | Easy       | O(log k)
2  | Last Stone Weight                 | Easy       | O(n log n)
3  | Top K Frequent Elements           | Medium     | O(n log k)
4  | K Closest Points to Origin        | Medium     | O(n log k)
5  | Task Scheduler                    | Medium     | O(n log n)
6  | Find K Pairs with Smallest Sums   | Medium     | O(k log k)
7  | Merge k Sorted Lists              | Hard       | O(n log k)
8  | Find Median from Data Stream      | Hard       | O(log n)
9  | IPO                               | Hard       | O(n log n)
10 | Minimum Number of Refueling Stops | Hard       | O(n log n)