Heaps Visually Explained (Priority Queues)
Channel: ByteQuest
Full video link:
https://www.youtube.com/watch?v=XycnarZEBvQ
Data Structure • hard
A priority queue (usually implemented as a heap) always exposes the min or max element quickly.
It is ideal when you repeatedly need the next-best candidate rather than a full sort at every step.
Many scheduling and shortest-path algorithms depend on priority queues for practical speed.
Typical Complexity Baseline
| Metric | Value |
|---|---|
| Push/Pop | O(log n) |
| Peek | O(1) |
Prefer video learning? This explainer gives a quick visual walkthrough of the core idea before you dive into the detailed sections below.
Learn the core building blocks and terminology in one place before the comparisons, so the mechanics are clear.
Heap array
What it is: Compact tree representation in contiguous array.
Why it matters: Storing the tree in one contiguous array gives cache-friendly traversal with no pointers; parent and child positions are recovered by index arithmetic alone.
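The tree-to-array mapping is pure arithmetic. A minimal sketch (the helper names `parentOf`, `leftChildOf`, and `rightChildOf` are illustrative, not from the original):

```typescript
// For a node stored at index i of the heap array:
// parent sits at floor((i - 1) / 2), children at 2i + 1 and 2i + 2.
function parentOf(i: number): number {
  return Math.floor((i - 1) / 2)
}

function leftChildOf(i: number): number {
  return 2 * i + 1
}

function rightChildOf(i: number): number {
  return 2 * i + 2
}
```

Because navigation is plain index computation, the structure needs no node objects or pointers at all.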
Comparator
What it is: Defines min-heap or max-heap ordering.
Why it matters: Swapping the comparator flips a min-heap into a max-heap without touching the sift logic, so one implementation serves both orderings.
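One way to see the comparator's role, sketched with an illustrative `Comparator` type (not part of the original): the sift logic only ever asks "does a outrank b?", so flipping the comparator flips the heap's orientation.

```typescript
type Comparator<T> = (a: T, b: T) => number

// Negative result means a should sit above b in the heap.
const minFirst: Comparator<number> = (a, b) => a - b
const maxFirst: Comparator<number> = (a, b) => b - a

// The single question every sift step asks of the comparator.
function outranks<T>(cmp: Comparator<T>, a: T, b: T): boolean {
  return cmp(a, b) < 0
}
```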
Sift-up
What it is: Restore heap property after insert.
Why it matters: Sift-up bubbles a newly inserted element toward the root, traversing at most the tree height, which is what keeps push at O(log n).
Sift-down
What it is: Restore heap property after pop/replace.
Why it matters: Sift-down repairs the heap after the root is replaced by the last element, bounding pop at O(log n).
Heap invariant
What it is: Parent priority dominates children.
Why it matters: The invariant guarantees the best element is always at the root, which is exactly why peek runs in O(1).
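The invariant can be checked mechanically. A small validator sketch (the function name `isMinHeap` is illustrative):

```typescript
// Verifies the min-heap invariant: every parent <= both of its children.
// Iterating over children and checking each one's parent covers all edges.
function isMinHeap(values: number[]): boolean {
  for (let i = 1; i < values.length; i++) {
    const parent = Math.floor((i - 1) / 2)
    if (values[parent] > values[i]) {
      return false
    }
  }
  return true
}
```

A checker like this is handy in tests: run it after every push and pop to catch a broken sift early.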
Min-heap
What it is: Smallest key stored at root/top.
Why it matters: A min-heap serves any workload that repeatedly extracts the smallest key, such as Dijkstra's algorithm or merging sorted streams.
Max-heap
What it is: Largest key stored at root/top.
Why it matters: A max-heap serves workloads that repeatedly extract the largest key, such as heap sort or bounded top-k-smallest selection.
Heapify
What it is: Build heap from unsorted array in O(n).
Why it matters: Building bottom-up with sift-down costs O(n) overall, asymptotically cheaper than n individual pushes at O(n log n).
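The O(n) bound comes from sifting down every internal node, starting at the last parent and moving toward the root. A standalone sketch (`heapify` and `siftDown` here are illustrative free functions):

```typescript
// In-place O(n) min-heapify: sift down each internal node,
// from the last parent back to the root.
function heapify(values: number[]): void {
  for (let i = Math.floor(values.length / 2) - 1; i >= 0; i--) {
    siftDown(values, i)
  }
}

// Standard sift-down: push values[start] toward the leaves until
// neither child is smaller.
function siftDown(values: number[], start: number): void {
  let parent = start
  while (true) {
    const left = 2 * parent + 1
    const right = 2 * parent + 2
    let smallest = parent
    if (left < values.length && values[left] < values[smallest]) smallest = left
    if (right < values.length && values[right] < values[smallest]) smallest = right
    if (smallest === parent) break
    ;[values[parent], values[smallest]] = [values[smallest], values[parent]]
    parent = smallest
  }
}
```

Most nodes sit near the leaves and sift only a level or two, which is why the total work sums to O(n) rather than O(n log n).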
Top-k
What it is: Return k best elements under ranking rule.
Why it matters: A heap bounded at size k yields the top k of n elements in O(n log k) time and O(k) space, with no full sort.
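The key idea is keeping only the k best survivors and evicting the worst of them when a better candidate arrives. A simplified sketch (all names illustrative) that uses a small sorted buffer for clarity; a real implementation would use a size-k min-heap so each update costs O(log k) instead of O(k):

```typescript
// Top-k largest: retain only the k best candidates seen so far.
function topK(values: number[], k: number): number[] {
  const best: number[] = [] // kept sorted ascending: worst survivor first
  for (const v of values) {
    if (best.length < k) {
      best.push(v)
      best.sort((a, b) => a - b)
    } else if (v > best[0]) {
      best[0] = v // evict the current worst survivor
      best.sort((a, b) => a - b)
    }
  }
  return best
}
```

The eviction test against the worst survivor is identical to peeking at a min-heap's root, which is why swapping the buffer for a heap changes only the constant structure, not the logic.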
Decrease-key
What it is: Priority update operation in some heap variants.
Why it matters: Decrease-key lets graph algorithms lower a vertex's priority in place; binary heaps typically emulate it with re-insertion plus lazy deletion.
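One common emulation is lazy deletion: push a fresh entry for the updated key and skip stale entries on pop. A sketch (class and method names are illustrative; the queue here uses a naive linear scan for its minimum, which a real implementation would replace with a heap):

```typescript
class LazyMinPQ {
  private entries: Array<[string, number]> = []
  private latest = new Map<string, number>()

  // push doubles as decrease-key: re-insert with the new priority
  // and record it as the freshest priority for this key.
  push(key: string, priority: number): void {
    this.entries.push([key, priority])
    this.latest.set(key, priority)
  }

  popMin(): string | null {
    while (this.entries.length > 0) {
      let best = 0
      for (let i = 1; i < this.entries.length; i++) {
        if (this.entries[i][1] < this.entries[best][1]) best = i
      }
      const [key, priority] = this.entries.splice(best, 1)[0]
      if (this.latest.get(key) === priority) {
        this.latest.delete(key)
        return key // freshest entry for this key wins
      }
      // stale entry superseded by a later decrease-key: skip it
    }
    return null
  }
}
```

The cost of laziness is extra stale entries in the queue; the benefit is never having to locate or move an element inside the heap.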
This walkthrough connects the core concepts of Heap / Priority Queue into one end-to-end execution flow.
Step 1
Compact tree representation in contiguous array.
Why this step matters: the contiguous array is the substrate every later operation indexes into with parent/child arithmetic.
Step 2
Defines min-heap or max-heap ordering.
Why this step matters: the comparator fixes which element counts as "best" before any sifting happens, and flipping it turns a min-heap into a max-heap.
Step 3
Restore heap property after insert.
Why this step matters: without sift-up, an insert could leave a child outranking its parent, silently breaking the heap invariant.
Step 4
Restore heap property after pop/replace.
Why this step matters: sift-down repairs the hole left at the root so the very next pop still returns the true best element.
vs Sorted array
When to choose this: Choose heap for interleaved insert + extract-top operations.
Tradeoff: Sorted array is simpler if data is static and query-only.
vs Hash map
When to choose this: Choose heap for ordered-by-priority retrieval.
Tradeoff: Hash map cannot return min/max efficiently.
vs Balanced tree
When to choose this: Choose heap for minimal overhead top extraction.
Tradeoff: Tree supports ordered iteration and arbitrary deletions better.
Kubernetes
Scheduler-style systems rank candidate nodes/tasks and repeatedly choose the highest-priority next action.
Takeaway: Priority queues formalize best-next selection under changing load.
Google Maps
Shortest path variants use min-priority queue to expand nearest frontier first.
Takeaway: Heap-driven frontier selection keeps path search efficient.
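A minimal sketch of the frontier idea (the graph shape and all names are illustrative; the priority queue here is a naive array scanned linearly for its minimum, which a production implementation would replace with a binary heap for O(log n) extraction):

```typescript
type Graph = Record<string, Array<[string, number]>>

// Dijkstra sketch: always expand the unsettled node with the
// smallest tentative distance (the "nearest frontier" node).
function shortestDistances(graph: Graph, source: string): Record<string, number> {
  const dist: Record<string, number> = { [source]: 0 }
  const frontier: string[] = [source] // naive PQ: linear scan for the minimum
  const settled = new Set<string>()
  while (frontier.length > 0) {
    let best = 0
    for (let i = 1; i < frontier.length; i++) {
      if (dist[frontier[i]] < dist[frontier[best]]) best = i
    }
    const node = frontier.splice(best, 1)[0]
    if (settled.has(node)) continue // duplicate frontier entry, already final
    settled.add(node)
    for (const [next, weight] of graph[node] ?? []) {
      const candidate = dist[node] + weight
      if (dist[next] === undefined || candidate < dist[next]) {
        dist[next] = candidate
        frontier.push(next)
      }
    }
  }
  return dist
}
```

Only the frontier-selection loop changes when a heap is substituted; the relaxation of edge weights is identical either way.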
Apache Kafka ecosystem
Some stream processing jobs use priority queues for watermark/event-time ordering.
Takeaway: Priority structures help enforce processing order constraints.
Implement the heap array and sift-up/sift-down operations yourself so enqueue/dequeue behavior is explicit.
Complexity: Push O(log n), Pop O(log n), Peek O(1)
Min-heap priority queue from scratch
```typescript
class MinHeap {
  private values: number[] = []

  push(value: number): void {
    this.values.push(value)
    this.siftUp(this.values.length - 1)
  }

  pop(): number | null {
    if (this.values.length === 0) {
      return null
    }
    const rootValue = this.values[0]
    const lastValue = this.values.pop()!
    if (this.values.length > 0) {
      this.values[0] = lastValue
      this.siftDown(0)
    }
    return rootValue
  }

  peek(): number | null {
    return this.values.length > 0 ? this.values[0] : null
  }

  private siftUp(startIndex: number): void {
    let childIndex = startIndex
    while (childIndex > 0) {
      const parentIndex = Math.floor((childIndex - 1) / 2)
      if (this.values[parentIndex] <= this.values[childIndex]) {
        break
      }
      ;[this.values[parentIndex], this.values[childIndex]] = [this.values[childIndex], this.values[parentIndex]]
      childIndex = parentIndex
    }
  }

  private siftDown(startIndex: number): void {
    let parentIndex = startIndex
    while (true) {
      const leftChildIndex = parentIndex * 2 + 1
      const rightChildIndex = parentIndex * 2 + 2
      let smallestIndex = parentIndex
      if (leftChildIndex < this.values.length && this.values[leftChildIndex] < this.values[smallestIndex]) {
        smallestIndex = leftChildIndex
      }
      if (rightChildIndex < this.values.length && this.values[rightChildIndex] < this.values[smallestIndex]) {
        smallestIndex = rightChildIndex
      }
      if (smallestIndex === parentIndex) {
        break
      }
      ;[this.values[parentIndex], this.values[smallestIndex]] = [this.values[smallestIndex], this.values[parentIndex]]
      parentIndex = smallestIndex
    }
  }
}
```
Use these signals to decide if this data structure/algorithm is the right fit before implementation.
You repeatedly need the current min/max in O(1) while inserts and removals stay O(log n).

| # | Problem | Difficulty | Typical Complexity |
|---|---|---|---|
| 1 | Kth Largest Element in a Stream | Easy | O(log k) |
| 2 | Last Stone Weight | Easy | O(n log n) |
| 3 | Top K Frequent Elements | Medium | O(n log k) |
| 4 | K Closest Points to Origin | Medium | O(n log k) |
| 5 | Task Scheduler | Medium | O(n log n) |
| 6 | Find K Pairs with Smallest Sums | Medium | O(k log k) |
| 7 | Merge k Sorted Lists | Hard | O(n log k) |
| 8 | Find Median from Data Stream | Hard | O(log n) |
| 9 | IPO | Hard | O(n log n) |
| 10 | Minimum Number of Refueling Stops | Hard | O(n log n) |