In computer science, sorting refers to arranging a collection of items in a specific, predetermined order, based on criteria defined beforehand.
For instance, numbers can be sorted in ascending or descending order, and strings can be sorted alphabetically.
Sorting isn't limited to numbers and strings. Virtually any type of object can be sorted, provided there is a clear definition of order for it.
What makes sorting worth caring about is that it quietly powers “everyday” features: search bars, leaderboards, timelines, deduplicating data, grouping logs, ranking recommendations, and making reports readable. If you can sort efficiently (and correctly), you can often make a system feel instantly faster and more polished, because so many tasks become simpler once data is ordered.
Stability, in the context of sorting, refers to preserving the relative order of equal items in the sorted output. In simple terms, when two items have equal keys, the one that appeared first in the input also appears first in the output.
Stability sounds like a “detail,” but it becomes a superpower the moment you sort objects by multiple keys. A common pattern is: sort by the secondary key first, then sort by the primary key. That second pass only works as intended if the second sort is stable, because stability preserves the secondary ordering within groups that tie on the primary key.
Let’s take an example list of medieval knights (each tagged with its original 0-based index):
[Lancelot0] [Gawain1] [Lancelot2] [Percival3] [Gawain4]
We’ll “sort” them by name, bringing all Lancelots to the front, then Gawain, then Percival.
A stable sort preserves the left-to-right order of equal items.
I. Bring every “Lancelot” to the front, in the order they appeared (index 0 before 2):
[Lancelot0] [Lancelot2] [Gawain1] [Percival3] [Gawain4]
II. Next, move the two “Gawain”s ahead of “Percival”, again preserving 1 before 4:
[Lancelot0] [Lancelot2] [Gawain1] [Gawain4] [Percival3]
So the stable-sorted sequence is:
[Lancelot0] [Lancelot2] [Gawain1] [Gawain4] [Percival3]
An unstable sort may reorder equal items arbitrarily.
I. When collecting the “Lancelot”s, it might pick index 2 before 0:
[Lancelot2] [Lancelot0] [Gawain1] [Percival3] [Gawain4]
II. Later, the two “Gawain”s might swap (4 before 1):
[Lancelot2] [Lancelot0] [Gawain4] [Gawain1] [Percival3]
So one possible unstable-sorted sequence is:
[Lancelot2] [Lancelot0] [Gawain4] [Gawain1] [Percival3]
If you then did a second pass (say, sorting by rank or battle honors), you’d want knights that tie in that pass to stay in the order the first pass established, something only a stable sort guarantees.
A quick rule of thumb: whenever you sort records with multiple fields (e.g., (name, age, score)), stability matters a lot more than people expect. And because many real systems sort records, stability is one of those “small” choices that saves you from weird bugs later.
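To make the two-pass pattern concrete, here is a minimal Python sketch (the knight data is invented for illustration; Python's built-in `sorted` is documented to be stable):

```python
# Sort by the secondary key first, then by the primary key.
# Because sorted() is stable, the second pass preserves the first pass's
# ordering among records that tie on the primary key.
knights = [
    {"name": "Lancelot", "rank": 2},
    {"name": "Gawain",   "rank": 1},
    {"name": "Lancelot", "rank": 1},
    {"name": "Percival", "rank": 2},
    {"name": "Gawain",   "rank": 2},
]

by_rank = sorted(knights, key=lambda k: k["rank"])            # pass 1: secondary key
by_name_then_rank = sorted(by_rank, key=lambda k: k["name"])  # pass 2: primary key

for k in by_name_then_rank:
    print(k["name"], k["rank"])
# Gawain 1, Gawain 2, Lancelot 1, Lancelot 2, Percival 2
```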
Bubble sort is one of the simplest sorting algorithms. It is often used as an introductory algorithm because it is easy to understand, even though it is not efficient for large datasets.
The name comes from the way larger elements "bubble up" to the top (end of the list), just as bubbles rise in water.
The basic idea:
Repeatedly step through the list, comparing each pair of adjacent elements and swapping them when they are out of order.
After each full pass, the largest remaining element has "bubbled" into its final position at the end, so the next pass can stop one position earlier.
Keep making passes until a pass completes with no swaps.
Bubble sort is valuable mostly as a teaching tool: it helps you build intuition for comparison-based sorting, swapping, and passes. In real programs, it’s rarely the best choice, but understanding why it’s slow teaches you what good sorting algorithms avoid (repeated full scans with tiny progress each pass).
Step-by-Step Walkthrough
Example Run
We will sort the array:
[ 5 ][ 1 ][ 4 ][ 2 ][ 8 ]
Pass 1
Compare adjacent pairs and push the largest to the end.
Initial: [ 5 ][ 1 ][ 4 ][ 2 ][ 8 ]
Compare 5 and 1 → swap
[ 1 ][ 5 ][ 4 ][ 2 ][ 8 ]
Compare 5 and 4 → swap
[ 1 ][ 4 ][ 5 ][ 2 ][ 8 ]
Compare 5 and 2 → swap
[ 1 ][ 4 ][ 2 ][ 5 ][ 8 ]
Compare 5 and 8 → no swap
[ 1 ][ 4 ][ 2 ][ 5 ][ 8 ]
✔ Largest element 8 has bubbled to the end.
Pass 2
Now we only need to check the first 4 elements.
Start: [ 1 ][ 4 ][ 2 ][ 5 ] [8]
Compare 1 and 4 → no swap
[ 1 ][ 4 ][ 2 ][ 5 ] [8]
Compare 4 and 2 → swap
[ 1 ][ 2 ][ 4 ][ 5 ] [8]
Compare 4 and 5 → no swap
[ 1 ][ 2 ][ 4 ][ 5 ] [8]
✔ Second largest element 5 is now in place.
Pass 3
Check only the first 3 elements.
Start: [ 1 ][ 2 ][ 4 ] [5][8]
Compare 1 and 2 → no swap
[ 1 ][ 2 ][ 4 ] [5][8]
Compare 2 and 4 → no swap
[ 1 ][ 2 ][ 4 ] [5][8]
✔ Sorted order is now reached.
Final Result
[ 1 ][ 2 ][ 4 ][ 5 ][ 8 ]
Visual Illustration of Bubble Effect
Here’s how the largest values "bubble up" to the right after each pass:
Pass 1: [ 5 1 4 2 8 ] → [ 1 4 2 5 8 ]
Pass 2: [ 1 4 2 5 ] → [ 1 2 4 5 ] [8]
Pass 3: [ 1 2 4 ] → [ 1 2 4 ] [5 8]
Sorted! ✅
Optimizations
One simple optimization: if a full pass completes without making a single swap, the array is already sorted and the algorithm can stop. This early exit is bubble sort’s one redeeming trick: when data is already nearly sorted, it doesn’t keep doing pointless work. That idea, detecting when you can stop early, shows up in far better algorithms too.
Stability
Bubble sort is stable: it only swaps adjacent elements that are strictly out of order, so equal elements never pass each other.
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Worst Case | $O(n^2)$ | Array in reverse order |
| Average Case | $O(n^2)$ | Typically quadratic comparisons |
| Best Case | $O(n)$ | Already sorted + early-exit optimization |
| Space | $O(1)$ | In-place, requires no extra memory |
Implementation
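Here's a minimal Python sketch of the approach described above, including the early-exit check (the function name and details are our own; treat it as an illustration rather than a canonical implementation):

```python
def bubble_sort(arr):
    """Sort arr in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # early exit: a pass with no swaps means we're done
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```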
Selection sort is another simple sorting algorithm, often introduced right after bubble sort because it is equally easy to understand.
Instead of repeatedly "bubbling" elements, selection sort works by repeatedly selecting the smallest (or largest) element from the unsorted portion of the array and placing it into its correct position.
Think of it like arranging books on a shelf: scan all the books for the shortest one and place it first, then scan the rest for the next shortest, and so on.
Selection sort is a nice contrast to bubble sort: bubble sort makes lots of small swaps, while selection sort makes very few swaps but still pays for a full scan each pass. That’s a useful engineering lesson: minimizing swaps doesn’t automatically mean you’re minimizing total work.
Step-by-Step Walkthrough
Example Run
We will sort the array:
[ 64 ][ 25 ][ 12 ][ 22 ][ 11 ]
Pass 1
Find the smallest element in the entire array and put it in the first position.
Initial: [ 64 ][ 25 ][ 12 ][ 22 ][ 11 ]
Smallest = 11
Swap 64 ↔ 11
Result: [ 11 ][ 25 ][ 12 ][ 22 ][ 64 ]
✔ The first element is now in its correct place.
Pass 2
Find the smallest element in the remaining unsorted region.
Start: [ 11 ][ 25 ][ 12 ][ 22 ][ 64 ]
Smallest in [25,12,22,64] = 12
Swap 25 ↔ 12
Result: [ 11 ][ 12 ][ 25 ][ 22 ][ 64 ]
✔ The second element is now in place.
Pass 3
Repeat for the next unsorted region.
Start: [ 11 ][ 12 ][ 25 ][ 22 ][ 64 ]
Smallest in [25,22,64] = 22
Swap 25 ↔ 22
Result: [ 11 ][ 12 ][ 22 ][ 25 ][ 64 ]
✔ The third element is now in place.
Pass 4
Finally, sort the last two.
Start: [ 11 ][ 12 ][ 22 ][ 25 ][ 64 ]
Smallest in [25,64] = 25
Already in correct place → no swap
Result: [ 11 ][ 12 ][ 22 ][ 25 ][ 64 ]
✔ Array fully sorted.
Final Result
[ 11 ][ 12 ][ 22 ][ 25 ][ 64 ]
Visual Illustration of Selection
Here’s how the sorted region expands from left to right:
Pass 1: [ 64 25 12 22 11 ] → [ 11 ] [ 25 12 22 64 ]
Pass 2: [ 11 ][ 25 12 22 64 ] → [ 11 12 ] [ 25 22 64 ]
Pass 3: [ 11 12 ][ 25 22 64 ] → [ 11 12 22 ] [ 25 64 ]
Pass 4: [ 11 12 22 ][ 25 64 ] → [ 11 12 22 25 ] [ 64 ]
At each step, the sorted region on the left grows by one element and the unsorted region shrinks by one.
Optimizations
Selection sort makes at most $n - 1$ swaps, which is why it sometimes appears in environments where writes are expensive (think: limited-write memory). Even there, it’s a trade: fewer writes, but still lots of comparisons.
Stability
Selection sort is not stable in its usual in-place form: swapping the minimum into place can jump an element over equal elements, changing their relative order.
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Worst Case | $O(n^2)$ | Scanning full unsorted region every pass |
| Average Case | $O(n^2)$ | Quadratic comparisons |
| Best Case | $O(n^2)$ | No improvement, still must scan every pass |
| Space | $O(1)$ | In-place sorting |
Implementation
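A minimal Python sketch of selection sort as described above (illustrative only; the function name is our own):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum of the unsorted region."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        # Scan the unsorted region arr[i+1:] for the smallest element.
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:  # at most one swap per pass
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```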
Insertion sort is a simple, intuitive sorting algorithm that works the way people often sort playing cards in their hands.
It builds the sorted portion one element at a time, by repeatedly taking the next element from the unsorted portion and inserting it into its correct position among the already sorted elements.
Insertion sort is the first “small” sort that feels genuinely useful in practice. Why? Because many real datasets are already almost sorted, and insertion sort loves that. It’s also a common helper inside faster algorithms: when the subproblems become tiny, insertion sort finishes them efficiently with low overhead.
The basic idea:
Treat the first element as a sorted region of size one.
Take the next element from the unsorted region and compare it with the sorted elements from right to left, shifting each larger element one position to the right.
Insert the element into the gap that opens up; the sorted region grows by one.
Repeat until no unsorted elements remain.
Example Run
We will sort the array:
[ 12 ][ 11 ][ 13 ][ 5 ][ 6 ]
Pass 1: Insert 11
Compare 11 with 12 → shift 12 right → insert 11 before it.
Before: [ 12 ][ 11 ][ 13 ][ 5 ][ 6 ]
Action: Insert 11 before 12
After: [ 11 ][ 12 ][ 13 ][ 5 ][ 6 ]
✔ Sorted portion: [11, 12]
Pass 2: Insert 13
Compare 13 with 12 → already greater → stays in place.
Before: [ 11 ][ 12 ][ 13 ][ 5 ][ 6 ]
After: [ 11 ][ 12 ][ 13 ][ 5 ][ 6 ]
✔ Sorted portion: [11, 12, 13]
Pass 3: Insert 5
Compare 5 with 13 → shift 13. Compare 5 with 12 → shift 12. Compare 5 with 11 → shift 11. Insert 5 at the start.
Before: [ 11 ][ 12 ][ 13 ][ 5 ][ 6 ]
Action: Move 13 → Move 12 → Move 11 → Insert 5
After: [ 5 ][ 11 ][ 12 ][ 13 ][ 6 ]
✔ Sorted portion: [5, 11, 12, 13]
Pass 4: Insert 6
Compare 6 with 13 → shift 13. Compare 6 with 12 → shift 12. Compare 6 with 11 → shift 11. Insert 6 after 5.
Before: [ 5 ][ 11 ][ 12 ][ 13 ][ 6 ]
Action: Move 13 → Move 12 → Move 11 → Insert 6
After: [ 5 ][ 6 ][ 11 ][ 12 ][ 13 ]
✔ Sorted!
Final Result
[ 5 ][ 6 ][ 11 ][ 12 ][ 13 ]
Visual Growth of Sorted Region
Start: [ 12 | 11 13 5 6 ]
Pass 1: [ 11 12 | 13 5 6 ]
Pass 2: [ 11 12 13 | 5 6 ]
Pass 3: [ 5 11 12 13 | 6 ]
Pass 4: [ 5 6 11 12 13 ]
✔ The bar ( | ) shows the boundary between sorted and unsorted.
Optimizations
A well-known optimization is binary insertion sort: use binary search to find each insertion point inside the sorted prefix, cutting comparisons from $O(n)$ to $O(\log n)$ per element. The subtle but important point is that binary search reduces comparisons, not movement. If moving elements is the expensive part (and it often is), the big win comes from having fewer shifts, which happens naturally when the array is already nearly sorted.
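As a hedged sketch of that idea, here is binary insertion sort using Python's standard `bisect` module (comparisons shrink, but list insertion still shifts elements):

```python
import bisect

def binary_insertion_sort(arr):
    """Insertion sort that finds each insertion point with binary search.
    Fewer comparisons, but inserting still moves O(n) elements per step."""
    for i in range(1, len(arr)):
        value = arr[i]
        # Locate where value belongs in the sorted prefix arr[:i].
        # bisect_right keeps equal elements in input order (stable).
        pos = bisect.bisect_right(arr, value, 0, i)
        del arr[i]
        arr.insert(pos, value)
    return arr

print(binary_insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```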
Stability
Insertion sort is stable (equal elements keep their relative order).
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Worst Case | $O(n^2)$ | Reverse-sorted input |
| Average Case | $O(n^2)$ | |
| Best Case | $O(n)$ | Already sorted input: only comparisons, no shifts |
| Space | $O(1)$ | In-place |
Implementation
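A minimal Python sketch of the standard (linear-scan) insertion sort walked through above (illustrative; names are our own):

```python
def insertion_sort(arr):
    """Sort arr in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift sorted-prefix elements greater than key one slot to the right,
        # then drop key into the gap. Strict > keeps the sort stable.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```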
Quick Sort is a divide-and-conquer algorithm. Unlike bubble sort or selection sort, which work by repeatedly scanning the whole array, Quick Sort works by partitioning the array into smaller sections around a "pivot" element and then sorting those sections independently.
It is one of the fastest sorting algorithms in practice, widely used in libraries and systems.
Quick sort’s magic is that it does a lot of work once up front (partitioning), and that one operation pays off by shrinking the problem dramatically. The danger is also in that same choice: if your pivots are consistently bad, you stop shrinking the problem effectively and performance collapses.
The basic idea:
Pick a pivot element (the walkthrough below uses the last element).
Rearrange (partition) the array so that:
All elements smaller than the pivot come before it.
All elements larger than the pivot come after it.
The pivot is now in its final sorted position.
Recursively apply the same steps to the subarrays on either side of the pivot.
Example Run
We will sort the array:
[ 10 ][ 80 ][ 30 ][ 90 ][ 40 ][ 50 ][ 70 ]
Step 1: Choose Pivot (last element = 70)
Partition around 70.
Initial: [ 10 ][ 80 ][ 30 ][ 90 ][ 40 ][ 50 ][ 70 ]
→ Elements < 70: [ 10, 30, 40, 50 ]
→ Pivot (70) goes here ↓
After partition: [ 10 ][ 30 ][ 40 ][ 50 ][ 70 ][ 90 ][ 80 ]
(ordering of right side may vary during partition; only pivot’s position is guaranteed)
✔ Pivot (70) is in its correct place.
Step 2: Left Subarray [10, 30, 40, 50]
Choose pivot = 50.
[ 10 ][ 30 ][ 40 ][ 50 ] → pivot = 50
→ Elements < 50: [10, 30, 40]
→ Pivot at correct place
Result: [ 10 ][ 30 ][ 40 ][ 50 ]
✔ Pivot (50) fixed.
Step 3: Left Subarray of Left [10, 30, 40]
Choose pivot = 40.
[ 10 ][ 30 ][ 40 ] → pivot = 40
→ Elements < 40: [10, 30]
→ Pivot at correct place
Result: [ 10 ][ 30 ][ 40 ]
✔ Pivot (40) fixed.
Step 4: [10, 30]
Choose pivot = 30.
[ 10 ][ 30 ] → pivot = 30
→ Elements < 30: [10]
Result: [ 10 ][ 30 ]
✔ Left side sorted.
Step 5: Right Subarray [90, 80]
Choose pivot = 80.
[ 90 ][ 80 ] → pivot = 80
→ Elements < 80: none
Swap 90 ↔ 80
Result: [ 80 ][ 90 ]
✔ Pivot (80) fixed; every subarray is now sorted.
Final Result
[ 10 ][ 30 ][ 40 ][ 50 ][ 70 ][ 80 ][ 90 ]
Visual Partition Illustration
Here’s how the array gets partitioned step by step:
Pass 1: [ 10 80 30 90 40 50 | 70 ]
↓ pivot = 70
[ 10 30 40 50 | 70 | 90 80 ]
Pass 2: [ 10 30 40 | 50 ] [70] [90 80]
↓ pivot = 50
[ 10 30 40 | 50 ] [70] [90 80]
Pass 3: [ 10 30 | 40 ] [50] [70] [90 80]
↓ pivot = 40
[ 10 30 | 40 ] [50] [70] [90 80]
Pass 4: [ 10 | 30 ] [40] [50] [70] [90 80]
↓ pivot = 30
[ 10 | 30 ] [40] [50] [70] [90 80]
Pass 5: [10] [30] [40] [50] [70] [ 90 | 80 ]
↓ pivot = 80
[10] [30] [40] [50] [70] [ 80 ] [90]
✔ Each pivot splits the problem smaller and smaller until fully sorted.
Optimizations
Common optimizations include picking the pivot more carefully (a random element or the median of three), switching to insertion sort for very small subarrays, and recursing into the smaller partition first to keep the stack shallow. These are not “nice-to-haves”; they’re what separates textbook quick sort from industrial quick sort. Real implementations work hard to avoid worst-case behavior, because worst-case quick sort can be painfully slow on already-sorted or adversarial inputs.
Stability
Quick sort is not stable: partitioning swaps elements across long distances, so equal elements can change their relative order.
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Worst Case | $O(n^2)$ | Poor pivot choices (e.g., always smallest/largest in sorted array) |
| Average Case | $O(n \log n)$ | Expected performance, very fast in practice |
| Best Case | $O(n \log n)$ | Balanced partitions |
| Space | $O(\log n)$ | Due to recursion stack |
Implementation
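A minimal Python sketch matching the walkthrough above (Lomuto-style partition with the last element as pivot; names and structure are our own):

```python
def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort using Lomuto partitioning with the last element as pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)   # sort elements left of the pivot
        quick_sort(arr, p + 1, hi)   # sort elements right of the pivot
    return arr

def partition(arr, lo, hi):
    pivot = arr[hi]
    i = lo                           # boundary of the "less than pivot" region
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # place the pivot in its final spot
    return i

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]
```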
Heap Sort is a comparison-based sorting algorithm that uses a special data structure called a binary heap. It is efficient, with guaranteed $O(n \log n)$ performance, and sorts in-place (no extra array needed).
Heap sort is what you reach for when you want predictability. It doesn’t have quick sort’s “usually blazing fast, occasionally terrible” personality. It’s consistently good, and that reliability matters in systems where worst-case latency is a big deal.
The basic idea:
Build a max heap from the input array.
In a max heap, every parent is greater than its children.
This ensures the largest element is at the root (first index).
Swap the root (largest element) with the last element of the heap; the largest value is now in its final position.
Shrink the heap by one and sift the new root down (heapify) to restore the max-heap property.
Repeat until the heap is empty; the array is then sorted in ascending order.
Example Run
We will sort the array:
[ 4 ][ 10 ][ 3 ][ 5 ][ 1 ]
Step 1: Build Max Heap
Binary tree view:
Heap:
4
/ \
10 3
/ \
5 1
Heapify → Largest at top:
Heap:
10
/ \
5 3
/ \
4 1
Array: [ 10 ][ 5 ][ 3 ][ 4 ][ 1 ]
Step 2: Swap Root with Last
Swap 10 ↔ 1 → largest (10) moves to correct final place.
[ 1 ][ 5 ][ 3 ][ 4 ][ 10 ]
Heapify the reduced heap [1,5,3,4]:
Heap:
5
/ \
4 3
/
1
Array: [ 5 ][ 4 ][ 3 ][ 1 ][ 10 ]
Step 3: Swap Root with Last
Swap 5 ↔ 1.
[ 1 ][ 4 ][ 3 ][ 5 ][ 10 ]
Heapify reduced heap [1,4,3]:
Heap:
4
/ \
1 3
Array: [ 4 ][ 1 ][ 3 ][ 5 ][ 10 ]
Step 4: Swap Root with Last
Swap 4 ↔ 3.
[ 3 ][ 1 ][ 4 ][ 5 ][ 10 ]
Heapify reduced heap [3,1]:
Heap:
3
/
1
Array: [ 3 ][ 1 ][ 4 ][ 5 ][ 10 ]
Step 5: Swap Root with Last
Swap 3 ↔ 1.
[ 1 ][ 3 ][ 4 ][ 5 ][ 10 ]
✔ Sorted array achieved.
Final Result
[ 1 ][ 3 ][ 4 ][ 5 ][ 10 ]
Visual Progress
Initial: [ 4 10 3 5 1 ]
Heapify: [ 10 5 3 4 1 ]
Step 1: [ 5 4 3 1 | 10 ]
Step 2: [ 4 1 3 | 5 10 ]
Step 3: [ 3 1 | 4 5 10 ]
Step 4: [ 1 | 3 4 5 10 ]
Sorted: [ 1 3 4 5 10 ]
✔ Each step places the largest element into its correct final position.
Optimizations
Stability
Heap sort is not stable. Equal elements may not preserve their original order because of swaps.
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Worst Case | $O(n \log n)$ | |
| Average Case | $O(n \log n)$ | |
| Best Case | $O(n \log n)$ | No early exit possible |
| Space | $O(1)$ | In-place |
Implementation
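A minimal Python sketch of heap sort as described above (the heap is built bottom-up; function names are our own and this is illustrative, not canonical):

```python
def heap_sort(arr):
    """In-place heap sort: build a max heap, then repeatedly move the root
    (the largest remaining element) to the end of the array."""
    n = len(arr)

    def sift_down(root, size):
        # Push arr[root] down until the max-heap property holds within arr[:size].
        while True:
            largest = root
            left, right = 2 * root + 1, 2 * root + 2
            if left < size and arr[left] > arr[largest]:
                largest = left
            if right < size and arr[right] > arr[largest]:
                largest = right
            if largest == root:
                return
            arr[root], arr[largest] = arr[largest], arr[root]
            root = largest

    # Build the max heap bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)

    # Repeatedly swap the root with the last heap element and shrink the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heap_sort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]
```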
Radix Sort is a non-comparison-based sorting algorithm. Instead of comparing elements directly, it processes numbers digit by digit, from either the least significant digit (LSD) or the most significant digit (MSD), using a stable intermediate sorting algorithm (commonly Counting Sort).
Radix sort is the “cheat code” you unlock when your values have structure (digits, bytes, fixed-length keys). Comparisons are expensive because they only tell you “less than or greater than.” Radix sort uses richer information: digits let you bucket items directly. That’s how it reaches near-linear performance for fixed-width integers.
Because it avoids comparisons, Radix Sort can achieve linear time complexity in many cases.
The basic idea:
Find the number of digits in the largest value.
For each digit position, starting from the least significant (LSD), sort the entire array by that digit using a stable sort (commonly Counting Sort).
After the pass on the most significant digit, the array is fully sorted.
Example Run (LSD Radix Sort)
We will sort the array:
[ 170 ][ 45 ][ 75 ][ 90 ][ 802 ][ 24 ][ 2 ][ 66 ]
Step 1: Sort by 1s place (units digit)
Original: [170, 45, 75, 90, 802, 24, 2, 66]
By 1s digit:
[170][90] (0)
[802][2] (2)
[24] (4)
[45][75] (5)
[66] (6)
Result: [170][90][802][2][24][45][75][66]
Step 2: Sort by 10s place
[170][90][802][2][24][45][75][66]
By 10s digit:
[802][2] (0)
[24] (2)
[45] (4)
[66] (6)
[170][75] (7)
[90] (9)
Result: [802][2][24][45][66][170][75][90]
Step 3: Sort by 100s place
[802][2][24][45][66][170][75][90]
By 100s digit:
[2][24][45][66][75][90] (0)
[170] (1)
[802] (8)
Result: [2][24][45][66][75][90][170][802]
Final Result
[ 2 ][ 24 ][ 45 ][ 66 ][ 75 ][ 90 ][ 170 ][ 802 ]
Visual Process
Step 1 (1s): [170 90 802 2 24 45 75 66]
Step 2 (10s): [802 2 24 45 66 170 75 90]
Step 3 (100s): [2 24 45 66 75 90 170 802]
✔ Each pass groups by digit → final sorted order.
LSD vs MSD
LSD (least significant digit first) processes digits from right to left and relies on every pass being stable; it is the simpler and more common variant, and the one shown above. MSD (most significant digit first) works from left to right, typically recursing into buckets, and is often preferred for variable-length keys such as strings.
Stability
Radix Sort is stable as long as each per-digit pass uses a stable sort such as Counting Sort; the LSD variant depends on that stability for correctness.
Complexity
Time Complexity: $O(n \cdot k)$
$n$ = number of elements
$k$ = number of digits (or max digit length)
Space Complexity: $O(n + k)$ (depends on the stable sorting method used, e.g., Counting Sort).
For integers with a fixed number of digits, Radix Sort can be considered linear time.
Implementation
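A minimal Python sketch of LSD radix sort for non-negative integers, using a stable counting sort on each decimal digit (illustrative; names are our own):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers."""
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1                      # 1s, then 10s, then 100s, ...
    while max_val // exp > 0:
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

def counting_sort_by_digit(arr, exp):
    count = [0] * 10
    for num in arr:
        count[(num // exp) % 10] += 1
    # Prefix sums: count[d] = number of elements whose current digit is <= d.
    for d in range(1, 10):
        count[d] += count[d - 1]
    output = [0] * len(arr)
    # Walk right-to-left so equal digits keep their order (stability).
    for num in reversed(arr):
        d = (num // exp) % 10
        count[d] -= 1
        output[count[d]] = num
    return output

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```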
Counting Sort is a non-comparison-based sorting algorithm that works by counting occurrences of each distinct element and then calculating their positions in the output array.
It is especially efficient when the values are non-negative integers drawn from a small, known range, i.e., when the range $k$ is not much larger than the number of elements $n$.
Counting sort is the engine that often makes radix sort practical. It’s incredibly fast when the value range is reasonable, but it’s not a general-purpose sort: it relies on keys being integers in a known range. When it fits, it feels like you’re sorting by flipping a few switches rather than doing a lot of comparisons.
The basic idea:
Count how many times each distinct value occurs, storing the counts in a count array indexed by value.
Modify the count array to store prefix sums (cumulative counts).
This gives the final position of each element.
Place elements into the output array, using the count array to look up each element's position.
Example Run
We will sort the array:
[ 4 ][ 2 ][ 2 ][ 8 ][ 3 ][ 3 ][ 1 ]
Step 1: Count Frequencies
Elements: 1 2 3 4 5 6 7 8
Counts: 1 2 2 1 0 0 0 1
Step 2: Prefix Sums
Elements: 1 2 3 4 5 6 7 8
Counts: 1 3 5 6 6 6 6 7
✔ Each prefix sum tells us how many elements are less than or equal to that value, which is exactly one past the last index where that value will be placed.
Step 3: Place Elements
Process input from right → left (for stability).
Input: [4,2,2,8,3,3,1]
Place 1 → index 0
Place 3 → index 4
Place 3 → index 3
Place 8 → index 6
Place 2 → index 2
Place 2 → index 1
Place 4 → index 5
Final Result
[ 1 ][ 2 ][ 2 ][ 3 ][ 3 ][ 4 ][ 8 ]
Visual Process
Step 1 Count: [0,1,2,2,1,0,0,0,1]
Step 2 Prefix: [0,1,3,5,6,6,6,6,7]
Step 3 Output: [1,2,2,3,3,4,8]
✔ Linear-time sorting by counting positions.
Stability
Counting Sort is stable if we place elements from right to left into the output array.
Complexity
| Case | Time Complexity | Notes |
|------|-----------------|-------|
| Overall | $O(n + k)$ | $n$ = number of elements, $k$ = value range |
| Space | $O(n + k)$ | Extra array for counts + output |
Implementation
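A minimal Python sketch of counting sort for non-negative integers, following the three steps above (illustrative; names are our own):

```python
def counting_sort(arr):
    """Stable counting sort for non-negative integers with a modest range."""
    if not arr:
        return arr
    k = max(arr)
    count = [0] * (k + 1)
    for num in arr:                    # Step 1: count frequencies
        count[num] += 1
    for v in range(1, k + 1):          # Step 2: prefix sums
        count[v] += count[v - 1]
    output = [0] * len(arr)
    for num in reversed(arr):          # Step 3: place right-to-left (stable)
        count[num] -= 1
        output[count[num]] = num
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```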
Below is a consolidated side-by-side comparison of all the sorts we’ve covered so far:
Before using any table like this, remember what it can and can’t tell you. Big-O is a great compass, but not a GPS: constant factors, memory behavior, stability requirements, and input shape (random vs nearly sorted vs adversarial) all affect real-world speed. The most “correct” choice is the one that matches your constraints, not the one with the prettiest asymptotic line.
| Algorithm | Best Case | Average Case | Worst Case | Space | Stable? | Notes |
|-----------|-----------|--------------|------------|-------|---------|-------|
| Bubble Sort | $O(n)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | Yes | Simple, slow |
| Selection Sort | $O(n^2)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | No | Few swaps |
| Insertion Sort | $O(n)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | Yes | Good for small inputs |
| Quick Sort | $O(n \log n)$ | $O(n \log n)$ | $O(n^2)$ | $O(\log n)$ | No | Very fast in practice |
| Heap Sort | $O(n \log n)$ | $O(n \log n)$ | $O(n \log n)$ | $O(1)$ | No | Guaranteed performance |
| Counting Sort | $O(n + k)$ | $O(n + k)$ | $O(n + k)$ | $O(n + k)$ | Yes | Integers only |
| Radix Sort | $O(nk)$ | $O(nk)$ | $O(nk)$ | $O(n + k)$ | Yes | Uses Counting Sort |
Use insertion sort for tiny or nearly-sorted arrays, quick sort for general high speed (with good pivot strategy), heap sort for worst-case guarantees, and counting/radix when your keys make it possible.