Sunday 22 February 2026, 08:23 PM
A gentle introduction to algorithms
Algorithms are clear, repeatable steps for solving problems efficiently. With a feel for data structures, Big-O growth, searching and sorting, recursion, and practical trade-offs, you can start thinking algorithmically.
What is an algorithm?
At its core, an algorithm is just a set of clear, repeatable steps to solve a problem. If you’ve ever followed a recipe, sorted your bookshelf, or picked the shortest line at the grocery store, you’ve used algorithms. In computing, we turn those steps into instructions a computer can follow.
That’s it. No secret handshake. No mystical math. Just steps.
When people talk about algorithms sounding fancy or intimidating, it often boils down to two questions:
- What are the steps?
- How fast (or efficient) are they?
If you can answer those, you’re already thinking like an algorithm designer.
Why algorithms matter in everyday life
You probably care about your phone’s battery, your app’s load time, or getting directions that avoid traffic. Algorithms make all of that happen efficiently. They help:
- Your email app filter spam without missing your friend’s note.
- Your music app find songs like the ones you love.
- Your GPS find the quickest route home.
- Your photos app group faces without you labeling each one.
Even if you’re not writing code every day, understanding algorithms helps you think about problems in a structured, calm way. That skill is universally useful.
Ingredients: data structures at a glance
Algorithms and data structures are best friends. Data structures hold your stuff; algorithms decide how to use it.
Here are a few:
- Array or list: An ordered lineup of items. Great for scanning from start to finish.
- Hash table or dictionary: A set of key-value pairs. Great for quick lookups by a key (like a username).
- Stack: A “last in, first out” structure. Like a stack of plates: you take the top one first.
- Queue: A “first in, first out” line. Like waiting to order coffee.
- Tree: A branching structure, like folders within folders.
- Graph: A web of connections. Think of cities (nodes) and roads (edges).
Picking the right structure can turn a slow solution into a fast one, even before you write any clever code.
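As a tiny illustration (with made-up usernames), here’s the same lookup done against a list and against a dictionary:
import random

# Hypothetical user data: names mapped to signup years.
users_list = [("ada", 2019), ("grace", 2020), ("alan", 2021)]
users_dict = {"ada": 2019, "grace": 2020, "alan": 2021}

def year_from_list(name):
    # Scans every pair until the name matches: slow for big lists.
    for user, year in users_list:
        if user == name:
            return year
    return None

# The dictionary jumps straight to the key, however large it grows.
year = users_dict.get("grace")  # 2020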
Measuring efficiency without the math
We care about two things: time and space. Time is how long the algorithm takes; space is how much memory it uses.
You’ll often hear “Big O” notation—like O(n) or O(log n). It’s a simple way to compare how algorithms scale as input grows.
- O(1): Constant time. No matter how much data you have, it’s about the same time (like checking the first item in a list).
- O(n): Linear time. If you double the data, you double the work (like scanning a list).
- O(log n): Logarithmic time. Grows slowly as data grows (like halving a search space again and again).
- O(n log n): Very common for efficient sorting.
- O(n^2): Quadratic time. Often means nested loops. Fine for small data, slow for large.
You don’t need to memorize formulas. Just know the shape of growth:
- Linear is usually fine.
- Logarithmic is excellent.
- Quadratic gets painful at scale.
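To see those shapes concretely, here’s a tiny sketch (illustrative step counts, not real timings) that prints how each growth rate scales:
import math

# Columns: n, log n, n log n, n^2 -- watch how each one grows.
for n in (10, 1_000, 1_000_000):
    print(n, round(math.log2(n)), round(n * math.log2(n)), n ** 2)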
A friendly tour of searching
Imagine you’re looking for a contact named “Zoey” in your phone.
- Linear search: Scroll from the top to the bottom until you find Zoey. Simple. Works on any list. Time: O(n).
- Binary search: Jump to the middle. If the name you want comes earlier in the alphabet, ignore the second half. Repeat on the first half. Time: O(log n). But this only works if the list is sorted.
Here’s what that might look like in code. We’ll use Python for readability.
Linear search:
def linear_search(arr, target):
    for i, item in enumerate(arr):
        if item == target:
            return i  # found: return its index
    return -1  # not found
Binary search (assuming the array is sorted):
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid  # found
        elif arr[mid] < target:
            left = mid + 1  # target must be in the right half
        else:
            right = mid - 1  # target must be in the left half
    return -1  # not found
When does binary search win? With large, sorted lists. If your data isn’t sorted or changes constantly, linear search might be simpler and good enough. Real-world trade-offs matter.
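In day-to-day Python you often don’t hand-roll binary search at all; the standard library’s bisect module implements it. A quick sketch on a sorted list of names:
import bisect

names = ["Amy", "Bo", "Cara", "Zoey"]  # already sorted
i = bisect.bisect_left(names, "Zoey")  # index where "Zoey" would be inserted
found = i < len(names) and names[i] == "Zoey"
print(found)  # True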
Sorting made simple
Sorting is a foundation of so many tasks—like faster searching, deduping, and grouping. There are many sorting algorithms, but here are the vibes:
- Bubble sort: Compares neighboring items and swaps if out of order. Simple to explain, slow in practice (O(n^2)).
- Insertion sort: Builds a sorted list one item at a time by inserting into the right place. Great for small or nearly sorted data (still O(n^2) worst case but often snappy for small n).
- Merge sort: Splits the list, sorts the halves, and merges them. Reliable O(n log n).
- Quick sort: Picks a pivot and partitions items around it. Usually O(n log n), but can be O(n^2) in worst cases.
- Timsort: A hybrid used in Python’s sort. Optimized for real-world patterns like partially sorted data.
If you’re doing everyday coding, you’ll usually call the language’s built-in sort. But it’s still good to know the trade-offs.
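In Python, for instance, that built-in is sorted() or list.sort(), both Timsort under the hood:
words = ["pear", "fig", "banana"]
print(sorted(words))           # ['banana', 'fig', 'pear']
print(sorted(words, key=len))  # ['fig', 'pear', 'banana'] -- sorted by length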
Here’s insertion sort, just to see the mechanics:
def insertion_sort(arr):
    arr = arr[:]  # make a copy if you want to keep the original
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Move elements greater than key one position ahead
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
Why show this? Because it’s intuitive: pick up a card and insert it into the right spot among the cards in your hand. If your data is small or almost sorted, this simple approach can be perfectly fine.
Recursion without fear
Recursion is when a function calls itself to break a big problem into smaller ones. It sounds fancy, but you already think recursively. When you clean a messy room, you may organize by areas: desk, closet, shelves. Each area is a smaller version of “tidy this space.” That’s recursion.
A classic example is computing factorials (n! = n × (n-1) × ... × 1):
def factorial(n):
    if n <= 1:
        return 1  # base case
    return n * factorial(n - 1)  # recursive case
Key ideas:
- Base case: the simplest version the function can solve directly.
- Progress: each call works on a smaller problem that moves toward the base case.
When to use recursion:
- When the problem naturally breaks into similar subproblems (like trees or nested structures).
- When it keeps the code simple and readable.
When to be careful:
- Deep recursion can use a lot of memory (each call adds a “frame” on the call stack).
- Sometimes an iterative approach (using a loop) is safer for very deep problems.
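For example, the factorial above can be rewritten as a loop; it gives the same answer without growing the call stack:
def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):  # multiply 2 * 3 * ... * n
        result *= k
    return result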
Greedy, divide-and-conquer, and dynamic programming made approachable
These are problem-solving styles you’ll see again and again.
- Greedy: Make the best local choice now and hope it leads to a global optimum. Example: picking the fewest coins to make change by always taking the largest possible coin (sketched in code after this list). Works well for many problems, but not all.
- Divide-and-conquer: Split the problem into parts, solve each, and combine them. Merge sort is the poster child: split the list, sort the halves, merge them.
- Dynamic programming: When a problem overlaps with itself (the same subproblems appear repeatedly), remember solutions so you don’t redo work. This is like writing down recipes so you don’t keep reinventing dinner.
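Here’s a minimal sketch of that greedy coin idea, assuming denominations (like US coins) for which the greedy choice happens to be optimal; with other coin systems this same approach can return more coins than necessary:
def greedy_change(amount, coins=(25, 10, 5, 1)):
    # coins must be sorted largest to smallest
    result = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]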
A tiny dynamic programming example: Fibonacci numbers. Naive recursion recalculates the same values again and again. Memoization remembers results.
def fib(n, memo=None):
    if memo is None:
        memo = {}  # cache of already-computed results
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
The key mental trick: if you find yourself solving the same smaller problem multiple times, store the result and reuse it.
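In everyday Python you can get the same memoization with the standard library’s functools.lru_cache decorator:
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every result
def fib_cached(n):
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)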
Algorithms in the real world
Let’s ground this with familiar examples.
- Maps and navigation: Shortest-path algorithms (like Dijkstra’s) find the quickest route by treating intersections as nodes and roads as edges with travel costs.
- Recommendations: Collaborative filtering and ranking algorithms find patterns in what people like and surface similar content.
- Search engines: Inverted indexes let systems quickly find documents containing your keywords. Ranking algorithms decide which results you see first.
- Compression: Algorithms shrink files and images by spotting patterns (like repeated bytes or colors) and representing them more efficiently.
- Cryptography: Secure messaging relies on math-based algorithms that are easy to compute one way but hard to reverse without a key.
- Scheduling: From delivery trucks to cloud computing, algorithms decide who gets what resource when, to minimize waiting and maximize throughput.
- Gaming and simulations: Pathfinding, decision trees, and heuristics help characters move and act intelligently.
You don’t have to master each algorithm to appreciate them. Knowing they exist helps you ask better questions and pick the right tool.
How to think algorithmically
You don’t become “algorithmic” by memorizing names. You get there by practicing how to break problems down. Try this flow:
- Restate the problem in plain words.
- Define inputs and outputs clearly.
- Think of a brute-force approach first. It’s okay if it’s slow—get something working.
- Look for structure:
- Is the data sorted?
- Are there repeated subproblems?
- Can I split it into parts?
- Can I preprocess to make queries faster?
- Choose a data structure that makes common operations cheap.
- Sketch the steps with pseudocode or a simple story.
- Implement, test on tiny cases, then scale up.
A small example: You’re given a list of numbers, and you need to answer “how many are greater than X?” many times.
- Brute force: For each query X, scan the list and count. O(n) per query.
- Better: Sort the list once, then use binary search to find the first number greater than X. Each query becomes O(log n). You paid O(n log n) once to sort, then queries are fast.
This kind of trade-off—preprocess once, answer fast later—is a recurring theme.
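Here’s a minimal sketch of that pattern, using the bisect module on the sorted list:
import bisect

def make_counter(numbers):
    sorted_nums = sorted(numbers)  # O(n log n), paid once
    def count_greater(x):
        # everything to the right of x's rightmost position is greater
        return len(sorted_nums) - bisect.bisect_right(sorted_nums, x)
    return count_greater

count = make_counter([5, 1, 9, 3, 7])
print(count(4))  # 3 (namely 5, 7, and 9)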
Pseudocode is your friend
Before writing code in a language, outline the logic. Pseudocode is informal and focuses on steps, not syntax. For example, binary search in pseudocode:
- Set left to 0 and right to length - 1
- While left <= right:
  - mid = floor((left + right) / 2)
  - If arr[mid] == target: return mid
  - Else if arr[mid] < target: left = mid + 1
  - Else: right = mid - 1
- Return not found
Writing like this keeps you from getting stuck on semicolons and parentheses when what you really need is a clear plan.
Common mistakes and how to avoid them
- Skipping the simple solution: Jumping to a fancy approach before you have anything working. Start simple; you can always optimize.
- Ignoring edge cases: Empty lists, duplicates, negative numbers, extremely large values. Ask: what could break this?
- Misusing data structures: Using a list for frequent lookups by key will be slow; a dictionary might be better.
- Over-optimizing early: Profile before polishing. Often the bottleneck isn’t where you think it is.
- Confusing readability with cleverness: Clear, boring code is good code. Future-you will thank present-you.
- Forgetting invariants: An invariant is something that remains true throughout the algorithm (like “left side is always sorted”). Stating it explicitly helps prevent bugs.
Practicing without being overwhelmed
You don’t need a thousand problems to “get” algorithms. A small, well-chosen set of exercises can build real intuition:
- Searching: Implement linear and binary search. Try them on different inputs and sizes.
- Sorting: Implement insertion sort, then use your language’s built-in sort and compare.
- Hashing: Use a dictionary to count word frequencies in a paragraph.
- Stacks and queues: Check balanced parentheses using a stack (see the sketch after this list).
- Trees: Write a function to compute the height of a tree recursively.
- Graphs: Represent a small map as a graph and run a breadth-first search to find the shortest path when edges are unweighted.
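Here’s a minimal sketch of the balanced-parentheses exercise from the list above:
def is_balanced(text):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)  # remember the opener
        elif ch in pairs:
            # a closer must match the most recent opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers mean unbalanced

print(is_balanced("([{}])"))  # True
print(is_balanced("(]"))      # False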
As you practice, narrate your thinking. “I need to quickly check if I’ve seen this before. That sounds like a set.” Naming the structure helps you choose the algorithm.
When “fast” is fast enough
Speed is contextual. If you’re sorting 50 numbers once, any reasonable method will be fine. If you’re sorting 50 million numbers on a server that does it every minute, even small improvements matter.
Ask:
- How big is the data?
- How often will this run?
- Is the data changing frequently?
- Can I preprocess once and reuse the result?
- What does “fast enough” mean for this problem?
This mindset keeps you focused on practical improvements rather than abstract perfection.
Reading code is learning too
Try reading reference implementations of simple algorithms. You’ll see patterns:
- Clear variable names: left, right, mid, count, seen
- Early exits: return quickly when a condition is met
- Small helper functions: isolate logic so each piece is easy to test
Then re-implement from memory. You’ll be surprised how much you internalize by doing both.
A few micro patterns to keep in your toolkit
- Two pointers: Keep two indices moving through a list to find pairs or maintain a window. Great for subarray sums, finding duplicates in sorted arrays, or merging.
- Sliding window: Maintain a window of elements and move it along, updating counts as you go. Great for “longest substring without repeats” or moving averages.
- Precompute and reuse: Prefix sums let you compute the sum of any range quickly after an O(n) setup step (sketched after this list).
- Bucketing: Group values into buckets to simplify comparisons (e.g., counting sort for small integer ranges).
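Here’s what the prefix-sum pattern from that list looks like as a minimal sketch:
from itertools import accumulate

nums = [3, 1, 4, 1, 5, 9]
prefix = [0, *accumulate(nums)]  # prefix[i] is the sum of nums[:i]

def range_sum(i, j):
    # sum of nums[i:j] in O(1) after the O(n) setup
    return prefix[j] - prefix[i]

print(range_sum(1, 4))  # 1 + 4 + 1 = 6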
These are like kitchen techniques: once you know them, recipes get easier.
Bringing it all together
Algorithms aren’t about being the smartest person in the room. They’re about having a calm, step-by-step way to tackle problems. You:
- Define the problem clearly.
- Choose structures that fit your access patterns.
- Start with a brute-force plan.
- Look for patterns to optimize (sorted data, overlapping subproblems, natural splits).
- Test with tiny examples, then scale up.
- Favor clear, readable code over clever tricks.
Over time, you’ll spot the same shapes in different problems, and your toolbox will grow naturally. That’s the gentle truth of algorithms: they’re learnable, they’re useful, and they’re already closer to how you think than you might realize.
So the next time “algorithms” comes up, imagine a good recipe card: a clear list of steps, just the right tools, and a dash of practical wisdom. You’re more ready than you think.