Algorithmic bathwater
The belief that algorithmic tech interviews do not test for real-world programming skills is a memetic hazard. There may be truth to it, but it often serves as displacement from the real underlying reason for resistance: the fear of negative judgement.
Without passing judgement on whether these tests are a good way to assess skill level, it’s crucial to realise that dismissing them has drawbacks. These problems often embody different ways of thinking and require novel techniques or hard-won algorithms, and deliberately practising them can be a valuable way to improve your problem-solving skills. While uncomfortable, acknowledging this offers us an opportunity to grow.
Not going in blind
Mastering a few fundamental problems can make you more effective at a wide range of challenges, but how do you go about truly understanding these ‘eigenproblems’?
It’s commonly heard that if you ever need an algorithm you’ll “just Google it”. However, this belief is quite suspect: you’re unlikely to reach for Google without awareness of the techniques that might be applicable. A foundational understanding helps you recognise opportunities for application in the first place.
The creator of the “Blind 75” problem set, an engineer at Meta, attempted to distill the most useful questions for learning the core concepts and techniques of each category of problem, drawing on their experience of doing 400-500 LeetCode questions during their last job hunt.
75 questions is still a very large time commitment, and in reality it’s even more time-consuming than it looks: fluency generally requires spaced repetition, doing a variety of topics each week and revisiting them in subsequent weeks.
We want the ability to identify and apply the right techniques when given an unknown problem, and this is about grasping the intent, design, and decision-making involved. This ability can certainly be picked up incidentally by doing lots of problems and eventually beginning to pattern-match them, but I’d argue that this isn’t the most efficient way to learn, as much of the knowledge doesn’t appear to be naturally tacit.
Current methods of teaching often focus on mere implementation, overlooking the decision-making skills needed to identify the right technique for a problem. This article aims to fill that gap by articulating what is often left unsaid. While it won’t replace hands-on experience with the nuances of implementation, it should be helpful in improving your understanding of technique ‘applicability’.
Note
 We’re intentionally not discussing the implementation details of these techniques, as this is already well-covered elsewhere. For deep explanations, video tutorials and code examples, we recommend NeetCode, Grind 75 and LeetCode or otherwise searching Google.
 This article is a work in progress and I’ll be adding more to it over time. Please email me with any insights or suggestions you have for improvement.
What kind of problem do I have?

Is the problem related to a linear/sequential data structure (e.g. array, linked list, string, search space)?

Are we being asked to find pairs, triplets or subarrays that match a constraint?

To solve this optimally (e.g. $O(n)$), we need to be able to easily compare elements in the sequence without using nested loops. We have at most two options:

Ensuring the sequence is sorted and then applying the Two Pointers technique:

We can avoid the need for a nested loop if the sequence is sorted, as we can operate upon the sequence using two explicit pointers, one at the start and one at the end. The pointers clarify which elements are being considered for the condition being evaluated, and on each iteration we choose how to move these pointers based on the result of a comparison.

This is particularly useful when a deterministic order matters or when we are looking for unique pairs/triplets.
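As a minimal sketch (the pair-with-target-sum framing and function name are my own), Two Pointers on a sorted array avoids the nested loop entirely:

```python
def pair_with_sum(nums, target):
    """Find indices of two values in a sorted list that sum to target."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1      # need a larger sum: advance the left pointer
        else:
            hi -= 1      # need a smaller sum: retreat the right pointer
    return None

# pair_with_sum([1, 3, 4, 6, 9], 10) returns (0, 4), since 1 + 9 == 10
```

Each comparison discards one element from consideration, giving a single $O(n)$ pass.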


Using a Hash Map:
 This allows us to store seen elements and to then look up the complement of the current element in the hash table. It’d be a more appropriate choice when the sequence is unsorted and sorting it would break problem constraints, or when we need to remember past elements for comparison but don’t care about their order.
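A sketch of the unsorted case (again using the two-sum framing as an assumed example):

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, in one pass."""
    seen = {}  # value -> index of elements processed so far
    for i, x in enumerate(nums):
        if target - x in seen:          # is the complement already seen?
            return seen[target - x], i
        seen[x] = i
    return None
```

Because the hash map remembers past elements, no sorting (and no reordering of indices) is needed.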



Are we being asked to find a longest/shortest substring or subsequence that matches a particular constraint, or to compute aggregate statistics over subsequences of a particular length?

To solve this optimally, we want to be able to reuse the computations from previous windows so that we can update our answer for each window in constant time (e.g. $O(1)$) and the whole sequence can be completed in linear time (e.g. $O(n)$).

We can do this by applying the Sliding Window technique, either with a fixed window size or a dynamic window size that can expand, shrink or reset based on certain conditions. Note that these are not implemented like Two Pointers problems, with explicit pointers to the start and end of the window; instead they generally use a single loop iterator and some additional variables to keep track of the window’s characteristics. This is a subtle but important distinction from the two-pointer technique.
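A sketch of a dynamic window (the longest-substring-without-repeats framing is my own choice of example): a single loop iterator marks the window’s right edge, while `start` tracks its left edge.

```python
def longest_unique_substring(s):
    """Length of the longest substring with all distinct characters."""
    last_seen = {}   # char -> most recent index
    start = 0        # left edge of the current window
    best = 0
    for i, ch in enumerate(s):
        # if ch repeats inside the window, shrink the window past it
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best
```

Each character enters and leaves the window at most once, so the whole pass is $O(n)$.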


Are we being asked to find a target value or minimum/maximum value in this sequence?

If the sequence is unsorted, there is no way to do better than a linear scan of the sequence (e.g. $O(n)$).

But, if the sequence is sorted we can apply the Binary Search technique.

In fact, we can apply the Binary Search technique if the sequence is partially sorted as long as it is partitionable (e.g. we can discern a sorted and unsorted partition on each iteration). For example, we can apply binary search to rotated sorted arrays or bitonic sequences.

We can also apply the Binary Search technique to compute numerical approximations if a sequence is ordered but virtual (e.g. a search space). For example, to approximate the square root of a number.
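A sketch of both flavours (function names are my own): classic binary search over a list, and binary search over a virtual, continuous search space to approximate a square root.

```python
def binary_search(nums, target):
    """Index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1

def approx_sqrt(n, eps=1e-6):
    """Binary search over [0, n] to approximate sqrt(n), for n >= 1."""
    lo, hi = 0.0, float(n)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid
        else:
            hi = mid
    return lo
```

In the second function there is no array at all; the ‘sequence’ is the ordered range of candidate answers.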


Are we being asked to detect cycles in a linked list or to find the middle element?

Use the Fast-and-Slow Pointers technique. If the pointers meet, there is a cycle.

If we need to find the middle element, we can rely on the fact that the slow pointer will be at the middle element when the fast pointer reaches the end of the list.
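A sketch of both uses on a minimal linked list (the `Node` class is my own scaffolding):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def has_cycle(head):
    """Floyd's cycle detection: fast moves two steps per slow step."""
    slow = fast = head
    while fast and fast.nxt:
        slow, fast = slow.nxt, fast.nxt.nxt
        if slow is fast:        # pointers meet -> cycle
            return True
    return False

def middle(head):
    """When fast hits the end, slow is at the middle node."""
    slow = fast = head
    while fast and fast.nxt:
        slow, fast = slow.nxt, fast.nxt.nxt
    return slow
```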


Are we being asked to find duplicates in a sequence?

If the sequence is sorted, then duplicate items will be next to each other and we can find them by comparing adjacent elements in the sequence.

If the sequence is unsorted, we can add elements that we see into a Set or Hash Map and then check whether the current element is within this.
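A sketch of the unsorted case (function name is my own):

```python
def first_duplicate(nums):
    """Return the first value seen twice in an unsorted sequence, or None."""
    seen = set()
    for x in nums:
        if x in seen:     # O(1) membership check
            return x
        seen.add(x)
    return None
```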


Does the problem require frequent and ordered access to recently processed elements?

In situations in which you need access to elements in some dynamic order, you cannot just use a for loop to iterate through elements in a fixed order; you will likely need to append elements into a dynamically ordered sequence maintained within a Stack or Queue.
If we need to access the most recently processed elements we can use a Stack. This has $O(1)$ access at the end where elements are appended.

Note that ‘stack’ is a bit of an overloaded term, as it refers to any last-in first-out (LIFO) structure. Stacks are often arrays (or linked lists), but this isn’t necessarily the case. For example, when executing programs we keep a stack of frames within a block of contiguous memory.

I think it can also be helpful to think of how other data structures can have stack-like access patterns. For example, given a decimal integer we can ‘peek’ at its last digit using % 10 and then ‘pop’ it using // 10.


If we need to access the least recently processed elements we can use a Queue. This has $O(1)$ access at its start, at the opposite end to where elements are appended.
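A sketch contrasting the two access patterns in Python, including the integer-digits example from above (in CPython, `collections.deque` is the usual queue, since `list.pop(0)` is $O(n)$):

```python
from collections import deque

# Stack: last-in first-out -- append and pop at the same end.
stack = []
stack.append('a'); stack.append('b'); stack.append('c')
assert stack.pop() == 'c'          # most recently processed first

# Queue: first-in first-out -- append at one end, pop at the other.
queue = deque()
queue.append('a'); queue.append('b'); queue.append('c')
assert queue.popleft() == 'a'      # least recently processed first

# An integer's digits also have stack-like access:
n = 1234
assert n % 10 == 4                 # 'peek' at the last digit
assert n // 10 == 123              # 'pop' it
```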


Are we being asked to merge sorted lists?
 Use a Min-Heap.
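A sketch of the k-way merge using Python’s `heapq` (function name is my own):

```python
import heapq

def merge_sorted(lists):
    """Merge k sorted lists by repeatedly popping the global minimum."""
    # seed the heap with the head of each non-empty list;
    # tuples are (value, list index, position within that list)
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)      # O(k) build; each push/pop is O(log k)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```

The heap only ever holds one candidate per list, so merging $n$ total elements costs $O(n \log k)$.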


Does the problem involve categorizing or identifying unique elements based on some aspect of them not immediately known?
 These are in fact conceptually Hash Map problems (because you cannot use Sets to identify “uniqueness” when it is not defined by the value of the element itself). However, the crux of these questions often lies not in the choice of data structure but in the creation of a hash-like bucketing function that can efficiently capture or compute some invariant of the element by which it can be grouped. For example, anagrams can be grouped by the frequency of their characters, snowflakes by their lexicographically smallest rotation, and so on.
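A sketch of the anagram case; here the invariant is the word’s sorted characters (an equivalent stand-in for the character-frequency invariant mentioned above):

```python
from collections import defaultdict

def group_anagrams(words):
    """Bucket words by a hashable invariant: their sorted characters."""
    groups = defaultdict(list)
    for w in words:
        groups[''.join(sorted(w))].append(w)   # same letters -> same key
    return list(groups.values())
```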

Does the problem require you to perform efficient lookups of data?

Do we need to quickly store or lookup strings or provide autocomplete functionality?
 Use a Trie data structure (also known as a prefix tree). This is a tree data structure that stores strings in a way that allows for fast (e.g. $O(k)$) retrieval and updates of strings that share a common prefix.
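A minimal Trie sketch supporting autocomplete (class and method names are my own):

```python
class Trie:
    """Prefix tree: each node maps a character to a child node."""
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def starts_with(self, prefix):
        """All stored words beginning with prefix (autocomplete)."""
        node = self
        for ch in prefix:              # O(k) walk down the prefix
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def collect(n, path):          # gather all words below this node
            if n.is_word:
                out.append(prefix + path)
            for ch, child in n.children.items():
                collect(child, path + ch)
        collect(node, '')
        return out
```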

Are we being asked to query the smallest/largest/median values?

Use a Min-Heap or Max-Heap data structure to query for the smallest/largest values (in $O(1)$).

You can use a combination of both a Min-Heap and a Max-Heap if you want to compute the median.

There are time/space costs to maintaining heaps (creating a heap costs $O(n)$ but updates are $O(\log n)$ due to the cost to heapify), so this makes more sense when you expect to make multiple queries over the lifetime of your program. The concept of amortised cost is also important when comparing min-heaps to algorithms like quickselect. While min-heaps offer a consistent update time of $O(\log n)$, quickselect’s time complexity may be more efficient for one-off or occasional operations.
Heaps do not maintain sorted order of their elements, and therefore do not offer $O(1)$ access to the $k^{\text{\tiny th}}$ smallest/largest element. If you need this, while still allowing for efficient updates, you can use a Binary Search Tree (BST) data structure, possibly augmented with additional information to make it self-balancing (e.g. an AVL Tree or Red-Black Tree).
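A sketch of the two-heap running median (class name is my own; `heapq` only provides min-heaps, so the lower half stores negated values to simulate a max-heap):

```python
import heapq

class RunningMedian:
    """Max-heap of the lower half, min-heap of the upper half."""
    def __init__(self):
        self.lo = []  # max-heap via negated values
        self.hi = []  # min-heap

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # keep every element of lo <= every element of hi
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # rebalance: lo may hold at most one extra element
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]                     # odd count
        return (-self.lo[0] + self.hi[0]) / 2      # even count
```

Each `add` is $O(\log n)$ while `median` stays $O(1)$, which is the trade the heaps buy you.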



Does the problem require you to recover the order of some elements or substrings within a sequence?

If you are able to infer partial information about how particular elements are connected, for example, if they come before or after each other or if they overlap (or touch), you can use this information to help build a Directed Graph. Vertices can represent elements, while edges can represent the information inferred from the data. Once this is achieved, it’s often possible to recover the order by running something like a Topological Sort.

In general, graphs seem like a good fit for problems involving recovering the order of a sequence; for example, “de Bruijn” graphs are used for the reassembly of de novo DNA sequences from k-mers in computational biology.
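A sketch of recovering an order via Topological Sort, here using Kahn’s algorithm (one of several standard implementations):

```python
from collections import deque

def topo_sort(vertices, edges):
    """Kahn's algorithm: repeatedly remove vertices with no incoming edges."""
    graph = {v: [] for v in vertices}
    indegree = {v: 0 for v in vertices}
    for a, b in edges:              # edge a -> b means "a comes before b"
        graph[a].append(b)
        indegree[b] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:    # all of w's prerequisites are placed
                queue.append(w)
    return order if len(order) == len(vertices) else None  # None -> cycle
```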


Are you dealing with data points that are interconnected in some way, needing to generate and test paths or sequences, or wishing to traverse elements in a matrix?
Note
 Problems like this can often be solved using graph or tree data structures, but it’s important to note that these are not the only ways of representing data that is interconnected. For example, rotating a tree 45 degrees gives you a grid, and you can use an adjacency matrix to represent a graph, or an array to represent a tree (as is done when implementing heaps). The choice of data structure is not only a trade-off between memory usage and performance, but can also lead to different ways of thinking about a problem.
 There are also similarities between iteration over sequences and traversal of graphs, trees or state spaces. Traversal is essentially an extended form of iteration, designed to handle dynamic, non-linear, branching data structures. Different traversal strategies, such as BFS (level-order) and DFS (pre-order, in-order, post-order), offer specific advantages tailored to the nature of the problem at hand.
 An important insight is the fluidity and overlap between techniques like trees, dynamic programming and pathfinding. This allows for multiple perspectives on a problem and can occasionally offer more elegant or optimal solutions. For example, in Tristan Hume’s “Designing a Tree Diff Algorithm Using Dynamic Programming and A*”, the problem of diffing begins as a decision tree before being transformed into a 2D grid that is amenable to dynamic-programming-style techniques and eventually the pathfinding algorithm A*.

Have you been asked to find a combination or permutation of elements that match a constraint?

When you have a particular target or constraint that needs to be met, it generally suggests that you will need to go deep into the search space to find and test each path. This points to a Depth-First Search (DFS) approach; however, it will likely result in an explosion of paths to explore, with very slow performance and poor memory usage. To avoid this, you will need to prune paths early on that cannot possibly lead to a solution and, due to the potential depth of the search space, use a Backtracking approach to (a) invalidate any paths later shown to be impossible, and (b) roll back any modifications to shared state that are made during the search.

Depth-First Search (DFS) has a lot of overlap with Top-Down Memoisation. In fact, top-down memoisation is depth-first search with a caching optimisation to avoid recomputing subproblems (e.g. “overlapping subproblems”). Top-down memoisation is generally assumed to be implemented using recursion, but it can also be implemented iteratively using a Stack.

The choice of whether to implement Depth-First Search (DFS) using recursion or using iteration with a stack isn’t merely an aesthetic choice; it also has practical implications:

Firstly, recursion is limited by the maximum recursion depth of your chosen programming language.

But more importantly, recursion provides a natural mechanism for backtracking as you can simply return from a recursive call to ‘undo’ the last step. On the other hand, iteration with an explicit stack requires you to manually implement backtracking by either copying data structures into each stack item (so that dropping stack items is effectively backtracking) or by using shared state and carefully rolling back changes to this.

More subjectively, some people prefer recursion for its more intuitive mathematical representation of a problem, while others prefer iteration for its more explicit state, which is easier to debug and trace.
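A sketch of recursive backtracking with pruning, using the classic combination-sum framing (finding combinations that sum to a target, with element reuse allowed):

```python
def combination_sum(candidates, target):
    """All unique combinations (with reuse) that sum to target."""
    candidates = sorted(candidates)
    results, path = [], []

    def backtrack(start, remaining):
        if remaining == 0:
            results.append(path[:])   # record a copy of the valid path
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                break                 # prune: sorted, so no later value fits
            path.append(candidates[i])
            backtrack(i, remaining - candidates[i])
            path.pop()                # backtrack: undo the shared-state change
    backtrack(0, target)
    return results
```

Note how returning from the recursive call plus `path.pop()` is the entire backtracking mechanism: the shared `path` is rolled back on the way out.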



Do you need to fill or quantify contiguous regions in a matrix?

You can do so using a Flood-Fill algorithm, which can be either a Breadth-First Search (BFS) or Depth-First Search (DFS) traversal from a starting point, with the constraint that you can only move to adjacent cells that are the same ‘color’ as the starting point.

You can avoid accidentally revisiting cells by placing them into a Set or Hash Map as you visit them; however, to save memory you can also mutate the matrix in-place by marking visited cells with a different ‘color’ (and then, if you wish, reverting these changes once you have finished).
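A sketch of the DFS variant using the in-place marking trick (function name is my own):

```python
def flood_fill(grid, r, c, new_color):
    """Iterative DFS flood fill from (r, c), recoloring the connected region."""
    old = grid[r][c]
    if old == new_color:
        return grid
    stack = [(r, c)]
    while stack:
        i, j = stack.pop()
        if 0 <= i < len(grid) and 0 <= j < len(grid[0]) and grid[i][j] == old:
            grid[i][j] = new_color        # marking doubles as the visited set
            stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return grid
```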


Is searching for the shortest path between some nodes or doing a ‘level-order’ traversal your main concern?

Use Breadth-First Search (BFS).

Note that BFS uses more memory than DFS because it needs to keep all of the nodes at the current level in memory, while DFS only needs to store the nodes along a single path as it traverses the tree/graph. Therefore, if memory usage is a concern, DFS could be the better choice.

BFS is also not suitable for weighted graphs, which need an algorithm like Dijkstra’s or A*.
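A sketch of BFS shortest path on an unweighted graph (adjacency-list representation; the graph itself is an assumed example):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS explores level by level, so the first time we reach the goal
    we have found a shortest (fewest-edges) path."""
    queue = deque([start])
    dist = {start: 0}
    while queue:
        node = queue.popleft()
        if node == goal:
            return dist[node]
        for nxt in graph.get(node, []):
            if nxt not in dist:           # visited check avoids re-expansion
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return -1                             # goal unreachable
```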


Are you being asked to determine whether nodes are connected, to count the number of components in a graph, or to detect cycles?

In an undirected graph? Use Union-Find. Note that Union-Find is not suitable for directed graphs because it inherently deals with undirected equivalence relations; it merges sets and loses the sense of direction between nodes.

In a directed or undirected graph? Use Depth-First Search (DFS). Unlike Union-Find, DFS is suitable for directed graphs and can also provide information about the shape or structure of components.
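A sketch of Union-Find counting the components of an undirected graph (function name is my own; for brevity this uses path compression without union by rank):

```python
def count_components(n, edges):
    """Union-Find over vertices 0..n-1, counting connected components."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    components = n
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                # union two distinct sets
            components -= 1
    return components
```

An edge between already-connected vertices (where `ra == rb`) would indicate a cycle, which is the same machinery used for cycle detection.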


Does a solution to the problem result in a very expensive branching program, suggesting that we might have an optimisation problem?

If locally optimal decisions appear to lead to a global optimum:

Apply a Greedy Algorithm. Note that there’s no guaranteed way of knowing upfront whether locally optimal decisions will lead to a global optimum; determining this requires testing and analysis.

If the solution to the problem seems to require future information to decide the current step, a greedy algorithm will not be appropriate and you will need to either use Depth-First Search (DFS) or switch to a Dynamic Programming approach involving Top-Down Memoisation.

A greedy algorithm makes decisions at each step based solely on the current state and local considerations; it cannot involve backtracking, reconsideration of previous decisions, or deeper exploration of decision paths (as depth-first search does).


If it seems possible to compute a solution from a combination of previously computed solutions (e.g. “optimal substructure”), then we can solve it using Dynamic Programming. Dynamic programming is particularly beneficial when the same subproblem reappears multiple times in a computation (e.g. “overlapping subproblems”).

Dynamic programming allows you to save time at the expense of space.

There are two highlevel ways of implementing dynamic programming and the choice depends on the problem:

We can always apply Top-Down Memoisation. This is effectively a DFS of the state space, generally implemented using recursive memoised function calls. It’s not always the most efficient way to solve the problem due to the overhead of recursion, but it avoids the need to compute all possible subproblems first.

If the transitions between subproblems in the state space are computed in a fixed manner, we can apply Bottom-Up Tabulation. This computes all possible subproblems iteratively first and then uses these to compute the final solution; it avoids the overhead of recursion by computing the subproblems one by one, and can store them in an array for lower memory usage.
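A sketch contrasting the two styles on the classic climbing-stairs problem (an assumed example; function names are my own). The recurrence `ways(k) = ways(k-1) + ways(k-2)` has both optimal substructure and overlapping subproblems:

```python
from functools import lru_cache

def climb_memo(n):
    """Top-down: DFS over the state space with caching of subproblems."""
    @lru_cache(maxsize=None)
    def ways(k):
        if k <= 1:
            return 1
        return ways(k - 1) + ways(k - 2)   # overlapping subproblems cached
    return ways(n)

def climb_tab(n):
    """Bottom-up: compute subproblems in a fixed order, smallest first."""
    a, b = 1, 1                # ways(0), ways(1)
    for _ in range(n - 1):     # fixed transition order lets us tabulate
        a, b = b, a + b        # keeping only two entries: O(1) space
    return b
```

The bottom-up version illustrates the memory advantage mentioned above: because the transition order is fixed, the full table can be collapsed to two variables.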



🧪🤔📝
The following questions present scenarios that test your ability to select the most appropriate technique.

Greedy Algorithm or Dynamic Programming?

Two Pointers or Sliding Window?

Stack or Queue?

Backtracking or DFS?

Union-Find or DFS?

DFS or Dynamic Programming?