
Recursion Pattern

What is Recursion?

Recursion is when a function calls itself to solve smaller instances of the same problem. Every recursive solution has two essential parts: a base case (the simplest version of the problem that you can answer directly) and a recursive case (breaking the problem into a smaller version of itself).

Think of recursion like Russian nesting dolls (matryoshkas). To open the outermost doll, you open it and find a smaller doll inside. You keep opening smaller dolls until you reach the tiniest one that does not open (the base case). Then you work your way back out, reassembling as you go.

The power of recursion is that it lets you write solutions that mirror the structure of the problem. A tree has branches that are themselves trees, so a tree algorithm naturally calls itself on subtrees. A string's palindrome property depends on whether the inner substring is also a palindrome. When the problem has self-similar substructure, recursion gives you clean, elegant code.
Quick Recognition: Problems that can be broken into smaller self-similar subproblems. Look for: tree/graph traversal, nested structures, divide and conquer, “generate all combinations/permutations,” and any problem where the solution for input N depends on solutions for smaller inputs.
LeetCode Pattern Recognition Cheatsheet — if you see these phrases, recursion is almost certainly the right tool:
  • “tree problem” — anything mentioning TreeNode, root, subtree, BST, n-ary tree. Tree structure IS recursion.
  • “divide-and-recurse” — problems where you split the input and process each half independently (merge sort, max subarray crossing midpoint).
  • “natural recurrence” — math problems like Fibonacci, factorial, sum of digits, Pascal’s triangle, Tower of Hanoi. The formula itself is recursive.
  • “subtree” or “subarray” that splits — any problem where the answer for input N depends on the answer for a strictly smaller portion (LC 124 Max Path Sum, LC 543 Diameter, LC 105 Build Tree from Preorder + Inorder).
  • Decode/parse nested structures — expressions with parentheses, JSON-like strings, nested encodings (LC 394 Decode String, LC 726 Number of Atoms).
If the problem says “find the kth thing,” “shortest path,” or “maximum sum with constraint,” think DP or BFS first — recursion alone (without memoization) is usually too slow.
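The palindrome example above can be made concrete. A minimal sketch (the function name is illustrative, not from a specific LeetCode problem):

```python
def is_palindrome(s):
    """Recursively check whether s reads the same forwards and backwards."""
    if len(s) <= 1:          # Base case: empty or single-char strings are palindromes
        return True
    if s[0] != s[-1]:        # Mismatched outer characters: not a palindrome
        return False
    return is_palindrome(s[1:-1])  # Recurse on the inner substring
```

Note that slicing copies the string, so this sketch is O(n^2); passing left/right indices instead would keep it O(n).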

Pattern Recognition Checklist

Use Recursion When

  • Tree/graph traversal (DFS)
  • Nested or hierarchical structures
  • Divide and conquer algorithms
  • Backtracking problems
  • Mathematical sequences (factorial, fibonacci)
  • Subproblems have same structure as original

Don't Use When

  • Simple iteration suffices
  • Large input (stack overflow risk)
  • No natural subproblem structure
  • Tail recursion can be converted to loop

Recursion Types

Type     | Description                      | Example
Linear   | One recursive call               | Factorial, sum of array
Binary   | Two recursive calls              | Fibonacci, binary tree
Multiple | Many recursive calls             | N-ary tree traversal
Mutual   | Functions call each other        | Even/odd check
Nested   | Recursive call in argument       | Ackermann function
Tail     | Recursive call is last operation | Optimizable to iteration
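Mutual recursion from the table can be sketched with the even/odd check (purely illustrative; real code would use n % 2):

```python
def is_even(n):
    """n is even if n == 0, or if n - 1 is odd (mutual recursion)."""
    if n == 0:
        return True
    return is_odd(n - 1)

def is_odd(n):
    """n is odd if n - 1 is even."""
    if n == 0:
        return False
    return is_even(n - 1)
```

Each function's "recursive case" calls the other, and both share the same base case (n == 0), so progress toward termination still holds.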

The Three Laws of Recursion

  1. A recursive function must have a base case — Without it, you get infinite recursion and a stack overflow. The base case is your exit door.
  2. A recursive function must change state toward the base case — Each call must make the problem strictly smaller. If you call f(n) and it eventually calls f(n) again without reducing n, you have an infinite loop.
  3. A recursive function must call itself — This is what makes it recursive rather than just a function with an if-statement.
Mental model for designing recursion: Ask yourself three questions in order: (1) What is the simplest input I can solve without recursion? That is my base case. (2) How can I break a larger input into smaller pieces of the same shape? That is my recursive decomposition. (3) How do I combine the results from the smaller pieces? That is my combination step.
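Applying the three questions to a tiny example, summing an array: the base case is the empty list, the decomposition is first element plus the rest, and the combination is addition. A minimal sketch:

```python
def array_sum(nums):
    """Sum a list recursively.

    (1) Base case: the empty list sums to 0.
    (2) Decomposition: first element + the rest of the list.
    (3) Combination: addition.
    """
    if not nums:                          # (1) simplest input, solved directly
        return 0
    return nums[0] + array_sum(nums[1:])  # (2) decompose, (3) combine
```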

When to Use

Tree/Graph Traversal

DFS, preorder, inorder, postorder

Divide and Conquer

Merge sort, quick sort
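Merge sort is the canonical divide-and-conquer recursion: split the input, sort each half recursively, then combine. A minimal sketch:

```python
def merge_sort(nums):
    """Divide and conquer: split, sort each half recursively, merge."""
    if len(nums) <= 1:                # Base case: 0 or 1 elements are already sorted
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])     # Conquer each half independently
    right = merge_sort(nums[mid:])
    # Combine: merge two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # Append whichever half has leftovers
```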

Backtracking

Permutations, combinations

Dynamic Programming

Top-down memoization

The Recursive Template

def recursive_function(problem):
    # Base case(s) - when to stop
    if is_base_case(problem):
        return base_solution
    
    # Recursive case - break down and solve
    smaller_problem = reduce(problem)
    sub_result = recursive_function(smaller_problem)
    
    # Combine results
    return combine(sub_result, current_work)

Pattern Variations

1. Simple Recursion (Factorial)

The simplest recursion pattern: one base case, one recursive call, linear call depth. Factorial of n is n * (n-1) * ... * 1, which naturally decomposes as n * factorial(n-1). Complexity: O(n) time, O(n) space (due to n stack frames). Interview note: Interviewers sometimes ask you to convert this to iteration or tail recursion to demonstrate that you understand the call stack cost.
def factorial(n):
    """Compute n! using linear recursion.
    
    Trace for n=4:
      factorial(4) = 4 * factorial(3)
                   = 4 * 3 * factorial(2)
                   = 4 * 3 * 2 * factorial(1)
                   = 4 * 3 * 2 * 1 = 24
    """
    if n <= 1:       # Base case: 0! = 1! = 1
        return 1
    return n * factorial(n - 1)  # Recursive case: n! = n * (n-1)!

2. Tree Recursion (Fibonacci)

Without memoization, Fibonacci is the classic example of exponential blowup: fibonacci(n) calls itself twice, creating a binary tree of calls with O(2^n) total work. Adding a memo dictionary collapses this to O(n) by ensuring each subproblem is solved only once. Without memo: O(2^n) time, O(n) space (call stack depth). With memo: O(n) time, O(n) space (memo + call stack). Interview tip: This is the gateway to Dynamic Programming. If an interviewer asks for Fibonacci, follow up by mentioning that the bottom-up DP version (tabulation) avoids recursion entirely and uses O(1) space with two variables.
def fibonacci(n, memo=None):
    """Compute the nth Fibonacci number with memoization.
    
    Without memo, fib(5) would compute fib(3) twice, fib(2) three times, etc.
    The memo ensures each value is computed exactly once.
    
    fib(0)=0, fib(1)=1, fib(2)=1, fib(3)=2, fib(4)=3, fib(5)=5
    """
    if memo is None:
        memo = {}              # Fresh dict per top-level call (avoids the mutable-default trap)
    if n in memo:
        return memo[n]         # Return cached result -- O(1) lookup
    if n <= 1:
        return n               # Base cases: fib(0)=0, fib(1)=1
    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)
    return memo[n]
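The bottom-up version mentioned above (tabulation with two variables, no recursion stack) might look like this sketch:

```python
def fib_iterative(n):
    """Bottom-up Fibonacci: O(n) time, O(1) space, no recursion."""
    prev, curr = 0, 1                 # fib(0) and fib(1)
    for _ in range(n):
        prev, curr = curr, prev + curr  # Slide the window forward one step
    return prev
```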

3. Tail Recursion

In tail recursion, the recursive call is the very last thing the function does — there is no work left after the call returns. This means the current stack frame can be reused (tail-call optimization), converting recursion into a loop behind the scenes. Important caveat: Python does NOT perform tail-call optimization. Neither does Java. Some functional languages (Scheme, Haskell) and C/C++ with compiler flags do. In interviews, mention this awareness — it signals systems-level understanding. Complexity: O(n) time. O(n) space in Python (no TCO), O(1) space in languages with TCO.
def factorial_tail(n, acc=1):
    """Tail-recursive factorial: the recursive call is the last operation.
    
    The accumulator 'acc' carries the running result forward, so there is
    no work to do after the recursive call returns.
    
    Trace for n=4:
      factorial_tail(4, 1)  -> factorial_tail(3, 4)
                             -> factorial_tail(2, 12)
                             -> factorial_tail(1, 24) -> returns 24
    """
    if n <= 1:
        return acc                        # Base case: return the accumulated result
    return factorial_tail(n - 1, n * acc) # Tail position: nothing happens after this
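Since Python performs no tail-call optimization, the tail-recursive version above is typically converted to a loop by hand: parameters become variables and the tail call becomes a variable update. A sketch:

```python
def factorial_loop(n):
    """factorial_tail rewritten as a loop: same logic, O(1) stack space."""
    acc = 1
    while n > 1:
        n, acc = n - 1, n * acc  # Mirrors the tail call factorial_tail(n-1, n*acc)
    return acc
```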

4. Helper Function Pattern

def solve_problem(input_data):
    """Public API, calls recursive helper"""
    result = []
    
    def helper(state, index):
        if is_complete(state):
            result.append(state[:])
            return
        
        for choice in get_choices(index):
            state.append(choice)
            helper(state, index + 1)
            state.pop()
    
    helper([], 0)
    return result
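A concrete instance of this template is generating permutations ("choose, explore, unchoose"); here the helper tracks the current path and the remaining choices (names are illustrative):

```python
def permutations(nums):
    """Generate all permutations of nums using the helper pattern."""
    result = []

    def helper(path, remaining):
        if not remaining:                 # Complete: every element has been placed
            result.append(path[:])        # Copy, since path keeps mutating
            return
        for i in range(len(remaining)):
            path.append(remaining[i])                          # Choose
            helper(path, remaining[:i] + remaining[i + 1:])    # Explore
            path.pop()                                         # Unchoose

    helper([], nums)
    return result
```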

5. Binary Tree Traversal

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder_traversal(root):
    """Left -> Root -> Right"""
    if not root:
        return []
    return inorder_traversal(root.left) + [root.val] + inorder_traversal(root.right)

def max_depth(root):
    """Height of binary tree"""
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

Worked LeetCode Problems

These are the recursion problems you will see most often in interviews. Each walkthrough shows the recurrence, a clean implementation, and the complexity analysis you should be ready to defend.

LC 509: Fibonacci Number (Why Naive Recursion is Exponential)

Problem: Given n, return the nth Fibonacci number where F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2). Why it matters: This is the canonical example of overlapping subproblems. The naive solution is O(2^n) and will time out for n above roughly 35. Memoization collapses it to O(n).
# Naive: O(2^n) -- DO NOT submit this without memoization
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized: O(n) time, O(n) space
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
Why naive is O(2^n): The recursion tree branches twice at every node, depth n. Total nodes is roughly 2^n. With memo, each subproblem is solved once — n unique subproblems, O(1) work each.

LC 104: Maximum Depth of Binary Tree

Problem: Return the maximum depth (number of nodes from root to deepest leaf) of a binary tree. Why it matters: The simplest “post-order recursion” template. The answer for the current node depends on answers for the subtrees — exactly the recursive shape.
def maxDepth(root):
    if not root:                     # Base case: empty subtree has depth 0
        return 0
    left = maxDepth(root.left)       # Solve left subtree
    right = maxDepth(root.right)     # Solve right subtree
    return 1 + max(left, right)      # Combine: 1 for current node + deeper child
# Time: O(n) -- visit each node once. Space: O(h) recursion stack, where h is tree height.

LC 226: Invert Binary Tree

Problem: Mirror a binary tree — swap every node’s left and right children. Why it matters: The “do something at every node” template. Recursion handles the structure for you; the only logic is the swap at each node. Famous as the question Max Howell (Homebrew creator) failed at Google.
def invertTree(root):
    if not root:
        return None
    # Swap children, then recurse (or recurse first, then swap -- both work)
    root.left, root.right = invertTree(root.right), invertTree(root.left)
    return root
# Time: O(n). Space: O(h) recursion stack.

LC 110: Balanced Binary Tree

Problem: Determine if a binary tree is height-balanced (depths of two subtrees of every node differ by at most 1). Why it matters: Demonstrates returning compound information from recursion. Naive solution computes height O(n) at every node giving O(n^2). The trick is to return both height AND balanced-ness in one pass for O(n).
def isBalanced(root):
    def height(node):
        # Returns -1 if unbalanced anywhere, else returns the height
        if not node:
            return 0
        left = height(node.left)
        if left == -1:
            return -1                # Propagate the unbalanced signal upward
        right = height(node.right)
        if right == -1:
            return -1
        if abs(left - right) > 1:
            return -1                # Detect imbalance at this node
        return 1 + max(left, right)
    return height(root) != -1
# Time: O(n). Space: O(h).
The trick: Return a sentinel value (-1 here) to signal failure. This avoids checking balance separately at each node. Always look for “can I return more than one piece of information per call?” — it is the difference between O(n^2) and O(n) for many tree problems.

LC 124: Binary Tree Maximum Path Sum (Hard)

Problem: Find the maximum sum path between any two nodes (path can start and end anywhere, but each node visited at most once). Why it matters: Classic “global answer + local return” pattern. The function returns the best single-branch path ending at the current node, but updates a global maximum considering paths that go through the node (left + node + right).
def maxPathSum(root):
    best = [float('-inf')]                   # Global answer (mutable container)
    
    def gain(node):
        if not node:
            return 0
        # Only count a subtree if it contributes positively; otherwise drop it
        left = max(gain(node.left), 0)
        right = max(gain(node.right), 0)
        # A path "through" this node uses both children plus the node itself
        best[0] = max(best[0], node.val + left + right)
        # But returning to caller, we can only keep ONE side (a path cannot fork)
        return node.val + max(left, right)
    
    gain(root)
    return best[0]
# Time: O(n). Space: O(h).
The crucial insight: What you return to the parent (single-branch sum) is different from what you record as the global answer (both-branch sum). Conflating these two is the #1 mistake. Many tree DP problems share this structure.
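LC 543 (Diameter of Binary Tree) shares this exact structure: record left + right globally, but return only one side to the parent. A sketch (with a minimal TreeNode included for self-containment, matching the class defined earlier on this page):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def diameterOfBinaryTree(root):
    """LC 543: longest path (in edges) between any two nodes."""
    best = [0]                                 # Global answer

    def depth(node):
        if not node:
            return 0
        left = depth(node.left)
        right = depth(node.right)
        best[0] = max(best[0], left + right)   # Path through node uses both sides
        return 1 + max(left, right)            # Parent may extend only one side

    depth(root)
    return best[0]
```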

Classic Problems

Problem        | Pattern     | Key Insight
Factorial      | Simple      | One recursive call
Fibonacci      | Tree + Memo | Multiple calls, cache results
Tower of Hanoi | Binary      | Move n-1, move 1, move n-1
Tree Traversal | Binary      | Left subtree, root, right
Permutations   | Backtrack   | Choose, explore, unchoose

Common Mistakes

Avoid These Pitfalls:
  1. Missing base case: Leads to infinite recursion and a RecursionError (Python) or StackOverflowError (Java). Always write the base case first.
  2. Base case never reached: The state does not progress toward termination. For example, calling f(n) instead of f(n-1) — the problem size never shrinks.
  3. Wrong return value: Not combining subproblem results correctly. For tree problems, a common mistake is returning the value from only one subtree and ignoring the other.
  4. Modifying shared state without backtracking: In backtracking problems, if you append to a list before recursing, you must pop after recursing. Forgetting this “undo” step produces incorrect results.
  5. Stack overflow on large inputs: Python’s default recursion limit is 1000. For problems with n up to 10^5, either increase the limit with sys.setrecursionlimit(), convert to iteration with an explicit stack, or use bottom-up DP.

Caveats and Traps

LeetCode Recursion Traps That Cost Real Submissions:
  1. Stack overflow for deep recursion (Python default near 1000 frames). A linked list of 10,000 nodes will blow Python’s stack on a “reverse linked list recursive” submission. Convert to iteration with an explicit stack, or use sys.setrecursionlimit(10**6) as a quick fix. Java defaults are larger but you still hit limits around 10k-50k frames depending on JVM settings.
  2. Missing base case equals infinite recursion. Always write the base case BEFORE the recursive case. If the problem asks about a tree, the base case is if not root. If it is an array, the base case is if start >= end (or if start == end, depending on your convention). Skipping this step is the single most common cause of RecursionError on LeetCode.
  3. Exponential time without memoization for overlapping subproblems. If your recurrence is f(n) = f(n-1) + f(n-2) or similar, naive recursion is O(2^n). Always ask: “am I computing the same subproblem twice?” If yes, add @functools.lru_cache(None) in Python or a HashMap in Java — this is the difference between TLE and AC.
  4. Mutable default arguments in Python (def f(n, memo={})). The {} is created ONCE at function definition time and persists across calls. Submission 1 fills the memo; submission 2 sees stale data; you fail tests that should pass. Always use memo=None and initialize inside.
  5. Tree problems where you return one subtree’s result and forget the other. return maxDepth(root.left) — congratulations, you are computing the depth of just the left spine. Recurse on BOTH children and combine.
Solutions and Patterns That Avoid These Traps:
  1. Iterative-with-stack template (for stack-overflow-prone problems):
    def traverse(root):
        if not root: return []
        result, stack = [], [root]
        while stack:
            node = stack.pop()
            result.append(node.val)
            if node.right: stack.append(node.right)
            if node.left: stack.append(node.left)
        return result
    
  2. Memoization decorator (Python’s one-liner for overlapping subproblems):
    from functools import lru_cache
    @lru_cache(maxsize=None)
    def fib(n):
        return n if n <= 1 else fib(n-1) + fib(n-2)
    
  3. The “decide what to return vs what to record” framework for tree DP: When the recursive function’s return value is different from the “answer” you want, use a closure variable (Python) or class field (Java) to track the global answer, and have the function return only what is needed for the parent’s computation. See LC 124 above.
  4. Always ask three questions before coding: (a) What is the smallest input I can solve directly? (b) If a magic oracle gave me the answer for a smaller input, how would I use it? (c) Are subproblems shared (need memo) or disjoint (no memo)?

Curated LeetCode Practice List

Grouped by difficulty, with annotations for what each problem teaches.

Easy (Build the muscle memory)

LC# | Problem                      | What It Teaches
509 | Fibonacci Number             | Naive recursion vs memoization: the canonical first example.
104 | Maximum Depth of Binary Tree | Cleanest tree recursion template.
226 | Invert Binary Tree           | "Do something at every node" pattern.
100 | Same Tree                    | Recursion on two trees in lockstep.
101 | Symmetric Tree               | Mirrored recursion: pass two pointers, advance them differently.
21  | Merge Two Sorted Lists       | Linked-list recursion with two heads.
206 | Reverse Linked List          | Recursion vs iteration trade-off.
344 | Reverse String               | Two-pointer recursion in O(1) extra space (excluding the stack).
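For LC 100 from the table above, "lockstep" means the function advances through both trees together and fails fast on any mismatch. A sketch (with a minimal TreeNode for self-containment):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def isSameTree(p, q):
    """LC 100: recurse on two trees in lockstep."""
    if not p and not q:                   # Both empty: identical here
        return True
    if not p or not q or p.val != q.val:  # Shape or value mismatch
        return False
    return isSameTree(p.left, q.left) and isSameTree(p.right, q.right)
```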

Medium (LeetCode bread-and-butter)

LC# | Problem                                          | What It Teaches
50  | Pow(x, n)                                        | Fast exponentiation: O(log n) via x^n = (x^(n/2))^2.
110 | Balanced Binary Tree                             | Returning compound info to avoid O(n^2).
543 | Diameter of Binary Tree                          | Global answer + local return: same shape as LC 124.
105 | Construct Binary Tree from Preorder and Inorder  | Recursion with index slicing.
394 | Decode String                                    | Nested recursion driven by parsing.
779 | K-th Symbol in Grammar                           | Mathematical recursion: no tree involved.
698 | Partition to K Equal Sum Subsets                 | Recursion with bitmask state.
341 | Flatten Nested List Iterator                     | Recursive structure flattening.

Hard (Stretch problems for senior interviews)

LC#  | Problem                               | What It Teaches
124  | Binary Tree Maximum Path Sum          | The "return one, record both" pattern.
297  | Serialize and Deserialize Binary Tree | Recursion in both directions (encode and decode).
10   | Regular Expression Matching           | Two-branch recursion (handle * as zero matches or one-plus).
44   | Wildcard Matching                     | Variant of LC 10: recursion + memo.
1106 | Parsing A Boolean Expression          | Nested expression recursion.

Debugging Recursion

  1. Verify the base case: Is there at least one base case? Does it return the correct value?
  2. Check progress: Does each recursive call move toward the base case?
  3. Trace a small input: Manually trace with n=0, n=1, n=2 to verify the logic.
  4. Add print statements: Print entry/exit to visualize the call stack.
  5. Consider memoization: If the same subproblems repeat, add caching.
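The trace-and-print steps can be automated with a small decorator that prints indented entry/exit lines per call depth (the decorator name `trace` is illustrative):

```python
import functools

def trace(func):
    """Print entry/exit lines, indented by recursion depth."""
    depth = [0]

    @functools.wraps(func)
    def wrapper(*args):
        print("  " * depth[0] + f"-> {func.__name__}{args}")
        depth[0] += 1
        result = func(*args)          # The recursive calls re-enter this wrapper
        depth[0] -= 1
        print("  " * depth[0] + f"<- {func.__name__}{args} = {result}")
        return result
    return wrapper

@trace
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
```

Running factorial(3) prints one indented line per call, making the push/pop of stack frames visible.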

Interview Problems by Company

Problem               | Company   | Key Concept
Fibonacci             | All       | Binary recursion + memo
Reverse String        | Amazon    | Two-pointer recursion
Merge Two Lists       | All FAANG | Linked list recursion
Maximum Depth of Tree | All       | Binary tree recursion

Interview Tips

Script for interviews:
  1. “I’ll solve this recursively because it breaks into smaller self-similar subproblems.”
  2. “My base case is [describe when to stop].”
  3. “My recursive case [describe how to break down].”
  4. “I combine results by [describe combination].”
  5. “Time complexity is O(X) because [reasoning].”
Aspect         | Recursion               | Iteration
Readability    | Often cleaner for trees | Better for simple loops
Memory         | Stack frames            | Usually O(1)
Performance    | Function call overhead  | Generally faster
When to choose | Complex branching       | Simple linear process
Many recursive solutions can use explicit stack:
# Recursive
def preorder(root):
    if not root:
        return []
    return [root.val] + preorder(root.left) + preorder(root.right)

# Iterative with stack
def preorder_iterative(root):
    if not root:
        return []
    result, stack = [], [root]
    while stack:
        node = stack.pop()
        result.append(node.val)
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
    return result

Practice Problems

Pow(x, n)

Fast exponentiation recursion
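A sketch of the fast-exponentiation recursion (LC 50): halve the exponent, square the half result, and multiply in one extra x for odd exponents:

```python
def my_pow(x, n):
    """Compute x^n in O(log n) recursive steps."""
    if n < 0:
        return 1 / my_pow(x, -n)     # Negative exponent: invert the positive power
    if n == 0:
        return 1.0                   # Base case: x^0 = 1
    half = my_pow(x, n // 2)         # Solve the half-sized subproblem once
    return half * half if n % 2 == 0 else half * half * x
```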

K-th Symbol

Pattern recognition recursion

Flatten Nested List

Recursive flattening

Decode String

Nested structure recursion
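One possible recursive-descent sketch for LC 394 (Decode String): a helper parses one bracket level and returns both the decoded segment and the index where it stopped (the helper name `parse` is illustrative):

```python
def decodeString(s):
    """LC 394: expand k[encoded] patterns, which may nest."""
    def parse(i):
        # Returns (decoded_segment, index of the unmatched ']' or end of string)
        result = []
        while i < len(s) and s[i] != ']':
            if s[i].isdigit():
                j = i
                while s[j].isdigit():          # Read the full multi-digit count
                    j += 1
                count = int(s[i:j])
                inner, i = parse(j + 1)        # Skip '[', decode the nested part
                result.append(inner * count)
                i += 1                         # Skip the matching ']'
            else:
                result.append(s[i])            # Plain character
                i += 1
        return ''.join(result), i

    return parse(0)[0]
```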

Practice Roadmap

1

Day 1: Basic Recursion

  • Solve: Fibonacci, Factorial, Sum of Array
  • Focus: Base case + recursive case pattern
2

Day 2: Tree Recursion

  • Solve: Max Depth, Same Tree, Invert Tree
  • Focus: Binary tree traversal patterns
3

Day 3: Divide and Conquer

  • Solve: Pow(x, n), Merge Sort
  • Focus: Splitting and combining
4

Day 4: Advanced Recursion

  • Solve: Decode String, Regular Expression
  • Focus: Complex nested structures
Interview Tip: If you are stuck on recursion, ask yourself three questions in order: (1) “What is the smallest input I can solve without recursion?” — that is your base case. (2) “If a magic oracle gave me the answer for a smaller input, how would I use it to solve the current input?” — that is your recursive case. (3) “Am I solving overlapping subproblems?” — if yes, add memoization. This three-step framework works for virtually every recursive problem you will encounter.

Interview Questions

What interviewers are really testing: Whether you understand recursion structurally, not just mechanically, and whether you can diagnose recursive bugs by mapping them to specific violations.
Strong Answer:
  • Law 1: Must have a base case. Without it, the function calls itself indefinitely until the call stack overflows. In Python, you get RecursionError: maximum recursion depth exceeded. In Java/C++, you get a StackOverflowError. Every recursive function needs at least one condition where it returns without making another recursive call.
  • Law 2: Must make progress toward the base case. Even with a base case, if the recursive call does not reduce the problem size, you still get infinite recursion. Example: factorial(n) calling factorial(n) instead of factorial(n-1). The base case exists but is never reached. A subtler version: a tree DFS that passes the same node due to a missing visited check — technically the base case exists (null node) but the traversal cycles and never hits it.
  • Law 3: Must call itself. This sounds trivial, but the deeper point is that the recursive call must solve a self-similar subproblem. If the subproblem has a different structure than the original, recursion is not the right tool. For example, if you try to recursively solve something that does not decompose into smaller copies of itself, you end up with awkward, non-recursive logic crammed into a recursive wrapper.
  • Practical debugging: when a recursive solution fails, check these in order. First verify the base case returns the correct value (trace with the smallest input). Then verify the recursive call reduces the problem (print the parameters at each call). Then verify the combination step is correct (does the function correctly merge sub-results?).
Red flag answer: Reciting the three laws without being able to explain what breaks when each is violated, or not being able to give concrete examples of each failure mode.
Follow-ups:
  1. Can you have too many base cases? What problems can arise from over-specifying base cases?
  2. What is the difference between “base case not reached” and “base case returns the wrong value”? How would you distinguish these during debugging?
What interviewers are really testing: Whether you have a concrete mental model of how recursion works at the system level: stack frames, return addresses, and why deep recursion causes stack overflow.
Strong Answer:
  • When factorial(5) is called, the runtime allocates a stack frame containing the function’s local variables (here, n=5), the return address (where to resume after this call completes), and space for the return value.
  • factorial(5) calls factorial(4), which pushes a new frame on top. This continues: factorial(3), factorial(2), factorial(1). At this point, there are 5 frames stacked on the call stack.
  • factorial(1) hits the base case and returns 1. Its frame is popped. Control returns to factorial(2), which computes 2 * 1 = 2 and returns. Its frame is popped. This “unwinding” continues: factorial(3) returns 6, factorial(4) returns 24, factorial(5) returns 120.
  • Each frame typically consumes 50-200 bytes (varies by language and number of local variables). Python's default recursion limit of ~1000 frames means roughly 100-200 KB of stack. Java defaults to ~512 KB to 1 MB. C++ depends on the OS and compiler settings.
  • Stack overflow occurs when the number of frames exceeds the stack size. factorial(10000) in Python will crash unless you raise the limit with sys.setrecursionlimit(). In Java, you can increase the stack with the -Xss flag, but this is a band-aid.
  • This is why tail recursion optimization (TCO) matters: in languages that support it (Scheme, some C++ compilers with -O2), a tail-recursive call reuses the current frame instead of pushing a new one, turning O(n) space into O(1). Python and Java do NOT support TCO.
Red flag answer: Not knowing what a stack frame is, or thinking recursion uses heap memory instead of stack memory. Also a red flag: claiming Python supports tail call optimization.
Follow-ups:
  1. Why does Python not implement tail call optimization? (Hint: Guido’s stance on stack traces.)
  2. If you need to compute factorial(100000), how would you do it without hitting stack limits?
What interviewers are really testing: Whether you understand overlapping subproblems and can explain the mechanical transformation from brute-force recursion to top-down dynamic programming.
Strong Answer:
  • Naive recursive Fibonacci has time complexity O(2^n) because it recomputes the same subproblems exponentially many times. fib(5) calls fib(4) and fib(3). fib(4) calls fib(3) and fib(2). fib(3) is computed twice, fib(2) is computed three times, and so on. The recursion tree has ~2^n nodes.
  • Memoization caches the result of each unique subproblem the first time it is computed. Before making a recursive call, check the cache. If the result exists, return it immediately (O(1)). If not, compute it, store it in the cache, and then return it.
  • With memoization, each value fib(k) is computed exactly once. There are n unique subproblems, and each takes O(1) work (one addition plus a cache lookup). Total time: O(n). Space: O(n) for the cache plus O(n) for the recursion stack.
  • Implementation: in Python, use a dictionary or @functools.lru_cache. In Java, use a HashMap<Integer, Long> or an array. The dictionary approach is cleaner; the array approach is faster (no hashing overhead).
  • Memoization is top-down DP: you start from the original problem and recurse down, caching as you go. The equivalent bottom-up approach builds a table from fib(0) to fib(n) iteratively, avoiding recursion entirely. Both achieve O(n) time, but bottom-up avoids the recursion stack overhead and can be further optimized to O(1) space by keeping only the last two values.
Red flag answer: Not knowing the naive complexity is O(2^n), or confusing memoization with tabulation (they are related but distinct), or using a mutable default argument in Python (def fib(n, memo={})) without understanding the gotcha that the dictionary persists between calls.
Follow-ups:
  1. What is the difference between memoization (top-down) and tabulation (bottom-up)? When would you prefer one over the other?
  2. Can you optimize the space complexity of Fibonacci to O(1)? How?
What interviewers are really testing: Depth of understanding about language runtime behavior and the relationship between recursion and iteration at the machine level.
Strong Answer:
  • A function is tail recursive if the recursive call is the very last operation — no computation happens after the call returns. For example, factorial_tail(n, acc) returning factorial_tail(n-1, n*acc) is tail recursive because there is nothing to do after the recursive call. Standard factorial(n) returning n * factorial(n-1) is NOT tail recursive because the multiplication happens after the recursive call returns.
  • Tail Call Optimization (TCO): when a compiler detects tail recursion, it can reuse the current stack frame for the next call instead of allocating a new one. The recursive call is effectively compiled into a goto that jumps back to the top of the function with updated parameters. This turns O(n) stack space into O(1).
  • Languages with TCO: Scheme (required by specification), Haskell, Erlang, Scala (with @tailrec annotation), and C/C++ compilers with optimization flags (-O2). These languages treat recursion as a first-class control flow mechanism.
  • Languages without TCO: Python (Guido van Rossum explicitly rejected it because it destroys meaningful stack traces for debugging), Java (JVM does not support it, though Project Loom is changing the landscape), JavaScript (ES6 specifies it but only Safari implements it).
  • Practical implication: in Python or Java, if your recursion depth can be large, you must either convert to iteration manually or use an explicit stack. You cannot rely on the compiler to optimize it for you.
  • To convert any tail-recursive function to iteration: replace the function with a while True loop, replace parameters with variables, and replace the recursive call with variable updates and continue.
Red flag answer: Claiming that tail recursion is "just recursion at the end of the function" without understanding the TCO optimization, or not knowing which mainstream languages support it.
Follow-ups:
  1. Can you convert the standard (non-tail) factorial function into a tail-recursive version? What technique is required?
  2. If Java does not support TCO, what is the idiomatic way to handle deep recursion in Java?
What interviewers are really testing: Practical engineering judgment about when recursion helps and when it hurts, plus the ability to mechanically transform between the two forms.
Strong Answer:
  • Convert when: (1) recursion depth could exceed the stack limit (e.g., processing a linked list of 1 million nodes), (2) performance is critical and you want to avoid function call overhead, (3) the problem has a natural iterative structure that recursion obscures (e.g., simple linear traversal).
  • Keep recursive when: (1) the problem has natural branching structure (trees, backtracking), (2) the recursive solution is dramatically clearer, (3) depth is bounded (e.g., balanced tree with log(n) depth).
  • Conversion process for preorder tree traversal:
    • Recursive: call preorder(node), process node.val, call preorder(node.left), call preorder(node.right).
    • Iterative: create an explicit stack, push root. While stack is not empty: pop node, process node.val, push node.right first (so left is processed first due to LIFO), then push node.left.
  • The general transformation: every recursive call becomes a push to the stack. Every return becomes a pop. Local variables across recursive calls must be stored in the stack entries (as tuples or custom objects).
  • Harder case — postorder: postorder is trickier to convert because you need to process a node AFTER both children. One approach: use two stacks, or use a single stack with a “last visited” pointer to determine whether to process or descend.
  • For backtracking problems, the iterative version requires you to explicitly manage the “choice/undo” state. Each stack entry stores not just the node but the full backtracking state (current path, remaining choices). This is why recursive backtracking is almost always preferred for readability.
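The two-stack postorder approach mentioned above can be sketched as follows (a minimal sketch; the usual LeetCode TreeNode class is defined here for completeness):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def postorder_two_stacks(root):
    # Stack 1 emits nodes in root-right-left order; reversing that
    # sequence yields left-right-root, i.e. postorder.
    if not root:
        return []
    stack1, stack2 = [root], []
    while stack1:
        node = stack1.pop()
        stack2.append(node)
        if node.left:  stack1.append(node.left)
        if node.right: stack1.append(node.right)
    return [node.val for node in reversed(stack2)]
```

This avoids the visited-flag bookkeeping at the cost of a second stack holding all n nodes.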
Red flag answer: Always converting to iteration regardless of context (“recursion is bad”), or being unable to perform the actual conversion when asked.
Follow-ups:
  1. How would you iteratively implement postorder traversal? Why is it harder than preorder?
  2. Is there a performance difference between recursive and iterative DFS in practice? How would you measure it?
What interviewers are really testing: Clean API design sensibility and understanding of state management in recursion — this pattern appears in nearly every backtracking and tree problem.
Strong Answer:
  • The helper function pattern exists because the public API of a function often has different parameters than what the recursion needs. The user-facing function takes clean input (e.g., a tree root, an array). The recursive helper needs additional state: current index, accumulated path, running sum, visited set, etc.
  • Example: generating all subsets of an array. The public API is subsets(nums) -> List[List[int]]. The recursive helper needs backtrack(index, current_subset) to track where we are and what we have chosen so far. Exposing these internal details in the public API would be confusing and error-prone for callers.
  • Common parameters passed to helpers: index (how far we have progressed), path or current (what we have built so far), result (accumulator for collecting all valid solutions), visited (what we have already processed), and constraint-specific state like remaining_sum.
  • In Python, the helper is typically defined as a nested function (closure) inside the main function. This gives it access to the outer function’s variables (like the result list) without passing them explicitly. In Java/C++, the helper is usually a private method with extra parameters.
  • Design principle: the helper function is an implementation detail. The caller should not know or care that recursion is used internally. This separation of interface from implementation is the same principle behind encapsulation in OOP.
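A minimal sketch of the subsets example with a nested-closure helper (the exact helper signature is illustrative):

```python
def subsets(nums):
    result = []  # Captured by the closure; never passed explicitly

    def backtrack(index, current):
        # The helper carries recursion-only state: where we are (index)
        # and what we have chosen so far (current).
        if index == len(nums):
            result.append(current[:])   # Copy: current is mutated later
            return
        backtrack(index + 1, current)   # Choice 1: exclude nums[index]
        current.append(nums[index])
        backtrack(index + 1, current)   # Choice 2: include nums[index]
        current.pop()                   # Undo the choice

    backtrack(0, [])
    return result
```

The public API stays clean (subsets(nums)), while index, current, and result remain implementation details.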
Red flag answer: Not knowing why the helper is needed (“just make the main function recursive”), or passing the result list as a parameter when a closure would be cleaner (language-dependent — acceptable in Java but unnecessary in Python).
Follow-ups:
  1. In Python, when would you use a closure (nested function) versus passing state as parameters? What are the trade-offs?
  2. How does the helper pattern relate to the Accumulator pattern in functional programming?
What interviewers are really testing: Practical problem-solving under constraints — can you go beyond “just increase the stack size” and articulate multiple strategies with their trade-offs?
Strong Answer:
  • Option 1: Convert to iterative with an explicit stack. This moves the state from the call stack (fixed size, typically 1-8 MB) to the heap (limited only by available memory, often gigabytes). The logic is the same; you just manage the stack yourself. Works for any recursive algorithm.
  • Option 2: Increase the system stack size. In Python: sys.setrecursionlimit(100000). In Java: -Xss4m JVM flag. In C++: OS-level stack size configuration or ulimit -s unlimited on Linux. This is a quick fix but has limits — you cannot increase indefinitely, and it can mask underlying design issues.
  • Option 3: Apply tail recursion optimization manually. If the function is tail-recursive (or can be refactored to be), convert it to a while loop with variable updates. This reduces stack usage from O(n) to O(1). Applicable to linear recursion like factorial or linked list traversal.
  • Option 4: Add memoization. If the recursion has overlapping subproblems (like Fibonacci), memoization reduces the number of unique recursive calls. This does not reduce the maximum stack depth directly, but it can prevent recomputation that leads to deeper-than-necessary recursion paths.
  • Option 5: Convert to bottom-up DP (tabulation). Instead of recursing from the top, build the solution iteratively from the smallest subproblems up. This eliminates recursion entirely and often has better cache locality.
  • Option 6: Use trampolining. Instead of making a direct recursive call, return a thunk (a function that wraps the next call). A trampoline loop keeps calling thunks until a final result is produced. This is a technique from functional programming (used in Clojure, Scala) that provides O(1) stack usage for any recursive algorithm.
  • In practice, options 1, 3, and 5 are the most commonly used in interviews. Options 2 and 6 are good to mention for bonus points.
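A minimal Python sketch of trampolining (names are illustrative; this sketch assumes the final result is never itself callable):

```python
def trampoline(fn, *args):
    # Keep invoking thunks until a non-callable (final) result appears.
    # The loop, not the call stack, drives the "recursion".
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def sum_to(n, acc=0):
    # Instead of recursing directly, return a zero-argument thunk
    # that wraps the next step.
    if n == 0:
        return acc
    return lambda: sum_to(n - 1, acc + n)
```

trampoline(sum_to, 1_000_000) runs in O(1) stack space, far past Python’s default recursion limit, because each thunk returns to the trampoline loop before the next one is called.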
Red flag answer: Only knowing “increase recursion limit” without understanding its limitations, or being unable to describe how to convert recursion to iteration.
Follow-ups:
  1. What is trampolining, and how does it achieve O(1) stack usage for arbitrary recursion?
  2. If you convert a recursive tree traversal to iterative, does the time complexity change?
What interviewers are really testing: Whether you can optimize from O(n) to O(log n) recursion using divide-and-conquer, and whether you handle negative exponents, zero, and integer overflow edge cases.
Strong Answer:
  • Naive approach: multiply x by itself n times. Time: O(n). This is too slow for large n (e.g., n = 2^31 - 1 is over 2 billion multiplications).
  • Optimized approach (fast exponentiation): use the identity x^n = (x^(n/2))^2 for even n, and x^n = x * (x^(n/2))^2 for odd n. This halves the problem size at each step, giving O(log n) time.
  • Implementation:
    def myPow(x, n):
        if n == 0: return 1
        if n < 0: return 1 / myPow(x, -n)
        half = myPow(x, n // 2)
        if n % 2 == 0:
            return half * half
        else:
            return x * half * half
    
  • Critical edge cases: (1) n = 0: any x^0 = 1 (including 0^0, which is defined as 1 by convention in this context). (2) n < 0: compute x^(-n) and take the reciprocal. (3) n = -2^31 (Integer.MIN_VALUE in Java): negating this causes integer overflow because 2^31 is not representable as a 32-bit signed integer. Solution: compute (1/x) * myPow(1/x, -(n+1)), where -(n+1) = 2^31 - 1 is representable, or cast to long before negating. (4) x = 0 and n < 0: division by zero — return infinity or handle as error.
  • Why this matters: fast exponentiation is the foundation of modular exponentiation in cryptography (RSA), matrix exponentiation for Fibonacci in O(log n), and binary method for computing large powers in competitive programming.
  • Space: O(log n) for the recursion stack. Can be converted to iterative with O(1) space using the “exponentiation by squaring” bit manipulation approach.
Red flag answer: Only knowing the O(n) approach, or forgetting to handle negative exponents, or causing integer overflow with n = -2^31.
Follow-ups:
  1. How would you implement this iteratively using bit manipulation on the exponent?
  2. How is fast exponentiation used in RSA encryption? What role does the modular version play?

LeetCode Interview Q&A (Recursion)

These are the LeetCode-style recursion questions you should be able to answer cold. Each has a strong-answer framework, a worked example with full code, three senior follow-ups, common wrong answers, and pointers to related problems.
Strong Answer Framework:
  1. State the recurrence: depth(node) = 1 + max(depth(left), depth(right)), with depth(null) = 0.
  2. Confirm O(n) time (visit each node once) and O(h) space (h is height — O(log n) for balanced, O(n) for skewed).
  3. Mention you can also do BFS level-by-level if recursion depth is a concern.
Real LeetCode Example — Full Walkthrough:
# Recursive (cleanest)
def maxDepth(root):
    if not root:
        return 0
    return 1 + max(maxDepth(root.left), maxDepth(root.right))

# Iterative with stack (level-order BFS variant) -- avoids recursion stack issues
from collections import deque
def maxDepth_iter(root):
    if not root:
        return 0
    queue = deque([(root, 1)])
    max_d = 0
    while queue:
        node, depth = queue.popleft()
        max_d = max(max_d, depth)
        if node.left:  queue.append((node.left,  depth + 1))
        if node.right: queue.append((node.right, depth + 1))
    return max_d
For input [3,9,20,null,null,15,7] the tree is:
    3
   / \
  9  20
     / \
    15  7
Recursive: maxDepth(3) = 1 + max(maxDepth(9), maxDepth(20)) = 1 + max(1, 2) = 3.
Senior Follow-up 1 — “What if the tree has 1 million nodes and is a left-skewed chain?” Recursion depth becomes 1 million, which blows Python’s default 1000-frame limit and causes RecursionError. Either bump the limit with sys.setrecursionlimit(2_000_000) (and increase OS stack size with threading.stack_size) or switch to the iterative BFS version. Most production code uses iteration for exactly this reason.
Senior Follow-up 2 — “How do you compute this without modifying global state in a multi-threaded environment?” The pure recursive version above has zero shared state — it returns a value through the call stack. The iterative version uses a local queue and max_d, also thread-safe. The trap is solutions that use a class-level self.max field; those break under concurrency. Always prefer pure functions for tree DP unless you specifically need closure-captured state.
Senior Follow-up 3 — “Can you compute max depth in O(1) extra space?” Not in general, because you need at least O(h) to track where you are in the tree. Morris traversal (using right-pointer threading) achieves O(1) extra space for traversal, but adapting it to compute depth specifically is awkward. The honest answer is: theoretical interest aside, BFS or DFS with O(h) space is the right answer for any real codebase.
Common Wrong Answers:
  • return maxDepth(root.left) — ignores the right subtree. Returns the depth of just the leftmost spine.
  • return 1 + maxDepth(root.left) + maxDepth(root.right) — adds the two depths instead of taking the max. Returns garbage.
  • if not root: return 1 — off-by-one. Empty tree has depth 0, not 1.
Strong Answer Framework:
  1. Identify what state must be preserved across recursive calls (parameters, local variables, return-value position).
  2. Replace the call stack with an explicit stack of frames — each frame is a tuple/object capturing that state.
  3. Replace each recursive call with a stack.append(new_frame). Replace each return with a stack.pop() and a way to surface the value (often a separate result variable).
  4. For “post-order” recursion (need both children’s results before processing parent), mark frames with a state flag indicating whether they have been expanded yet.
Real LeetCode Example — LC 144 Binary Tree Preorder Traversal:
# Recursive
def preorder(root):
    result = []
    def dfs(node):
        if not node: return
        result.append(node.val)
        dfs(node.left)
        dfs(node.right)
    dfs(root)
    return result

# Iterative with explicit stack
def preorder_iter(root):
    if not root: return []
    result, stack = [], [root]
    while stack:
        node = stack.pop()
        result.append(node.val)
        # Push right BEFORE left so left is processed first (LIFO)
        if node.right: stack.append(node.right)
        if node.left:  stack.append(node.left)
    return result
For post-order (LC 145), it is trickier because you need to process the node AFTER both children. The trick is to push frames with a “visited” flag:
def postorder_iter(root):
    if not root: return []
    result, stack = [], [(root, False)]
    while stack:
        node, visited = stack.pop()
        if visited:
            result.append(node.val)            # Post-order: process now
        else:
            stack.append((node, True))         # Re-push as visited
            if node.right: stack.append((node.right, False))
            if node.left:  stack.append((node.left,  False))
    return result
Senior Follow-up 1 — “Why is post-order iterative harder than pre-order?” Pre-order processes the node BEFORE recursing — it is naturally compatible with stack semantics (push children, pop and process). Post-order processes the node AFTER both children, so you cannot just process on pop. You either need the visited-flag trick above, or use two stacks (push to stack1, pop and push to stack2, then drain stack2 in reverse). Most interviewers prefer the visited-flag approach because it generalizes cleanly.
Senior Follow-up 2 — “When would you NOT convert recursion to iteration?” When the recursion is naturally branching with multiple recursive calls per frame (like backtracking), the iterative version explodes in complexity. You end up reimplementing the call stack with all its bookkeeping (current path, choice index, undo state). Recursive backtracking is almost always cleaner than iterative. Convert to iteration only when stack depth is the bottleneck.
Senior Follow-up 3 — “What is Morris traversal and when would you use it?” Morris traversal achieves O(1) extra space by temporarily modifying the tree — using right-null pointers of leaves to thread back to ancestors. Useful for: (a) memory-constrained embedded systems, (b) interview bonus points. Trade-off: it temporarily mutates the tree (problematic if other threads read concurrently). I would not use it in production unless space is genuinely the bottleneck.
Common Wrong Answers:
  • Pushing left before right in preorder — you process right subtree first, which gives a wrong order.
  • Using a queue instead of a stack — that gives you BFS, not DFS preorder.
  • For post-order, processing on pop without the visited flag — gives the wrong order entirely.
Strong Answer Framework:
  1. Note the naive O(n) approach and why it fails for n up to 2^31.
  2. State the recurrence: x^n = (x^(n/2))^2 if n is even, else x * (x^(n/2))^2.
  3. Handle three edge cases explicitly: n equals 0 (return 1), n is negative (return 1 / pow(x, -n)), n is Integer.MIN_VALUE (negating overflows in 32-bit).
  4. Confirm O(log n) time, O(log n) recursion depth.
Real LeetCode Example — Full Walkthrough:
def myPow(x, n):
    # Edge case 1: zero exponent
    if n == 0:
        return 1.0
    # Edge case 2: negative exponent
    # In Python, integer arithmetic is unbounded, so -n is safe even for INT_MIN
    if n < 0:
        return 1.0 / myPow(x, -n)
    # Recursive case: halve the exponent
    half = myPow(x, n // 2)
    if n % 2 == 0:
        return half * half          # Even: x^n = (x^(n/2))^2
    else:
        return x * half * half      # Odd: x^n = x * (x^(n/2))^2

# Time: O(log n) -- exponent halves each call.
# Space: O(log n) recursion stack.
For myPow(2.0, 10):
  • myPow(2, 0) = 1 (base case).
  • myPow(2, 1) = 2 * myPow(2, 0)^2 = 2 * 1 = 2.
  • myPow(2, 2) = myPow(2, 1)^2 = 2^2 = 4.
  • myPow(2, 5) = 2 * myPow(2, 2)^2 = 2 * 16 = 32.
  • myPow(2, 10) = myPow(2, 5)^2 = 32^2 = 1024. Correct.
Senior Follow-up 1 — “In Java, what happens if n equals Integer.MIN_VALUE (-2^31)? How do you handle it?” -n overflows because 2^31 is not representable as a 32-bit signed int — you get Integer.MIN_VALUE again. Cast to long before negating: long N = n; if (N < 0) { x = 1/x; N = -N; }. Or special-case it: myPow(x, n+1) / x. The interviewer is testing whether you remember signed integer overflow rules.
Senior Follow-up 2 — “How would you do this iteratively with O(1) space?” Iterative exponentiation by squaring uses bits of the exponent:
def myPow_iter(x, n):
    if n < 0: x, n = 1/x, -n
    result = 1.0
    while n > 0:
        if n & 1: result *= x   # Low bit set: this power of x contributes
        x *= x                  # Square the base for the next bit
        n >>= 1                 # Shift to the next bit of the exponent
    return result
This uses O(1) space, runs in O(log n). For interview bonus, mention that this is exactly how pow(base, exp, mod) in cryptography (RSA) is implemented.
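For reference, a sketch of the modular variant (the same squaring loop with a reduction modulo mod after every multiplication, which is the form used in RSA-style modular exponentiation):

```python
def pow_mod(base, exp, mod):
    # Exponentiation by squaring, reducing modulo `mod` at each step
    # so intermediate values never exceed mod^2.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result
```

Python’s built-in pow(base, exp, mod) implements the same idea natively.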
Senior Follow-up 3 — “What if x is 0 and n is negative?” You divide by zero. Most interviewers expect you to either return infinity (Python’s float('inf')) or document it as undefined behavior. LeetCode’s test cases avoid this combination, but mentioning it shows attention to edge cases.
Common Wrong Answers:
  • for i in range(n): result *= x — O(n), times out for n above roughly 10^7.
  • Computing myPow(x, n//2) * myPow(x, n//2) — this duplicates the recursive call, recomputing the same value twice, which makes the runtime O(n), not O(log n). Always assign the half result to a variable first.
  • Forgetting the negative-exponent case entirely.
Further Reading:
  • LC 372 Super Pow — modular exponentiation with the exponent as an array of digits.
  • LC 1922 Count Good Numbers — direct application of fast exponentiation under modulo.
Strong Answer Framework:
  1. Distinguish between (a) the value the function RETURNS to its parent and (b) the global answer it RECORDS. They are different.
  2. Return: best single-branch path sum starting at this node and going DOWN one side.
  3. Record: best path that goes through this node, possibly using BOTH children (left + node + right).
  4. Drop negative subtrees by clamping to 0: left = max(gain(left), 0). A negative contribution would only hurt the sum.
Real LeetCode Example — Full Walkthrough:
def maxPathSum(root):
    self_max = [float('-inf')]                    # Use list for mutable closure
    
    def gain(node):
        if not node:
            return 0
        # Recurse, dropping negative contributions
        left = max(gain(node.left), 0)
        right = max(gain(node.right), 0)
        # RECORD: best path that PASSES THROUGH this node (uses both children)
        self_max[0] = max(self_max[0], node.val + left + right)
        # RETURN: best path that EXTENDS UP from this node (one child only)
        # The parent can attach to at most one of our subtrees
        return node.val + max(left, right)
    
    gain(root)
    return self_max[0]
For [1, 2, 3]:
  • gain(2) = 2 + max(0, 0) = 2. Records self_max = max(-inf, 2 + 0 + 0) = 2.
  • gain(3) = 3. Records self_max = max(2, 3) = 3.
  • gain(1) = 1 + max(2, 3) = 4. Records self_max = max(3, 1 + 2 + 3) = 6.
  • Answer: 6 (path 2 to 1 to 3).
For [-10, 9, 20, null, null, 15, 7]:
  • gain(15) = 15. Record 15.
  • gain(7) = 7. Record 15.
  • gain(20) = 20 + max(15, 7) = 35. Record max(15, 20+15+7) = 42.
  • gain(9) = 9. Record 42.
  • gain(-10) = -10 + max(9, 35) = 25. Record max(42, -10+9+35) = 42.
  • Answer: 42 (path 15 to 20 to 7).
Senior Follow-up 1 — “Why do we clamp negative subtrees to 0 instead of always including them?” Including a negative subtree decreases the path sum. We are free to NOT extend into a subtree — the path can end at the current node. Clamping to 0 means “if the subtree hurts, just do not go there.” This is what makes the algorithm correct in the presence of negative values.
Senior Follow-up 2 — “What if all nodes are negative? Edge case?” Initialize self_max to -infinity, NOT to 0. If all values are negative, the best path is the single largest (least negative) node. Initializing to 0 would incorrectly return 0 for input [-3]. This is the most common single-test-case failure for this problem.
Senior Follow-up 3 — “How would you also return the path itself, not just the sum?” Modify gain to return both the sum and the path-as-list. When updating self_max, also store the corresponding path. Watch out for the cost: building the path means O(h) work per recursion call instead of O(1), bringing total to O(n * h). For balanced trees this is O(n log n); for skewed trees O(n^2). Most interviewers accept this trade-off when they ask for the actual path.
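A sketch of that modification (names are illustrative; TreeNode is defined here for completeness):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def maxPathSum_with_path(root):
    best = {'sum': float('-inf'), 'path': []}

    def gain(node):
        # Returns (best downward sum, corresponding downward path,
        # listed top-down starting at node).
        if not node:
            return 0, []
        left_sum, left_path = gain(node.left)
        right_sum, right_path = gain(node.right)
        # Clamp negative contributions, dropping their paths too
        if left_sum < 0:
            left_sum, left_path = 0, []
        if right_sum < 0:
            right_sum, right_path = 0, []
        # RECORD: through-path = left branch bottom-up + node + right branch
        through = node.val + left_sum + right_sum
        if through > best['sum']:
            best['sum'] = through
            best['path'] = left_path[::-1] + [node.val] + right_path
        # RETURN: node plus the better single branch (a path cannot fork)
        if left_sum >= right_sum:
            return node.val + left_sum, [node.val] + left_path
        return node.val + right_sum, [node.val] + right_path

    gain(root)
    return best['sum'], best['path']
```

The list concatenation is exactly the O(h)-per-call overhead mentioned above.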
Common Wrong Answers:
  • Returning node.val + left + right to the parent. This breaks the invariant — a path cannot fork. The parent can only attach to ONE side.
  • Forgetting the clamp max(gain(node.left), 0) — negative subtrees pull down the answer.
  • Initializing self_max = 0 — fails when all values are negative.