What is Recursion?
Recursion is when a function calls itself to solve smaller instances of the same problem. Every recursive solution has two essential parts: a base case (the simplest version of the problem that you can answer directly) and a recursive case (breaking the problem into a smaller version of itself).

Think of recursion like Russian nesting dolls (matryoshkas). To open the outermost doll, you open it and find a smaller doll inside. You keep opening smaller dolls until you reach the tiniest one that does not open (the base case). Then you work your way back out, reassembling as you go.

The power of recursion is that it lets you write solutions that mirror the structure of the problem. A tree has branches that are themselves trees — so a tree algorithm naturally calls itself on subtrees. A string’s palindrome property depends on whether the inner substring is also a palindrome. When the problem has self-similar substructure, recursion gives you clean, elegant code.

Pattern Recognition Checklist
Use Recursion When
- Tree/graph traversal (DFS)
- Nested or hierarchical structures
- Divide and conquer algorithms
- Backtracking problems
- Mathematical sequences (factorial, fibonacci)
- Subproblems have same structure as original
Don't Use When
- Simple iteration suffices
- Large input (stack overflow risk)
- No natural subproblem structure
- Tail recursion can be converted to loop
Recursion Types
| Type | Description | Example |
|---|---|---|
| Linear | One recursive call | Factorial, sum of array |
| Binary | Two recursive calls | Fibonacci, binary tree |
| Multiple | Many recursive calls | N-ary tree traversal |
| Mutual | Functions call each other | Even/odd check |
| Nested | Recursive call in argument | Ackermann function |
| Tail | Recursive call is last operation | Optimizable to iteration |
The Three Laws of Recursion
- A recursive function must have a base case — Without it, you get infinite recursion and a stack overflow. The base case is your exit door.
- A recursive function must change state toward the base case — Each call must make the problem strictly smaller. If you call `f(n)` and it eventually calls `f(n)` again without reducing n, you have an infinite loop.
- A recursive function must call itself — This is what makes it recursive rather than just a function with an if-statement.
When to Use
Tree/Graph Traversal
Divide and Conquer
Backtracking
Dynamic Programming
The Recursive Template
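A minimal sketch of the template, using sum-of-array (one of the linear examples from the table above) to make the base case, recursive case, and combine step concrete:

```python
def list_sum(nums):
    # Base case: an empty list sums to 0.
    if not nums:
        return 0
    # Recursive case: first element combined with the sum of the rest.
    return nums[0] + list_sum(nums[1:])
```

Every pattern below follows this shape: decide when to stop, shrink the input, and combine the sub-result into the answer for the current level.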
Pattern Variations
1. Simple Recursion (Factorial)
The simplest recursion pattern: one base case, one recursive call, linear call depth. Factorial of n is `n * (n-1) * ... * 1`, which naturally decomposes as `n * factorial(n-1)`.
Complexity: O(n) time, O(n) space (due to n stack frames).
Interview note: Interviewers sometimes ask you to convert this to iteration or tail recursion to demonstrate that you understand the call stack cost.
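A sketch of both versions mentioned above — the recursive definition and the iterative conversion interviewers ask for:

```python
def factorial(n):
    # Base case: 0! = 1 (also covers 1!).
    if n <= 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * factorial(n - 1)

def factorial_iter(n):
    # Same computation as a loop: O(n) time, O(1) space, no stack frames.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```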
2. Tree Recursion (Fibonacci)
Without memoization, Fibonacci is the classic example of exponential blowup: `fibonacci(n)` calls itself twice, creating a binary tree of calls with O(2^n) total work. Adding a memo dictionary collapses this to O(n) by ensuring each subproblem is solved only once.
Without memo: O(2^n) time, O(n) space (call stack depth).
With memo: O(n) time, O(n) space (memo + call stack).
Interview tip: This is the gateway to Dynamic Programming. If an interviewer asks for Fibonacci, follow up by mentioning that the bottom-up DP version (tabulation) avoids recursion entirely and uses O(1) space with two variables.
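The bottom-up, two-variable version mentioned in the tip, as a sketch:

```python
def fib_bottom_up(n):
    # Tabulation with only two variables: O(n) time, O(1) space, no recursion.
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```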
3. Tail Recursion
In tail recursion, the recursive call is the very last thing the function does — there is no work left after the call returns. This means the current stack frame can be reused (tail-call optimization), converting recursion into a loop behind the scenes.

Important caveat: Python does NOT perform tail-call optimization. Neither does Java. Some functional languages (Scheme, Haskell) and C/C++ with compiler flags do. In interviews, mention this awareness — it signals systems-level understanding.

Complexity: O(n) time. O(n) space in Python (no TCO), O(1) space in languages with TCO.

4. Helper Function Pattern
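A minimal sketch of the helper-function pattern, using LC 344-style string reversal from the practice list (the nested `helper` and its pointer parameters are illustrative names):

```python
def reverse_string(s):
    # Public API: clean signature, no recursion details exposed.
    chars = list(s)

    def helper(left, right):
        # Base case: pointers have met or crossed.
        if left >= right:
            return
        # Swap the outer pair, then recurse inward.
        chars[left], chars[right] = chars[right], chars[left]
        helper(left + 1, right - 1)

    helper(0, len(chars) - 1)
    return ''.join(chars)
```

The caller passes a string; the two-pointer state lives entirely inside the helper.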
5. Binary Tree Traversal
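A minimal in-order traversal sketch (the `TreeNode` shape is the usual LeetCode definition):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node, out):
    # In-order: left subtree, then the node itself, then the right subtree.
    if node is None:
        return out
    inorder(node.left, out)
    out.append(node.val)
    inorder(node.right, out)
    return out
```

Swapping the position of the `out.append` line gives preorder (before the left call) or postorder (after the right call).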
Worked LeetCode Problems
These are the recursion problems you will see most often in interviews. Each walkthrough shows the recurrence, a clean implementation, and the complexity analysis you should be ready to defend.

LC 509: Fibonacci Number (Why Naive Recursion is Exponential)
Problem: Given `n`, return the nth Fibonacci number, where F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2).
Why it matters: This is the canonical example of overlapping subproblems. The naive solution is O(2^n) and will time out for n above roughly 35. Memoization collapses it to O(n).
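A sketch of both versions, naive and memoized (function names are illustrative):

```python
def fib_naive(n):
    # O(2^n): recomputes the same subproblems exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    # O(n): each unique subproblem is computed once and cached.
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```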
LC 104: Maximum Depth of Binary Tree
Problem: Return the maximum depth (number of nodes from root to deepest leaf) of a binary tree. Why it matters: The simplest “post-order recursion” template. The answer for the current node depends on answers for the subtrees — exactly the recursive shape.

LC 226: Invert Binary Tree
Problem: Mirror a binary tree — swap every node’s left and right children. Why it matters: The “do something at every node” template. Recursion handles the structure for you; the only logic is the swap at each node. Famous as the question Max Howell (Homebrew creator) failed at Google.

LC 110: Balanced Binary Tree
Problem: Determine if a binary tree is height-balanced (the depths of the two subtrees of every node differ by at most 1). Why it matters: Demonstrates returning compound information from recursion. The naive solution computes height O(n) at every node, giving O(n^2). The trick is to return both height AND balanced-ness in one pass for O(n).

LC 124: Binary Tree Maximum Path Sum (Hard)
Problem: Find the maximum sum path between any two nodes (the path can start and end anywhere, but each node is visited at most once). Why it matters: Classic “global answer + local return” pattern. The function returns the best single-branch path ending at the current node, but updates a global maximum considering paths that go through the node (left + node + right).

Classic Problems
| Problem | Pattern | Key Insight |
|---|---|---|
| Factorial | Simple | One recursive call |
| Fibonacci | Tree + Memo | Multiple calls, cache results |
| Tower of Hanoi | Binary | Move n-1, move 1, move n-1 |
| Tree Traversal | Binary | Left subtree, root, right |
| Permutations | Backtrack | Choose, explore, unchoose |
Common Mistakes
Caveats and Traps
Curated LeetCode Practice List
Grouped by difficulty, with annotations for what each problem teaches.

Easy (Build the muscle memory)
| LC# | Problem | What It Teaches |
|---|---|---|
| 509 | Fibonacci Number | Naive recursion vs memoization — the canonical first example. |
| 104 | Maximum Depth of Binary Tree | Cleanest tree recursion template. |
| 226 | Invert Binary Tree | “Do something at every node” pattern. |
| 100 | Same Tree | Recursion on two trees in lockstep. |
| 101 | Symmetric Tree | Mirrored recursion — pass two pointers, advance them differently. |
| 21 | Merge Two Sorted Lists | Linked-list recursion with two heads. |
| 206 | Reverse Linked List | Recursion vs iteration trade-off. |
| 344 | Reverse String | Two-pointer recursion in O(1) extra (excluding stack). |
Medium (LeetCode bread-and-butter)
| LC# | Problem | What It Teaches |
|---|---|---|
| 50 | Pow(x, n) | Fast exponentiation — O(log n) via x^n equals (x^(n/2))^2. |
| 110 | Balanced Binary Tree | Returning compound info to avoid O(n^2). |
| 543 | Diameter of Binary Tree | Global answer + local return — same shape as LC 124. |
| 105 | Construct Binary Tree from Preorder and Inorder | Recursion with index slicing. |
| 394 | Decode String | Nested recursion driven by parsing. |
| 779 | K-th Symbol in Grammar | Mathematical recursion — no tree involved. |
| 698 | Partition to K Equal Sum Subsets | Recursion with bitmask state. |
| 341 | Flatten Nested List Iterator | Recursive structure flattening. |
Hard (Stretch problems for senior interviews)
| LC# | Problem | What It Teaches |
|---|---|---|
| 124 | Binary Tree Maximum Path Sum | The “return one, record both” pattern. |
| 297 | Serialize and Deserialize Binary Tree | Recursion in both directions (encode and decode). |
| 10 | Regular Expression Matching | Two-branch recursion (handle * as zero matches or one-plus). |
| 44 | Wildcard Matching | Variant of LC 10 — recursion + memo. |
| 1106 | Parsing A Boolean Expression | Nested expression recursion. |
Debugging Recursion
Interview Problems by Company
- Easy
- Medium
- Hard
| Problem | Company | Key Concept |
|---|---|---|
| Fibonacci | All | Binary recursion + memo |
| Reverse String | Amazon | Two-pointer recursion |
| Merge Two Lists | All FAANG | Linked list recursion |
| Maximum Depth of Tree | All | Binary tree recursion |
Interview Tips
How to Explain Your Approach
- “I’ll solve this recursively because it breaks into smaller self-similar subproblems.”
- “My base case is [describe when to stop].”
- “My recursive case [describe how to break down].”
- “I combine results by [describe combination].”
- “Time complexity is O(X) because [reasoning].”
Recursion vs Iteration
| Aspect | Recursion | Iteration |
|---|---|---|
| Readability | Often cleaner for trees | Better for simple loops |
| Memory | Stack frames | Usually O(1) |
| Performance | Function call overhead | Generally faster |
| When to choose | Complex branching | Simple linear process |
Converting to Iteration
Practice Problems
Pow(x, n)
K-th Symbol
Flatten Nested List
Decode String
Practice Roadmap
Day 1: Basic Recursion
- Solve: Fibonacci, Factorial, Sum of Array
- Focus: Base case + recursive case pattern
Day 2: Tree Recursion
- Solve: Max Depth, Same Tree, Invert Tree
- Focus: Binary tree traversal patterns
Interview Questions
Q1: What are the 'three laws of recursion,' and what goes wrong when you violate each one?
- Law 1: Must have a base case. Without it, the function calls itself indefinitely until the call stack overflows. In Python, you get `RecursionError: maximum recursion depth exceeded`. In Java, you get a `StackOverflowError`; in C++, typically a crash. Every recursive function needs at least one condition where it returns without making another recursive call.
- Law 2: Must make progress toward the base case. Even with a base case, if the recursive call does not reduce the problem size, you still get infinite recursion. Example: `factorial(n)` calling `factorial(n)` instead of `factorial(n-1)`. The base case exists but is never reached. A subtler version: a tree DFS that revisits the same node due to a missing visited check — technically the base case exists (null node) but the traversal cycles and never hits it.
- Law 3: Must call itself. This sounds trivial, but the deeper point is that the recursive call must solve a self-similar subproblem. If the subproblem has a different structure than the original, recursion is not the right tool. For example, if you try to recursively solve something that does not decompose into smaller copies of itself, you end up with awkward, non-recursive logic crammed into a recursive wrapper.
- Practical debugging: when a recursive solution fails, check these in order. First verify the base case returns the correct value (trace with the smallest input). Then verify the recursive call reduces the problem (print the parameters at each call). Then verify the combination step is correct (does the function correctly merge sub-results?).
- Can you have too many base cases? What problems can arise from over-specifying base cases?
- What is the difference between “base case not reached” and “base case returns the wrong value”? How would you distinguish these during debugging?
Q2: Explain the call stack in recursion. What happens in memory when factorial(5) executes?
- When `factorial(5)` is called, the runtime allocates a stack frame containing the function’s local variables (here, `n=5`), the return address (where to resume after this call completes), and space for the return value.
- `factorial(5)` calls `factorial(4)`, which pushes a new frame on top. This continues: `factorial(3)`, `factorial(2)`, `factorial(1)`. At this point, there are 5 frames stacked on the call stack.
- `factorial(1)` hits the base case and returns 1. Its frame is popped. Control returns to `factorial(2)`, which computes `2 * 1 = 2` and returns. Its frame is popped. This “unwinding” continues: `factorial(3)` returns 6, `factorial(4)` returns 24, `factorial(5)` returns 120.
- Each frame typically consumes 50-200 bytes (varies by language and number of local variables). Python’s default stack limit of ~1000 frames means roughly 100-200 KB of stack. Java defaults to ~512 KB to 1 MB. C++ depends on the OS and compiler settings.
- Stack overflow occurs when the number of frames exceeds the stack size. `factorial(10000)` in Python will crash unless you raise the limit with `sys.setrecursionlimit`. In Java, you can increase the stack with the `-Xss` flag, but this is a band-aid.
- This is why tail-call optimization (TCO) matters: in languages that support it (Scheme, some C++ compilers with `-O2`), a tail-recursive call reuses the current frame instead of pushing a new one, turning O(n) space into O(1). Python and Java do NOT support TCO.
- Why does Python not implement tail call optimization? (Hint: Guido’s stance on stack traces.)
- If you need to compute `factorial(100000)`, how would you do it without hitting stack limits?
Q3: What is memoization, and how does it transform a naive recursive Fibonacci from exponential to linear time?
- Naive recursive Fibonacci has time complexity O(2^n) because it recomputes the same subproblems exponentially many times. `fib(5)` calls `fib(4)` and `fib(3)`. `fib(4)` calls `fib(3)` and `fib(2)`. `fib(3)` is computed twice, `fib(2)` is computed three times, and so on. The recursion tree has ~2^n nodes.
- Memoization caches the result of each unique subproblem the first time it is computed. Before making a recursive call, check the cache. If the result exists, return it immediately (O(1)). If not, compute it, store it in the cache, and then return it.
- With memoization, each value `fib(k)` is computed exactly once. There are n unique subproblems, and each takes O(1) work (one addition plus a cache lookup). Total time: O(n). Space: O(n) for the cache plus O(n) for the recursion stack.
- Implementation: in Python, use a dictionary or `@functools.lru_cache`. In Java, use a `HashMap<Integer, Long>` or an array. The dictionary approach is cleaner; the array approach is faster (no hashing overhead).
- Memoization is top-down DP: you start from the original problem and recurse down, caching as you go. The equivalent bottom-up approach builds a table from `fib(0)` to `fib(n)` iteratively, avoiding recursion entirely. Both achieve O(n) time, but bottom-up avoids the recursion stack overhead and can be further optimized to O(1) space by keeping only the last two values.
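The `@functools.lru_cache` approach mentioned above, as a sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each unique n is computed once; repeat calls hit the cache in O(1).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```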
- Common wrong answer: using a mutable default argument (`def fib(n, memo={})`) without understanding the gotcha that the dictionary persists between calls.

Follow-ups:
- What is the difference between memoization (top-down) and tabulation (bottom-up)? When would you prefer one over the other?
- Can you optimize the space complexity of Fibonacci to O(1)? How?
Q4: What is tail recursion, and why do some languages optimize it while others do not?
- A function is tail recursive if the recursive call is the very last operation — no computation happens after the call returns. For example, `factorial_tail(n, acc)` returning `factorial_tail(n-1, n*acc)` is tail recursive because there is nothing to do after the recursive call. Standard `factorial(n)` returning `n * factorial(n-1)` is NOT tail recursive because the multiplication happens after the recursive call returns.
- Tail Call Optimization (TCO): when a compiler detects tail recursion, it can reuse the current stack frame for the next call instead of allocating a new one. The recursive call is effectively compiled into a `goto` that jumps back to the top of the function with updated parameters. This turns O(n) stack space into O(1).
- Languages with TCO: Scheme (required by specification), Haskell, Erlang, Scala (with the `@tailrec` annotation), and C/C++ compilers with optimization flags (`-O2`). These languages treat recursion as a first-class control flow mechanism.
- Languages without TCO: Python (Guido van Rossum explicitly rejected it because it destroys meaningful stack traces for debugging), Java (the JVM does not support it), JavaScript (ES6 specifies it, but only Safari implements it).
- Practical implication: in Python or Java, if your recursion depth can be large, you must either convert to iteration manually or use an explicit stack. You cannot rely on the compiler to optimize it for you.
- To convert any tail-recursive function to iteration: replace the function body with a `while True` loop, replace parameters with variables, and replace the recursive call with variable updates and `continue`.
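The tail-recursive factorial and its mechanical loop conversion, as a sketch:

```python
def factorial_tail(n, acc=1):
    # Tail form: the recursive call is the last operation; acc carries the result.
    if n <= 1:
        return acc
    return factorial_tail(n - 1, n * acc)

def factorial_loop(n):
    # Mechanical conversion: parameters become variables,
    # the recursive call becomes variable updates inside a loop.
    acc = 1
    while n > 1:
        n, acc = n - 1, n * acc
    return acc
```

Python still spends O(n) stack frames on `factorial_tail`; only `factorial_loop` runs in O(1) space.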
- Can you convert the standard (non-tail) factorial function into a tail-recursive version? What technique is required?
- If Java does not support TCO, what is the idiomatic way to handle deep recursion in Java?
Q5: When would you convert a recursive solution to an iterative one? Walk me through the conversion process for a concrete example.
- Convert when: (1) recursion depth could exceed the stack limit (e.g., processing a linked list of 1 million nodes), (2) performance is critical and you want to avoid function call overhead, (3) the problem has a natural iterative structure that recursion obscures (e.g., simple linear traversal).
- Keep recursive when: (1) the problem has natural branching structure (trees, backtracking), (2) the recursive solution is dramatically clearer, (3) depth is bounded (e.g., balanced tree with log(n) depth).
- Conversion process for preorder tree traversal:
  - Recursive: call `preorder(node)`, process `node.val`, call `preorder(node.left)`, call `preorder(node.right)`.
  - Iterative: create an explicit stack, push root. While the stack is not empty: pop a node, process `node.val`, push `node.right` first (so left is processed first due to LIFO), then push `node.left`.
- The general transformation: every recursive call becomes a push to the stack. Every return becomes a pop. Local variables across recursive calls must be stored in the stack entries (as tuples or custom objects).
- Harder case — postorder: postorder is trickier to convert because you need to process a node AFTER both children. One approach: use two stacks, or use a single stack with a “last visited” pointer to determine whether to process or descend.
- For backtracking problems, the iterative version requires you to explicitly manage the “choice/undo” state. Each stack entry stores not just the node but the full backtracking state (current path, remaining choices). This is why recursive backtracking is almost always preferred for readability.
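The preorder conversion described in the steps above, as a sketch (with the usual LeetCode `TreeNode` definition):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder_iterative(root):
    # Explicit stack replaces the call stack; state lives on the heap.
    if root is None:
        return []
    result, stack = [], [root]
    while stack:
        node = stack.pop()
        result.append(node.val)
        # Push right first so left is popped (and processed) first.
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
    return result
```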
- How would you iteratively implement postorder traversal? Why is it harder than preorder?
- Is there a performance difference between recursive and iterative DFS in practice? How would you measure it?
Q6: Explain the 'helper function pattern' in recursion. Why do we often need a separate recursive helper instead of making the main function recursive?
- The helper function pattern exists because the public API of a function often has different parameters than what the recursion needs. The user-facing function takes clean input (e.g., a tree root, an array). The recursive helper needs additional state: current index, accumulated path, running sum, visited set, etc.
- Example: generating all subsets of an array. The public API is `subsets(nums) -> List[List[int]]`. The recursive helper needs `backtrack(index, current_subset)` to track where we are and what we have chosen so far. Exposing these internal details in the public API would be confusing and error-prone for callers.
- Common parameters passed to helpers: `index` (how far we have progressed), `path` or `current` (what we have built so far), `result` (accumulator for collecting all valid solutions), `visited` (what we have already processed), and constraint-specific state like `remaining_sum`.
- Design principle: the helper function is an implementation detail. The caller should not know or care that recursion is used internally. This separation of interface from implementation is the same principle behind encapsulation in OOP.
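A sketch of the subsets example with a nested helper, as described above:

```python
def subsets(nums):
    # Public API: clean signature; recursion details stay internal.
    result = []

    def backtrack(index, current):
        # Every prefix of choices is itself a valid subset.
        result.append(current[:])
        for i in range(index, len(nums)):
            current.append(nums[i])   # choose
            backtrack(i + 1, current) # explore
            current.pop()             # unchoose

    backtrack(0, [])
    return result
```

The closure gives `backtrack` access to `result` without passing it as a parameter, which is the Python idiom mentioned above.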
- In Python, when would you use a closure (nested function) versus passing state as parameters? What are the trade-offs?
- How does the helper pattern relate to the Accumulator pattern in functional programming?
Q7: You write a recursive solution that works on small inputs but causes a stack overflow on large inputs. What are your options to fix it without changing the algorithm?
- Option 1: Convert to iterative with an explicit stack. This moves the state from the call stack (fixed size, typically 1-8 MB) to the heap (limited only by available memory, often gigabytes). The logic is the same; you just manage the stack yourself. Works for any recursive algorithm.
- Option 2: Increase the system stack size. In Python: `sys.setrecursionlimit(100000)`. In Java: the `-Xss4m` JVM flag. In C++: OS-level stack size configuration, or `ulimit -s unlimited` on Linux. This is a quick fix but has limits — you cannot increase indefinitely, and it can mask underlying design issues.
- Option 4: Add memoization. If the recursion has overlapping subproblems (like Fibonacci), memoization reduces the number of unique recursive calls. This does not reduce the maximum stack depth directly, but it can prevent recomputation that leads to deeper-than-necessary recursion paths.
- Option 5: Convert to bottom-up DP (tabulation). Instead of recursing from the top, build the solution iteratively from the smallest subproblems up. This eliminates recursion entirely and often has better cache locality.
- Option 6: Use trampolining. Instead of making a direct recursive call, return a thunk (a function that wraps the next call). A trampoline loop keeps calling thunks until a final result is produced. This is a technique from functional programming (used in Clojure, Scala) that provides O(1) stack usage for any recursive algorithm.
- In practice, options 1, 3, and 5 are the most commonly used in interviews. Options 2 and 6 are good to mention for bonus points.
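A minimal trampoline sketch, following the thunk-returning style described in Option 6 (the function names are illustrative):

```python
def trampoline(fn, *args):
    # Keep invoking thunks until a non-callable (final) result appears.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def factorial_thunked(n, acc=1):
    if n <= 1:
        return acc
    # Return a thunk instead of recursing directly: the stack never grows.
    return lambda: factorial_thunked(n - 1, n * acc)
```

Depths that would overflow Python's default recursion limit (around 1000 frames) run fine through the trampoline, because each step uses exactly one frame.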
- What is trampolining, and how does it achieve O(1) stack usage for arbitrary recursion?
- If you convert a recursive tree traversal to iterative, does the time complexity change?
Q8: Solve Pow(x, n) recursively. What is the naive approach, what is the optimized approach, and what edge cases must you handle?
- Naive approach: multiply x by itself n times. Time: O(n). This is too slow for large n (e.g., n = 2^31 - 1 is over 2 billion multiplications).
- Optimized approach (fast exponentiation): use the identity `x^n = (x^(n/2))^2` for even n, and `x^n = x * (x^(n/2))^2` for odd n. This halves the problem size at each step, giving O(log n) time.
- Critical edge cases: (1) n = 0: `x^0 = 1` for any x (including 0^0, which is defined as 1 by convention in this context). (2) n < 0: compute `x^(-n)` and take the reciprocal. (3) n = -2^31 (`Integer.MIN_VALUE` in Java): negating this causes integer overflow because 2^31 is not representable as a 32-bit signed integer. Solution: compute `(1/x) * myPow(x, -(n+1))`, or cast to `long`. (4) x = 0 and n < 0: division by zero — return infinity or handle as an error.
- Why this matters: fast exponentiation is the foundation of modular exponentiation in cryptography (RSA), matrix exponentiation for Fibonacci in O(log n), and the binary method for computing large powers in competitive programming.
- Space: O(log n) for the recursion stack. Can be converted to iterative with O(1) space using the “exponentiation by squaring” bit manipulation approach.
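One possible implementation of the recurrence and edge cases above (Python ints do not overflow, so the Java-specific `Integer.MIN_VALUE` trick is unnecessary here):

```python
def my_pow(x, n):
    # Negative exponent: invert the base and negate n.
    if n < 0:
        x, n = 1 / x, -n

    def fast(x, n):
        if n == 0:
            return 1.0
        half = fast(x, n // 2)  # compute the half-power ONCE
        return half * half if n % 2 == 0 else half * half * x

    return fast(x, n)
```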
- Always test the edge case `n = -2^31`.

Follow-ups:
- How would you implement this iteratively using bit manipulation on the exponent?
- How is fast exponentiation used in RSA encryption? What role does the modular version play?
LeetCode Interview Q&A (Recursion)
These are the LeetCode-style recursion questions you should be able to answer cold. Each has a strong-answer framework, a worked example with full code, three senior follow-ups, common wrong answers, and pointers to related problems.

LC 104: Max Depth of Binary Tree -- Recursive vs Iterative
- State the recurrence: `depth(node) = 1 + max(depth(left), depth(right))`, with `depth(null) = 0`.
- Confirm O(n) time (visit each node once) and O(h) space (h is the height — O(log n) for balanced, O(n) for skewed).
- Mention you can also do BFS level-by-level if recursion depth is a concern.
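A sketch of both versions (recursive and BFS), under the usual LeetCode `TreeNode` definition:

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root):
    # depth(null) = 0; depth(node) = 1 + max of the subtree depths.
    if root is None:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

def max_depth_bfs(root):
    # Level-by-level BFS: O(n) time, no recursion-depth concerns.
    if root is None:
        return 0
    depth, queue = 0, deque([root])
    while queue:
        depth += 1
        for _ in range(len(queue)):
            node = queue.popleft()
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
    return depth
```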
Worked example: for `[3,9,20,null,null,15,7]`, `maxDepth(3) = 1 + max(maxDepth(9), maxDepth(20)) = 1 + max(1, 2) = 3`.

Follow-up notes:
- Very deep (skewed) trees overflow Python’s default recursion limit with a `RecursionError`. Either bump the limit with `sys.setrecursionlimit(2_000_000)` (and increase the OS stack size with `threading.stack_size`) or switch to the iterative BFS version. Most production code uses iteration for exactly this reason.
- The BFS version keeps its state in local variables (`queue` and `max_d`), so it is also thread-safe. The trap is solutions that use a class-level `self.max` field; those break under concurrency. Always prefer pure functions for tree DP unless you specifically need closure-captured state.

Common wrong answers:
- `return maxDepth(root.left)` — ignores the right subtree. Returns the depth of just the leftmost spine.
- `return 1 + maxDepth(root.left) + maxDepth(root.right)` — adds the two depths instead of taking the max. Returns garbage.
- `if not root: return 1` — off-by-one. An empty tree has depth 0, not 1.
- LC 111 Minimum Depth of Binary Tree — same shape but watch the leaf-vs-null distinction.
- LC 543 Diameter of Binary Tree — same recurrence reused with global tracking.
Convert a Recursive Solution to Iterative Using a Stack
- Identify what state must be preserved across recursive calls (parameters, local variables, return-value position).
- Replace the call stack with an explicit `stack` of frames — each frame is a tuple/object capturing that state.
- Replace each recursive call with a `stack.append(new_frame)`. Replace each return with a `stack.pop()` and a way to surface the value (often a separate `result` variable).
- For “post-order” recursion (needing both children’s results before processing the parent), mark frames with a state flag indicating whether they have been expanded yet.
- Pushing left before right in preorder — you process right subtree first, which gives a wrong order.
- Using a queue instead of a stack — that gives you BFS, not DFS preorder.
- For post-order, processing on pop without the visited flag — gives the wrong order entirely.
- LC 94 Binary Tree Inorder Traversal — iterative version uses a “go left, push, pop, process, go right” pattern.
- LC 173 Binary Search Tree Iterator — explicit stack used to amortize O(1) per `next()`.
LC 50: Pow(x, n) -- Recursive with Negative Exponent Handling
- Note the naive O(n) approach and why it fails for n up to 2^31.
- State the recurrence: `x^n = (x^(n/2))^2` if n is even, else `x * (x^(n/2))^2`.
- Handle three edge cases explicitly: n equals 0 (return 1), n is negative (return `1 / pow(x, -n)`), n is `Integer.MIN_VALUE` (negating overflows in 32-bit).
- Confirm O(log n) time, O(log n) recursion depth.
Worked trace of `myPow(2.0, 10)`:
- `myPow(2, 0) = 1`
- `myPow(2, 1) = 2 * myPow(2, 0)^2 = 2 * 1 = 2`
- `myPow(2, 2) = myPow(2, 1)^2 = 4`
- `myPow(2, 5) = 2 * myPow(2, 2)^2 = 2 * 16 = 32`
- `myPow(2, 10) = myPow(2, 5)^2 = 1024`. Correct.
Follow-up notes:
- In Java, negating `n = Integer.MIN_VALUE` overflows because 2^31 is not representable as a 32-bit signed int — `-n` gives you `Integer.MIN_VALUE` again. Cast to `long` before negating: `long N = n; if (N < 0) { x = 1/x; N = -N; }`. Or special-case it: `myPow(x, n+1) / x`. The interviewer is testing whether you remember signed integer overflow rules.
- Be ready to explain how modular exponentiation (`pow(base, exp, mod)`) in cryptography (RSA) is implemented.
- For x = 0 with n < 0, return infinity (Python’s `float('inf')`) or document it as undefined behavior. LeetCode’s test cases avoid this combination, but mentioning it shows attention to edge cases.

Common wrong answers:
- `for i in range(n): result *= x` — O(n), times out for n above roughly 10^7.
- Computing `myPow(x, n//2) * myPow(x, n//2)` — duplicates the recursive call and recomputes the same value twice. This is O(n), not O(log n). Always assign to a variable first.
- Forgetting the negative-exponent case entirely.
- LC 372 Super Pow — modular exponentiation with the exponent as an array of digits.
- LC 1922 Count Good Numbers — direct application of fast exponentiation under modulo.
LC 124: Binary Tree Maximum Path Sum -- Decide What to Return vs What to Record
- Distinguish between (a) the value the function RETURNS to its parent and (b) the global answer it RECORDS. They are different.
- Return: best single-branch path sum starting at this node and going DOWN one side.
- Record: best path that goes through this node, possibly using BOTH children (left + node + right).
- Drop negative subtrees by clamping to 0: `left = max(gain(left), 0)`. A negative contribution would only hurt the sum.
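One common implementation of the return/record split described above (the helper name `gain` matches the worked examples in this section; `TreeNode` is the usual LeetCode definition):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    best = float('-inf')  # NOT 0: all-negative trees need this

    def gain(node):
        nonlocal best
        if node is None:
            return 0
        # Clamp negative subtree contributions to 0.
        left = max(gain(node.left), 0)
        right = max(gain(node.right), 0)
        # RECORD: a path through this node may use both sides.
        best = max(best, node.val + left + right)
        # RETURN: a parent can extend only one side.
        return node.val + max(left, right)

    gain(root)
    return best
```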
Worked example for `[1, 2, 3]`:
- `gain(2) = 2 + max(0, 0) = 2`. Records `self_max = max(-inf, 2 + 0 + 0) = 2`.
- `gain(3) = 3`. Records `self_max = max(2, 3) = 3`.
- `gain(1) = 1 + max(2, 3) = 4`. Records `self_max = max(3, 1 + 2 + 3) = 6`.
- Answer: 6 (path 2 to 1 to 3).
Worked example for `[-10, 9, 20, null, null, 15, 7]`:
- `gain(15) = 15`. Record 15.
- `gain(7) = 7`. Record 15.
- `gain(20) = 20 + max(15, 7) = 35`. Record `max(15, 20+15+7) = 42`.
- `gain(9) = 9`. Record 42.
- `gain(-10) = -10 + max(9, 35) = 25`. Record `max(42, -10+9+35) = 42`.
- Answer: 42 (path 15 to 20 to 7).
Follow-up notes:
- Initialize `self_max` to negative infinity, NOT to 0. If all values are negative, the best path is the single largest (least negative) node. Initializing to 0 would incorrectly return 0 for input `[-3]`. This is the most common single-test-case failure for this problem.
- To recover the actual path, extend `gain` to return both the sum and the path-as-list, and store the corresponding path whenever you update `self_max`. Watch out for the cost: building the path means O(h) work per call instead of O(1), bringing the total to O(n * h) — O(n log n) for balanced trees, O(n^2) for skewed ones. Most interviewers accept this trade-off when they ask for the actual path.
node.val + left + rightto the parent. This breaks the invariant — a path cannot fork. The parent can only attach to ONE side. - Forgetting the clamp
max(gain(node.left), 0)— negative subtrees pull down the answer. - Initializing
self_max = 0— fails when all values are negative.
- LC 543 Diameter of Binary Tree — exact same shape, but tracking longest path (count of edges) instead of sum.
- LC 687 Longest Univalue Path — variant with an extra constraint on values.