Divide and Conquer Pattern

What is Divide and Conquer?

Divide and Conquer splits a problem into smaller subproblems, solves them recursively, and combines results. Unlike DP, subproblems are typically non-overlapping. The analogy is managing a large team project. You do not try to do everything yourself. Instead, you split the work into independent pieces, delegate each piece to a sub-team, and then combine their results into the final deliverable. If each sub-team does the same thing recursively, you have divide and conquer. The three steps are always the same:
  1. Divide: Break the problem into smaller instances of the same problem.
  2. Conquer: Solve each sub-instance recursively (base case handles trivially small instances).
  3. Combine: Merge sub-solutions into the solution for the original problem.
The power comes from the fact that dividing in half and combining in linear time gives O(n log n) total — the recurrence T(n) = 2T(n/2) + O(n) is exactly the Master Theorem case that yields O(n log n). This is why merge sort, quicksort (average case), and many tree algorithms hit that sweet spot.
Key distinction from DP: Divide and conquer subproblems do not overlap — each sub-instance works on a disjoint portion of the data. If subproblems overlap (same state computed multiple times), you need memoization, which makes it DP.
LeetCode Pattern Recognition Cheatsheet — if you see these phrases, divide and conquer is likely the right approach:
  • “merge sort / quick sort” — LC 912 (Sort an Array). The interviewer wants you to implement one from scratch, often with a follow-up about stability or worst-case behavior.
  • “find majority element” — LC 169. Boyer-Moore voting is O(n) and the canonical solution, but D and C is a valid O(n log n) alternative the interviewer may ask about.
  • “max subarray” — LC 53. DP (Kadane) is O(n) and preferred, but interviewers sometimes ask the D and C solution to test the paradigm.
  • “search in 2D matrix” — LC 240. Eliminate a row or column at each step from a corner — O(m + n). See the sketch after this list.
  • “kth largest / smallest” — LC 215, LC 973. Quickselect (D and C variant) gives O(n) average.
  • “Pow(x, n)” or fast exponentiation — LC 50. Halve the exponent each step, O(log n).
  • “closest pair of points” — the geometry classic. O(n log n) via plane splitting and a strip merge.
  • “count inversions” / “count smaller after self” — LC 315, LC 493. Modified merge sort counts cross-half inversions during merge.
If the problem can be solved by computing on each half independently and combining, D and C fits. If results from one half affect what you need to compute for the other, you probably want DP.
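A minimal sketch of the LC 240 corner-elimination search from the list above. Starting at the top-right corner, every comparison discards either a full row or a full column:
def searchMatrix(matrix, target):
    """LC 240: rows and columns are sorted; each step eliminates a row or column."""
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1            # Start at the top-right corner
    while row < len(matrix) and col >= 0:
        if matrix[row][col] == target:
            return True
        if matrix[row][col] > target:
            col -= 1                            # Everything below in this column is even larger: discard the column
        else:
            row += 1                            # Everything left in this row is even smaller: discard the row
    return False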

When to Use

  • Sorting: Merge Sort, Quick Sort
  • Searching: Binary Search, finding the kth element
  • Tree Problems: height, diameter, subtree queries
  • Array Problems: maximum subarray, closest pair

The D and C Template

Every D and C solution follows this exact skeleton. The art is in choosing how to split and how to combine.
def divide_and_conquer(problem):
    # BASE CASE: problem is small enough to solve directly (e.g., 0 or 1 element)
    if is_trivial(problem):
        return solve_directly(problem)
    
    # DIVIDE: split into smaller sub-instances (usually halves)
    subproblems = split(problem)
    
    # CONQUER: recursively solve each sub-instance
    sub_results = [divide_and_conquer(sub) for sub in subproblems]
    
    # COMBINE: merge sub-solutions into the full solution
    # This is often where the real work happens (e.g., merge step in merge sort)
    return merge(sub_results)

Pattern Variations

1. Merge Sort

The quintessential D and C algorithm. Divide the array in half, recursively sort each half, then merge the two sorted halves. The merge step is where comparisons happen — it walks two pointers across the sorted halves and always picks the smaller element. Time: O(n log n) in all cases (worst, average, best). Space: O(n) for the temporary merge array. Stable: equal elements preserve their relative order — this is why merge sort is preferred when stability matters. Interview insight: Merge sort’s merge step is reused in many problems: count inversions, count smaller numbers after self, sort linked lists. Recognizing “modified merge sort” is a valuable skill.
def merge_sort(arr):
    """O(n log n) stable sort"""
    if len(arr) <= 1:        # BASE CASE: a single element is already sorted
        return arr
    
    # DIVIDE: split array in half
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # CONQUER: recursively sort left half
    right = merge_sort(arr[mid:])   # CONQUER: recursively sort right half
    
    # COMBINE: merge two sorted halves
    return merge(left, right)

def merge(left, right):
    """Merge two sorted arrays into one sorted array in O(n)"""
    result = []
    i = j = 0
    
    # Compare heads of both halves; pick the smaller one each time
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:    # <= ensures stability (left element wins ties)
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    
    # Append remaining elements (one of these will be empty)
    result.extend(left[i:])
    result.extend(right[j:])
    return result

2. Quick Sort

Quick sort works differently from merge sort: all the work happens in the divide step (partitioning), not the combine step. After partitioning, the pivot is in its final position, and elements to its left are smaller while elements to its right are larger. Time: O(n log n) average, O(n^2) worst case (when the pivot is consistently the smallest or largest element — e.g., already sorted array with last-element pivot). Space: O(log n) average for the recursion stack. Not stable. Interview tip: If an interviewer asks about worst-case mitigation, mention randomized pivot selection (pick a random index, swap it to high before partitioning). This makes the O(n^2) case astronomically unlikely.
def quick_sort(arr, low, high):
    """O(n log n) average, in-place sort"""
    if low < high:
        # DIVIDE: partition around a pivot; pivot lands in its final position
        pivot_idx = partition(arr, low, high)
        
        # CONQUER: recursively sort elements before and after pivot
        quick_sort(arr, low, pivot_idx - 1)
        quick_sort(arr, pivot_idx + 1, high)

def partition(arr, low, high):
    """Lomuto partition: use arr[high] as pivot, rearrange so
    elements < pivot are on the left, elements >= pivot are on the right."""
    pivot = arr[high]
    i = low - 1              # i marks the boundary of "elements < pivot"
    
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap smaller element into left region
    
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Place pivot in final position
    return i + 1             # Return pivot's final index
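A minimal sketch of the randomized-pivot mitigation mentioned above, reusing the partition function: pick a random index and swap it into the pivot slot before partitioning.
import random

def randomized_quick_sort(arr, low, high):
    """Same as quick_sort, but a random element becomes the pivot first."""
    if low < high:
        rand_idx = random.randint(low, high)
        arr[rand_idx], arr[high] = arr[high], arr[rand_idx]  # Random element becomes the pivot
        pivot_idx = partition(arr, low, high)                # Reuse the Lomuto partition above
        randomized_quick_sort(arr, low, pivot_idx - 1)
        randomized_quick_sort(arr, pivot_idx + 1, high)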

3. Maximum Subarray (Kadane Alternative)

This D and C approach splits the array in half and considers three cases: the maximum subarray is entirely in the left half, entirely in the right half, or it crosses the midpoint. The crossing case is handled by extending greedily from the midpoint in both directions. Time: O(n log n) — T(n) = 2T(n/2) + O(n). This is slower than Kadane’s O(n) algorithm, but interviewers sometimes specifically ask for the D and C approach to test your understanding of the paradigm. When the D and C version matters: The D and C structure generalizes to problems like “count inversions” and “count range sum” where Kadane’s greedy approach does not apply.
def max_subarray(nums, left, right):
    """Find max subarray sum using D and C"""
    if left == right:              # BASE CASE: single element
        return nums[left]
    
    mid = (left + right) // 2
    
    # CASE 1: max subarray entirely in the left half
    left_max = max_subarray(nums, left, mid)
    # CASE 2: max subarray entirely in the right half
    right_max = max_subarray(nums, mid + 1, right)
    # CASE 3: max subarray crosses the midpoint
    cross_max = max_crossing_sum(nums, left, mid, right)
    
    # COMBINE: answer is the best of all three cases
    return max(left_max, right_max, cross_max)

def max_crossing_sum(nums, left, mid, right):
    """Find max sum of a subarray that includes nums[mid] and nums[mid+1]"""
    # Extend leftward from mid -- find the best sum ending at mid
    left_sum = float('-inf')
    total = 0
    for i in range(mid, left - 1, -1):  # Walk backwards from mid
        total += nums[i]
        left_sum = max(left_sum, total)
    
    # Extend rightward from mid+1 -- find the best sum starting at mid+1
    right_sum = float('-inf')
    total = 0
    for i in range(mid + 1, right + 1):  # Walk forwards from mid+1
        total += nums[i]
        right_sum = max(right_sum, total)
    
    return left_sum + right_sum  # Combined crossing sum

4. Binary Tree Maximum Depth

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_depth(root):
    """D and C on tree structure"""
    if not root:
        return 0
    
    # Divide and Conquer
    left_depth = max_depth(root.left)
    right_depth = max_depth(root.right)
    
    # Combine
    return 1 + max(left_depth, right_depth)
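The same divide-combine shape handles the other tree queries listed under When to Use. A sketch for diameter (LC 543), where the combine step both updates a running best and returns the subtree height:
def diameter_of_binary_tree(root):
    """LC 543: longest path between any two nodes, measured in edges."""
    best = 0

    def height(node):
        nonlocal best
        if not node:
            return 0
        left_h = height(node.left)              # Conquer: left subtree height
        right_h = height(node.right)            # Conquer: right subtree height
        best = max(best, left_h + right_h)      # Combine: best path passing through this node
        return 1 + max(left_h, right_h)         # Report this subtree's height upward

    height(root)
    return best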

Classic Problems

Problem | Pattern | Key Insight | Time
Merge Sort | Classic D and C | Divide array, merge sorted halves | O(n log n)
Quick Sort | In-place D and C | Partition around pivot | O(n log n) avg
Max Subarray | Array D and C | Handle crossing subarray | O(n log n)
Closest Pair | 2D D and C | Divide plane, check strip near boundary | O(n log n)
Binary Search | Simple D and C | Eliminate half each time | O(log n)
Count Inversions | Modified merge sort | Count during merge step | O(n log n)
Kth Largest (Quickselect) | Partial quick sort | Only recurse on the side containing k | O(n) avg

Worked LeetCode Problems

These five problems cover the divide-and-conquer questions you should expect in interviews.

LC 912: Sort an Array (Merge Sort Implementation)

Problem: Sort an array of integers in ascending order. Implement your own (the interviewer wants merge sort or quicksort, not the language’s built-in). Why it matters: This is the canonical “implement merge sort from scratch” problem. Master this and you can adapt it for count-inversions, sort linked list, etc.
def sortArray(nums):
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = sortArray(nums[:mid])              # Recursively sort left half
    right = sortArray(nums[mid:])             # Recursively sort right half
    return merge(left, right)                 # Combine

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:               # <= keeps the merge stable (left wins ties)
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
# Time: O(n log n) all cases. Space: O(n) for merge buffers + O(log n) recursion.

LC 215: Kth Largest Element (Quickselect)

Problem: Return the kth largest element in an unsorted array. Average O(n). Why it matters: The classic “do not need full sort” example. Quickselect uses partitioning but recurses into ONE side only — the geometric series n + n/2 + n/4 + ... converges to 2n.
import random

def findKthLargest(nums, k):
    # Convert to "kth smallest from the left" = (len(nums) - k)th index in sorted order
    target = len(nums) - k
    
    def partition(l, r):
        # Randomized pivot to avoid O(n^2) on sorted input
        pivot_idx = random.randint(l, r)
        nums[pivot_idx], nums[r] = nums[r], nums[pivot_idx]
        pivot = nums[r]
        store = l                                    # Boundary of "less than pivot" region
        for i in range(l, r):
            if nums[i] < pivot:
                nums[i], nums[store] = nums[store], nums[i]
                store += 1
        nums[store], nums[r] = nums[r], nums[store]  # Place pivot in its final position
        return store
    
    l, r = 0, len(nums) - 1
    while True:                                      # Iterative to save stack
        p = partition(l, r)
        if p == target:
            return nums[p]
        elif p < target:
            l = p + 1
        else:
            r = p - 1
# Time: O(n) average, O(n^2) worst case. Space: O(1) iterative version.

LC 50: Pow(x, n)

Problem: Implement pow(x, n) for floating x and integer n. Why it matters: The “halve the exponent each call” trick is divide-and-conquer at its purest — T(n) = T(n/2) + O(1) = O(log n).
def myPow(x, n):
    if n == 0: return 1.0
    if n < 0: return 1.0 / myPow(x, -n)
    half = myPow(x, n // 2)
    return half * half if n % 2 == 0 else x * half * half
# Time: O(log n). Space: O(log n) recursion.
Critical edge case: In Java/C++, n = Integer.MIN_VALUE (-2^31) cannot be negated as a 32-bit int because 2^31 is not representable. Cast to long first.

LC 169: Majority Element (Divide-and-Conquer)

Problem: Find the element that appears more than n/2 times. Why it matters: Boyer-Moore voting is the O(n) one-pass solution and the expected answer, but some interviewers ask for the D and C version to test the paradigm. Both should be in your toolbox.
# Approach 1: Boyer-Moore voting (O(n) time, O(1) space) -- preferred for production
def majorityElement_bm(nums):
    count = 0
    candidate = None
    for x in nums:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    return candidate

# Approach 2: Divide and Conquer (O(n log n) time)
def majorityElement_dc(nums):
    def helper(l, r):
        if l == r:
            return nums[l]
        mid = (l + r) // 2
        left_maj = helper(l, mid)
        right_maj = helper(mid + 1, r)
        # If both halves agree, we are done
        if left_maj == right_maj:
            return left_maj
        # Otherwise, the global majority is whichever appears more in [l, r]
        left_count = sum(1 for i in range(l, r + 1) if nums[i] == left_maj)
        right_count = sum(1 for i in range(l, r + 1) if nums[i] == right_maj)
        return left_maj if left_count > right_count else right_maj
    return helper(0, len(nums) - 1)

LC 53: Max Subarray (Divide-and-Conquer Variant)

Problem: Find the contiguous subarray with the largest sum. Why it matters: Kadane’s algorithm is O(n) and the gold standard. The D and C version is what interviewers ask when they specifically want to test the paradigm or set up a follow-up like “now adapt this to count inversions.”
def maxSubArray(nums):
    def helper(l, r):
        if l == r:
            return nums[l]
        mid = (l + r) // 2
        # Three cases: entirely left, entirely right, or crosses midpoint
        left_max = helper(l, mid)
        right_max = helper(mid + 1, r)
        cross_max = cross(l, mid, r)
        return max(left_max, right_max, cross_max)
    
    def cross(l, mid, r):
        # Best sum ENDING at mid (extending leftward)
        s, best_left = 0, float('-inf')
        for i in range(mid, l - 1, -1):
            s += nums[i]
            best_left = max(best_left, s)
        # Best sum STARTING at mid+1 (extending rightward)
        s, best_right = 0, float('-inf')
        for i in range(mid + 1, r + 1):
            s += nums[i]
            best_right = max(best_right, s)
        return best_left + best_right
    
    return helper(0, len(nums) - 1)
# Time: T(n) = 2T(n/2) + O(n) = O(n log n). Space: O(log n).
Honest take: Always mention to the interviewer that Kadane’s is O(n) and is what you would actually ship. The D and C version exists to teach the paradigm and to handle generalizations like LC 327 (Count Range Sum) where Kadane’s does not apply.

Common Mistakes

Avoid These Pitfalls:
  1. Forgetting the base case: Without it, recursion never terminates. For array problems, the base case is typically when the subarray has 0 or 1 elements.
  2. Off-by-one in the divide step: When splitting [left, right] at mid, make sure sub-calls use [left, mid] and [mid+1, right] (not [left, mid-1] and [mid, right]), or you risk infinite recursion when left == mid.
  3. Expensive combine step: If your merge step is O(n^2) instead of O(n), the total becomes O(n^2 log n) — worse than just doing it directly. Aim for a linear combine step.
  4. Not recognizing the pattern: If an interviewer says “can you do it in O(n log n)?” and the data is unsorted, think D and C. The n log n hint is the Master Theorem signature.
  5. Confusing D and C with DP: If your subproblems share state or overlap, you need memoization (DP). D and C subproblems must be independent.

Caveats and Traps

LeetCode Divide-and-Conquer Traps That Cost Real Submissions:
  1. Merge sort: merge step dominates — O(n log n) overall but O(n) extra space. Many candidates claim merge sort is O(1) space; it is not. The merge buffer is O(n). For in-place stable sort, the constant factors get ugly. If asked about space, the honest answer is O(n).
  2. Quicksort worst case is O(n^2) on already-sorted or reverse-sorted input when using a fixed pivot (last or first element). The standard fix is randomized pivot selection: pick a random index, swap to the partition position, then partition. This makes O(n^2) astronomically unlikely. If you do not mention this in an interview, expect it as a follow-up.
  3. Quickselect average O(n), worst O(n^2). Same pivot pathology as quicksort. Randomization handles it in expectation. For guaranteed O(n) worst case, the median-of-medians algorithm exists but has a 5x constant factor and is rarely used outside academic contexts. State this trade-off explicitly.
  4. Off-by-one in the divide step. Splitting [l, r] at mid = (l + r) // 2 and recursing on [l, mid] and [mid + 1, r] is correct. Recursing on [l, mid - 1] and [mid, r] causes infinite recursion when l == mid. Recursing on [l, mid] and [mid, r] overlaps and double-counts. Memorize the canonical form.
  5. Confusing D and C with DP. If your subproblems overlap (same state computed twice), it is DP — add memoization. If they are disjoint (each works on a separate slice of data), it is D and C — no memo needed. Adding a memo to D and C wastes memory; missing one in DP causes exponential blowup.
  6. Stack depth on huge inputs. A 1-million-element merge sort recurses about 20 levels deep — safe in any language. But poorly-balanced recursion (skewed quicksort due to bad pivot choice on adversarial input) can recurse n times — blows Python’s default 1000-frame limit. Always discuss randomized pivot or Introsort as a defense.
Solutions and Patterns That Avoid These Traps:
  1. Randomized pivot template (defends against O(n^2) worst case):
    import random
    pivot_idx = random.randint(l, r)
    nums[pivot_idx], nums[r] = nums[r], nums[pivot_idx]
    
  2. Introsort hybrid (used by C++'s std::sort, Rust's slice::sort_unstable, Java's Arrays.sort for primitives): start with quicksort, monitor recursion depth, fall back to heapsort if depth exceeds 2 * log2(n). Guarantees O(n log n) worst case while keeping quicksort's cache-friendly speed in the common case. See the sketch after this list.
  3. The canonical divide step:
    mid = (l + r) // 2
    recurse(l, mid)
    recurse(mid + 1, r)
    
    Memorize this. Any other split causes off-by-one or overlap bugs.
  4. The “T(n) = aT(n/b) + O(n^d)” Master Theorem cheat:
    • If d < log_b(a): T(n) is O(n^(log_b a)). Example: Karatsuba multiplication, T(n) = 3T(n/2) + O(n) = O(n^(log_2 3)) ≈ O(n^1.58).
    • If d = log_b(a): T(n) is O(n^d * log n). Example: merge sort, T(n) = 2T(n/2) + O(n) = O(n log n).
    • If d > log_b(a): T(n) is O(n^d). Example: T(n) = 2T(n/2) + O(n^2) = O(n^2).
  5. Quickselect pattern for kth-element problems: partition, then recurse only into the side containing k. The geometric series gives O(n) average. For LC 215, LC 973 (k closest), LC 347 (top k frequent — though heap is more common).
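A toy sketch of the Introsort idea from item 2 above (not production code): randomized quicksort with a depth budget that falls back to a heap-based sort. The heapq fallback copies the slice for brevity; real introsorts heapify in place.
import heapq
import math
import random

def introsort(nums):
    """Toy introsort: randomized quicksort with a depth budget, heap fallback."""
    def heap_fallback(l, r):
        # Budget exhausted: sort nums[l:r+1] with a heap, guaranteed O(k log k)
        heap = nums[l:r + 1]
        heapq.heapify(heap)
        for i in range(l, r + 1):
            nums[i] = heapq.heappop(heap)

    def sort(l, r, depth):
        if l >= r:
            return
        if depth == 0:
            heap_fallback(l, r)
            return
        pi = random.randint(l, r)                    # Randomized Lomuto partition
        nums[pi], nums[r] = nums[r], nums[pi]
        pivot, store = nums[r], l
        for i in range(l, r):
            if nums[i] < pivot:
                nums[i], nums[store] = nums[store], nums[i]
                store += 1
        nums[store], nums[r] = nums[r], nums[store]
        sort(l, store - 1, depth - 1)
        sort(store + 1, r, depth - 1)

    if nums:
        sort(0, len(nums) - 1, 2 * int(math.log2(len(nums))) + 1)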

Curated LeetCode Practice List

Grouped by difficulty, with annotations.

Easy (Warm-up)

LC# | Problem | What It Teaches
169 | Majority Element | D and C variant; Boyer-Moore is the production answer.
53 | Maximum Subarray | Kadane vs D and C trade-off.
108 | Convert Sorted Array to BST | Pick midpoint as root; recurse on halves.

Medium (LeetCode bread-and-butter)

LC# | Problem | What It Teaches
912 | Sort an Array | Implement merge sort from scratch.
215 | Kth Largest Element in an Array | Quickselect; geometric series gives O(n) avg.
50 | Pow(x, n) | Halve the exponent; O(log n).
240 | Search a 2D Matrix II | Eliminate row/col from corner; O(m + n).
973 | K Closest Points to Origin | Quickselect on distance squared.
241 | Different Ways to Add Parentheses | Split at each operator; combine results.
427 | Construct Quad Tree | Recursive grid splitting into 4 quadrants.

Hard (Stretch)

LC# | Problem | What It Teaches
23 | Merge k Sorted Lists | Pairwise merge gives O(N log k).
315 | Count of Smaller Numbers After Self | Modified merge sort counts inversions.
493 | Reverse Pairs | Variant: count pairs where i < j and nums[i] > 2*nums[j].
327 | Count of Range Sum | Merge sort on prefix sums + range counting.
4 | Median of Two Sorted Arrays | Binary search on split point; O(log min(m, n)).
218 | The Skyline Problem | Divide buildings, merge skylines.

Practice Problems

  • Count Inversions: modified merge sort
  • Median of Two Arrays: binary search approach
  • Kth Largest: quickselect
  • Search 2D Matrix II: eliminate rows/columns

Interview Questions

1. Explain the three steps of Divide and Conquer. What distinguishes it from Dynamic Programming?

What interviewers are really testing: Whether you have a crisp mental model and can articulate the boundary between D&C and DP. Many candidates blur the two, which signals shallow understanding of both paradigms. Strong Answer:
  • Every D&C algorithm follows three steps: Divide (split the problem into smaller instances of the same problem), Conquer (solve each sub-instance recursively, with a base case for trivial inputs), and Combine (merge sub-solutions into the answer for the original problem).
  • The key distinction from DP: D&C subproblems do not overlap. Each sub-instance works on a disjoint portion of the data. Merge sort splits the array in half — left and right halves share no elements. DP subproblems do overlap — computing Fibonacci(5) requires Fibonacci(4) and Fibonacci(3), and Fibonacci(4) also requires Fibonacci(3). DP uses memoization to avoid recomputation; D&C does not need it because no recomputation occurs.
  • If you write a D&C solution and find yourself solving the same subproblem multiple times, you either need to add memoization (turning it into DP) or rethink the divide strategy to make subproblems disjoint.
  • The Master Theorem gives you the complexity of most D&C recurrences: T(n) = aT(n/b) + O(n^d). For merge sort, a=2, b=2, d=1, giving O(n log n). Knowing this lets you predict the complexity of a D&C approach before implementing it.
Red flag answer: “D&C and DP are basically the same thing — both solve subproblems.” This misses the critical overlap distinction. Another red flag is not knowing the Master Theorem or being unable to state the recurrence for a simple D&C algorithm. Follow-ups:
  1. Can you give an example of a problem where a D&C approach leads to overlapping subproblems, so you must switch to DP?
  2. What is the Master Theorem? State the three cases and give an example for each.
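A minimal illustration for follow-up 1: naive Fibonacci has the D&C shape, but its subproblems overlap, so it is exponential until memoization turns it into DP.
from functools import lru_cache

def fib_naive(n):
    # D&C shape, but subproblems overlap: fib_naive(n-2) is recomputed
    # inside fib_naive(n-1), so this runs in exponential time
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion plus memoization = DP: each state computed once, O(n)
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)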

2. Why is Merge Sort O(n log n) in all cases, while Quick Sort can degrade to O(n^2)?

What interviewers are really testing: Whether you understand how the divide step’s balance affects complexity and can reason about worst-case inputs. This is the most important comparison in the sorting/D&C domain. Strong Answer:
  • Merge Sort always divides the array exactly in half, regardless of the data. This guarantees log n levels of recursion, and each level does O(n) merge work. Total: O(n log n) always — best, average, and worst case.
  • Quick Sort’s divide step (partitioning) depends on the pivot choice. If the pivot happens to be the median, we get two equal halves and O(n log n). But if the pivot is consistently the smallest or largest element, one partition has n-1 elements and the other has 0. This gives n levels of recursion with O(n) work each: O(n^2).
  • The worst case for Quick Sort with last-element pivot is an already-sorted array: each partition picks the last (largest) element, the left partition has n-1 elements, and the right partition is empty. Recursion depth becomes n instead of log n.
  • Mitigation: randomized pivot selection (pick a random index, swap to the pivot position) makes the expected number of comparisons about 1.39n log n with astronomically low probability of hitting O(n^2). Alternatively, “median of three” (pick median of first, middle, last) avoids the sorted-input pathology.
Red flag answer: “Quick Sort is O(n log n), so it is always fast.” Ignoring the worst case shows a lack of rigor. Another red flag is not knowing what input triggers worst-case Quick Sort. Follow-ups:
  1. Introsort monitors recursion depth and switches to Heap Sort when it exceeds 2*log(n). Why is this the threshold?
  2. Quick Sort is usually faster than Merge Sort in practice despite the same O(n log n) average. Why? What hardware factor explains this?

3. Solve “Maximum Subarray Sum” using Divide and Conquer. Why is this approach O(n log n) instead of Kadane’s O(n)?

What interviewers are really testing: Whether you can implement the crossing-subarray logic correctly and understand the recurrence that produces O(n log n). Bonus: they want to hear you acknowledge that Kadane is better for this specific problem. Strong Answer:
  • Split the array at the midpoint. The maximum subarray is in one of three places: entirely in the left half, entirely in the right half, or crossing the midpoint. Recursively solve left and right. The crossing case extends greedily from the midpoint leftward and rightward, taking the best sum in each direction.
  • The crossing case is O(n) because we scan from mid to left and mid+1 to right. Combined with two recursive calls on half-size arrays, the recurrence is T(n) = 2T(n/2) + O(n), which by the Master Theorem is O(n log n).
  • Kadane’s algorithm (single pass, track running max ending at each position) is O(n) because it avoids the divide step entirely. For the maximum subarray problem specifically, Kadane is strictly better.
  • So why learn the D&C approach? Because the D&C structure generalizes to problems where Kadane does not apply: “Count Inversions” (count pairs where a later element is smaller), “Count Range Sum” (count subarrays with sum in a range), and “Closest Pair of Points.” These problems need the merge step to count cross-boundary interactions.
Red flag answer: “Kadane is better so I would never use D&C for this.” While technically correct for this problem, it misses the point — the interviewer is testing D&C skill, not asking for the optimal solution. Refusing to engage with the paradigm is a red flag. Follow-ups:
  1. Walk me through the crossing-subarray calculation for [-2, 1, -3, 4, -1, 2, 1, -5, 4] when the midpoint splits between -1 and 2.
  2. How would you modify this D&C approach to count the number of subarrays with sum in a given range [lower, upper]?
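For follow-up 2, a sketch of the LC 327 adaptation: merge sort over prefix sums, counting cross-half pairs with two sliding pointers before each merge. This sketch cheats by calling sorted() instead of writing the merge, which costs an extra log factor (O(n log^2 n)); a hand-written merge restores O(n log n).
def countRangeSum(nums, lower, upper):
    """LC 327 sketch: count subarrays whose sum lies in [lower, upper]."""
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)           # prefix[j] - prefix[i] = sum(nums[i:j])

    def sort_count(lo, hi):                     # Half-open range over prefix
        if hi - lo <= 1:
            return 0
        mid = (lo + hi) // 2
        count = sort_count(lo, mid) + sort_count(mid, hi)
        j = k = mid
        for i in range(lo, mid):                # Both halves are sorted here
            while j < hi and prefix[j] - prefix[i] < lower:
                j += 1                          # First right index with sum >= lower
            while k < hi and prefix[k] - prefix[i] <= upper:
                k += 1                          # One past the last right index with sum <= upper
            count += k - j
        prefix[lo:hi] = sorted(prefix[lo:hi])   # Lazy "merge" -- costs the extra log factor
        return count

    return sort_count(0, len(prefix))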

4. How does “Count Inversions” use a modified Merge Sort? Walk through the logic.

What interviewers are really testing: Whether you can augment a standard D&C algorithm with additional bookkeeping. This is the bridge between textbook D&C and real interview problems. Strong Answer:
  • An inversion is a pair (i, j) where i < j but nums[i] > nums[j]. The brute force checks all O(n^2) pairs. Modified merge sort counts inversions during the merge step, achieving O(n log n).
  • During the merge of two sorted halves, when we pick an element from the right half over the left half, it means the right element is smaller than all remaining elements in the left half. So we add len(left) - i to the inversion count, where i is the current pointer in the left half.
  • Example: merging [1, 3, 5] and [2, 4, 6]. When we pick 2 from the right, elements 3 and 5 in the left are both greater than 2 and appear before it — that is 2 inversions. When we pick 4, only 5 remains in the left half — 1 inversion. Total inversions from this merge: 3.
  • The total inversion count is the sum of inversions counted at every merge step across all recursion levels. Each level does O(n) total work, and there are O(log n) levels, giving O(n log n) overall.
Red flag answer: “Check every pair and count.” That is O(n^2). A subtler red flag is implementing the merge correctly but miscounting inversions — for example, counting 1 inversion per right-pick instead of len(left) - i. Follow-ups:
  1. How is “Count of Smaller Numbers After Self” related to counting inversions? Can you use the same merge sort technique?
  2. Could you use a Binary Indexed Tree (BIT) instead of merge sort to count inversions? What is the trade-off?

5. Explain Quick Select (finding the kth element). Why is its average time O(n) while Quick Sort is O(n log n)?

What interviewers are really testing: Whether you understand that you can discard half the work at each step by only recursing on one side. This is the core insight that separates selection from sorting. Strong Answer:
  • Quick Select uses Quick Sort’s partition step, but only recurses on the side containing the kth element. After partitioning, the pivot is at position p. If p == k, return the pivot. If k < p, recurse on the left partition. If k > p, recurse on the right partition.
  • Quick Sort recurses on BOTH halves: T(n) = 2T(n/2) + O(n) = O(n log n). Quick Select recurses on ONE half: T(n) = T(n/2) + O(n). The geometric series n + n/2 + n/4 + ... = 2n = O(n).
  • Worst case is still O(n^2) — the same bad-pivot problem as Quick Sort. With randomized pivot, the expected time is O(n) with very high probability.
  • The “median of medians” algorithm guarantees O(n) worst case by choosing a pivot that is guaranteed to eliminate at least 30% of elements. However, the constant factor is about 5x larger, so randomized Quick Select is almost always used in practice.
Red flag answer: “Quick Select sorts the array and picks the kth element.” That is O(n log n) and defeats the purpose. Another red flag is not understanding the geometric series that proves O(n) average. Follow-ups:
  1. In the average case, Quick Select does about 3.4n comparisons. Where does the constant 3.4 come from?
  2. How does C++'s std::nth_element relate to Quick Select? What guarantees does it provide?

6. When an interviewer hints “can you solve this in O(n log n)?” and the input is unsorted, what should your first thought be?

What interviewers are really testing: Pattern recognition and the meta-skill of mapping complexity hints to algorithm paradigms. This is an interviewing-about-interviewing question. Strong Answer:
  • O(n log n) on unsorted data is the Master Theorem signature for D&C: T(n) = 2T(n/2) + O(n). My first thought is “can I split this problem in half, solve each half, and combine in O(n)?”
  • The second thought: sorting itself is O(n log n), so maybe sorting is the preprocessing step and the actual algorithm is a linear scan. “Sort first, then greedy” solves a huge class of problems (merge intervals, meeting rooms, task scheduling, two-sum variants on sorted arrays).
  • Third thought: if the problem involves searching or counting across a data structure, an O(n log n) approach might mean building a balanced BST, a segment tree, or a BIT and querying n times at O(log n) each.
  • The key meta-skill: when the interviewer gives you a target complexity, work backward. O(n log n) means either D&C or sort-then-scan. O(n) means hash map, two pointers, or greedy. O(log n) means binary search. Matching the hint to the paradigm narrows your search space dramatically.
Red flag answer: “I would try every approach until one matches.” This shows no strategic thinking. The ability to map target complexity to algorithm families is what separates prepared candidates from unprepared ones. Follow-ups:
  1. What if the hint is O(n * sqrt(n))? What algorithm family does that suggest?
  2. Name three different problems that all have O(n log n) solutions using fundamentally different techniques.

7. Binary search is technically Divide and Conquer. Explain why, and what makes it special compared to other D&C algorithms.

What interviewers are really testing: Whether you see the D&C structure in binary search and understand what makes it degenerate: the combine step is trivial and only one branch is explored. Strong Answer:
  • Binary search is D&C with two simplifications. Divide: compare the target with the middle element. Conquer: recurse on the left or right half (only one, not both). Combine: trivial — the answer from the sub-instance is the answer for the full instance; no merging needed.
  • The recurrence is T(n) = T(n/2) + O(1), giving O(log n). Compare this to merge sort’s T(n) = 2T(n/2) + O(n) = O(n log n). Binary search only recurses on one half and does O(1) work per level — two multiplicative savings.
  • What makes binary search special: it requires the input to satisfy a monotonic property (sorted, or more generally, a predicate that partitions the array into true/false regions). This precondition is what allows us to discard half the search space at each step — we know the answer cannot be in the discarded half.
  • The D&C lens also explains why binary search generalizes beyond sorted arrays: any problem where you can “divide and eliminate” (not “divide and conquer both”) fits this pattern — bisecting on answer (binary search on answer space), finding peak element, searching in rotated sorted array.
Red flag answer: “Binary search is not D&C — it is just a loop.” While binary search is often implemented iteratively, its logical structure is D&C. Not recognizing this means the candidate cannot generalize the pattern. Another red flag is thinking binary search only works on sorted arrays and not knowing about binary search on answer space. Follow-ups:
  1. Give an example of “binary search on the answer” where you are not searching through an array at all.
  2. What is the difference between lower_bound and upper_bound in binary search? When do you need each?
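A sketch of "binary search on the answer" from the strong answer above: a generic first-true search over a monotone predicate. first_true and the bounds are illustrative.
def first_true(lo, hi, pred):
    """Smallest x in [lo, hi] with pred(x) True; assumes pred is monotone
    (False ... False True ... True). Returns hi if pred is never True below it."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid                            # mid may be the answer; keep it in range
        else:
            lo = mid + 1                        # mid is too small; discard the left half
    return lo

# Toy usage: smallest x with x*x >= 17 is 5 -- an integer "ceiling sqrt"
# first_true(0, 17, lambda x: x * x >= 17)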

Interview Deep-Dive

Deep-Dive 1: Mapping an O(n log n) hint to the right algorithm family (expands Question 6).
Strong Answer:
  • O(n log n) appears in three distinct algorithmic families, and matching the hint to the right family is a meta-skill that saves minutes in interviews.
  • Divide and Conquer: The Master Theorem signature is T(n) = 2T(n/2) + O(n), giving O(n log n). Use D&C when the problem involves splitting data into halves, solving each half independently, and combining in linear time. Signals: “count inversions,” “closest pair of points,” “maximum crossing subarray.” The combine step is where the real work happens.
  • Sorting plus linear scan: Many O(n log n) problems are “sort first, then greedy O(n).” Signals: “merge intervals,” “meeting rooms,” “task scheduling,” “two-sum variants on sorted data.” If the problem involves intervals, ranges, or ordering, sorting is likely the first step.
  • BST/Segment tree/BIT with n queries: When you need n operations on a data structure that supports O(log n) per query. Signals: “count of elements less than X seen so far,” “range sum queries,” “sliding window maximum.” Each element triggers one O(log n) query, totaling O(n log n).
  • Decision process: (1) Does the problem involve splitting and combining? Try D&C. (2) Does sorting simplify the problem to a linear scan? Try sort-then-scan. (3) Does each element need a logarithmic-time query against previously seen data? Try a tree-based structure.
Follow-up: Can you name a problem that has both a D&C solution and a sorting-based solution at O(n log n), and explain the trade-offs?
  • “Count inversions” can be solved by modified merge sort (D&C, O(n log n)) or by inserting elements into a BIT from right to left and querying prefix sums (O(n log n)). The merge sort approach is elegant and teaches D&C augmentation. The BIT approach is more flexible — it generalizes to “count elements in a range” problems. In an interview, merge sort is the expected answer; the BIT approach earns bonus points for showing breadth.
Deep-Dive 2: Counting inversions during the merge step, worked in full (expands Question 4).
Strong Answer:
  • An inversion is a pair (i, j) where i < j but nums[i] > nums[j]. During the merge of two sorted halves, when we pick an element from the right half, it means this right element is smaller than all remaining elements in the left half. Each of those remaining left elements forms an inversion with the right element.
  • Why len(left) - i: At the moment we pick right[j], the left pointer is at position i. Elements left[i], left[i+1], ..., left[len(left)-1] are all greater than right[j] (because the left half is sorted and we chose right[j] as smaller than left[i]). The count of inversions contributed is len(left) - i.
  • Concrete example: Merging left = [1, 3, 5] and right = [2, 4, 6].
    • Compare left[0]=1 vs right[0]=2. Pick 1 (from left). No inversions.
    • Compare left[1]=3 vs right[0]=2. Pick 2 (from right). Inversions: left[1]=3 and left[2]=5 are both greater than 2. Count += 3 - 1 = 2.
    • Compare left[1]=3 vs right[1]=4. Pick 3 (from left). No inversions.
    • Compare left[2]=5 vs right[1]=4. Pick 4 (from right). Inversions: left[2]=5 is greater than 4. Count += 3 - 2 = 1.
    • Compare left[2]=5 vs right[2]=6. Pick 5 (from left). No inversions.
    • Pick 6 (from right). No remaining left elements. Count += 0.
    • Total inversions from this merge: 2 + 1 = 3.
  • Common mistake: Counting 1 inversion per right-pick instead of len(left) - i. This undercounts because one right element can be inverted with multiple left elements simultaneously.
  • Total complexity: O(n log n) — same as merge sort. Each level of recursion does O(n) work (merging + counting), and there are O(log n) levels.
Follow-up: Could you use a Binary Indexed Tree (BIT) to count inversions instead of merge sort? What is the trade-off?
  • Yes. Process the array from right to left. For each element, query the BIT for the count of elements smaller than the current element that have already been inserted (these are elements to the right that are smaller — inversions). Then insert the current element into the BIT.
  • Trade-off: BIT approach is O(n log M) where M is the value range (requires coordinate compression to keep M reasonable). Merge sort is O(n log n) regardless of values. BIT is more flexible for variants (e.g., count elements in a specific range), while merge sort is purer D&C and easier to explain for this specific problem.
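A sketch of the inversion counter walked through above; the only change from plain merge sort is the cross_inv += len(left) - i line.
def count_inversions(arr):
    """Return (sorted copy, inversion count) in O(n log n)."""
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, left_inv = count_inversions(arr[:mid])
    right, right_inv = count_inversions(arr[mid:])

    merged, cross_inv = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            cross_inv += len(left) - i           # Every remaining left element beats right[j]
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, left_inv + right_inv + cross_inv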
Deep-Dive 3: Why Quickselect averages O(n) and how its worst case arises (expands Question 5).
Strong Answer:
  • After partitioning around a pivot, the kth element is in one of the two partitions. Quick Select only recurses into the partition containing k — discarding the other entirely.
  • Average case analysis: Assuming the pivot splits the array roughly in half (which happens on average with random pivot selection), the work at each level is: n (partition the full array), then n/2 (partition the half containing k), then n/4, then n/8, and so on. The sum is n + n/2 + n/4 + … = n * (1 + 1/2 + 1/4 + …) = n * 2 = 2n = O(n).
  • Why the series converges: The geometric series with ratio 1/2 converges to 2. Each level does half the work of the previous level, so total work is bounded by twice the first level’s work.
  • Worst-case O(n^2) trigger: If the pivot is consistently the smallest or largest element, one partition has n-1 elements and the other has 0. Quick Select recurses on n-1 elements, then n-2, then n-3… The sum is n + (n-1) + (n-2) + … + 1 = n(n+1)/2 = O(n^2). This happens with already-sorted input when using the last element as pivot.
  • Mitigation: Randomized pivot selection makes the expected time O(n) with overwhelmingly high probability. The “median of medians” algorithm guarantees O(n) worst case by choosing a pivot that eliminates at least 30% of elements, but it has a constant factor of ~5x, so randomized Quick Select is preferred in practice.
  • The exact expected comparisons: For randomized Quick Select, the expected number of comparisons is approximately 3.4n (derivable from the recurrence with harmonic numbers).
Follow-up: C++ has std::nth_element which is based on Quick Select. What guarantees does it provide, and how does Introsort relate?
  • std::nth_element guarantees O(n) average case and partially sorts the array: the element at position k is the kth smallest, all elements before it are less than or equal, and all elements after are greater than or equal. It does not sort either partition.
  • Most implementations use Introselect, which starts with Quick Select but falls back to median-of-medians after too many bad pivots, guaranteeing O(n) worst case. This is analogous to Introsort (Quick Sort + Heap Sort fallback) for sorting.

LeetCode Interview Q&A (Divide and Conquer)

These three are the LeetCode-style D and C questions you should be able to walk through cold. Each has a strong-answer framework, a worked example with full code, three senior follow-ups, common wrong answers, and pointers to related problems.
Question 1: Sort an Array (LC 912) via merge sort.
Strong Answer Framework:
  1. State the recurrence: T(n) = 2T(n/2) + O(n). Master Theorem case 2 yields O(n log n).
  2. Walk through three pieces: divide (split at midpoint), conquer (recurse on each half), combine (merge in linear time using two pointers).
  3. Discuss space: O(n) for the merge buffer + O(log n) recursion. Stable sort.
  4. Compare to quicksort: O(n log n) average vs guaranteed; O(n) extra space vs O(log n); stable vs unstable.
Real LeetCode Example — Full Walkthrough:
def sortArray(nums):
    if len(nums) <= 1:
        return nums                              # Base case: already sorted
    mid = len(nums) // 2
    left = sortArray(nums[:mid])                 # Divide + conquer
    right = sortArray(nums[mid:])
    return merge(left, right)                    # Combine

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:                  # <= keeps stability (left wins ties)
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    # Append any remaining elements (only one of these has work)
    result.extend(left[i:])
    result.extend(right[j:])
    return result
For nums = [5, 2, 8, 1, 9, 3]:
  • Split into [5, 2, 8] and [1, 9, 3].
  • Recursively sort: [2, 5, 8] and [1, 3, 9].
  • Merge: compare 2 vs 1, take 1. Compare 2 vs 3, take 2. Compare 5 vs 3, take 3. Compare 5 vs 9, take 5. Compare 8 vs 9, take 8. Append 9. Result: [1, 2, 3, 5, 8, 9].
Complexity analysis:
  • Time: Each level of recursion does O(n) total work (every merge across all subproblems sums to n). There are log n levels. Total: O(n log n) — best, worst, AND average case.
  • Space: O(n) for the temporary buffers + O(log n) for the recursion stack. Total: O(n).
  • Stability: Yes — equal elements preserve relative order because we use <= (left wins ties).
Senior Follow-up 1 — “Why is merge sort preferred over quicksort for sorting linked lists?” Linked lists do not support O(1) random access, which kills quicksort’s partition step (it relies on array indexing). Merge sort, on the other hand, only needs sequential access — splitting at the midpoint can be done with a slow-fast pointer, and merging walks linearly. Plus, merge sort can be done in-place on linked lists with O(1) extra space (just rewire the pointers), unlike for arrays. Java’s Collections.sort uses Timsort (a merge-sort variant); for linked lists, merge sort is the canonical answer.
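A sketch of the linked-list merge sort described here (essentially LC 148, Sort List): split with slow-fast pointers, then merge by rewiring next pointers so no buffer is needed (the recursion stack is still O(log n)).
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    """LC 148: merge sort on a linked list, O(n log n) time, no merge buffer."""
    if not head or not head.next:
        return head
    slow, fast = head, head.next                # Slow-fast split: slow stops at the midpoint
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None            # Cut the list in two
    left, right = sort_list(head), sort_list(mid)
    dummy = tail = ListNode()                   # Merge by rewiring pointers
    while left and right:
        if left.val <= right.val:               # <= keeps the merge stable
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next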
Senior Follow-up 2 — “Can merge sort be done in-place on arrays?” In-place merge sort exists but is messy. Standard merge requires O(n) auxiliary space. The “in-place” variants either use complex block-swap tricks (Practical In-Place Merge Sort by Kim and Kutzner) or sacrifice stability. Trade-off: you save O(n) space at the cost of much higher constant factors (often 2-3x slower in wall-clock). Production sorts (Timsort, std::stable_sort) use O(n/2) auxiliary space as a reasonable compromise. For interviews, mention the existence of in-place variants but stick with the O(n)-space version unless asked otherwise.
Senior Follow-up 3 — “How does Timsort improve on plain merge sort?” Timsort exploits two observations: (1) real-world data often has pre-existing sorted runs (“natural runs”), and (2) merging short sequences is wasteful. It scans for natural runs, extends each to a minimum length using insertion sort, and merges runs in a stack-based pattern that maintains a careful balance. Result: O(n) on already-sorted data (instead of O(n log n) for plain merge sort), O(n log n) worst case, and excellent cache behavior. Used by Python (sorted, list.sort), Java (Arrays.sort for objects), Rust, Android, V8.
Common Wrong Answers:
  • Claiming merge sort is O(1) space — it is not; the merge buffer is O(n).
  • Using < instead of <= in the merge comparison — breaks stability when there are duplicates. Equal elements from the right would jump ahead of equal elements from the left.
  • Recursing as mergeSort(nums[:mid]) and mergeSort(nums[mid+1:]) (skipping nums[mid]) — you lose an element. The correct split is [:mid] and [mid:].
Question 2: Kth Largest Element (LC 215) via Quickselect.
Strong Answer Framework:
  1. Frame the problem: “Kth largest” is “(n-k)th smallest from the left in sorted order.” Quickselect partitions around a pivot, finds where the pivot lands, and recurses ONLY into the side containing the target.
  2. State the geometric series: T(n) = T(n/2) + O(n) = O(n) average. Sum is n + n/2 + n/4 + ... = 2n.
  3. Worst case: T(n) = T(n-1) + O(n) = O(n^2) when pivot is consistently extreme. Random pivot reduces this to vanishingly small probability.
  4. For guaranteed O(n) worst case, mention median-of-medians (5x constant factor; rarely worth it).
  5. Compare to alternatives: heap-based O(n log k), full sort O(n log n).
Real LeetCode Example — Full Walkthrough:
import random

def findKthLargest(nums, k):
    target = len(nums) - k                       # Convert to 0-indexed kth smallest
    
    def partition(l, r):
        # Randomized pivot defends against O(n^2) on adversarial input
        pi = random.randint(l, r)
        nums[pi], nums[r] = nums[r], nums[pi]
        pivot = nums[r]
        store = l                                # Boundary of "less than pivot" region
        for i in range(l, r):
            if nums[i] < pivot:
                nums[i], nums[store] = nums[store], nums[i]
                store += 1
        nums[store], nums[r] = nums[r], nums[store]  # Place pivot at boundary
        return store                             # Return pivot's final index
    
    l, r = 0, len(nums) - 1
    while True:                                  # Iterative -- saves recursion stack
        p = partition(l, r)
        if p == target:
            return nums[p]
        elif p < target:
            l = p + 1                            # Target is to the right
        else:
            r = p - 1                            # Target is to the left
For nums = [3, 2, 1, 5, 6, 4], k = 2:
  • target = 6 - 2 = 4 (we want the element at index 4 in sorted order, which is the 2nd largest).
  • Suppose random pivot is 4 (last element). Partition: [3, 2, 1] [4] [5, 6]. Pivot lands at index 3.
  • 3 < 4 (pivot index vs target index), so search the right half [5, 6]. The target is still index 4 of the original array.
  • Partition [5, 6] with pivot 6: [5] [6]. Pivot lands at index 5.
  • 5 > 4, so search left. The new range is just index 4 (holding the value 5).
  • Pivot 5 alone: lands at index 4. Found! Return 5.
Complexity:
  • Best case: O(n). Pivot lands near target on first try.
  • Average case: O(n) — the geometric series.
  • Worst case: O(n^2) without randomization (sorted input + last-element pivot). With randomization, occurs with vanishingly small probability.
Senior Follow-up 1 — “Where exactly does the geometric series come from?” Assume pivot splits roughly in half on average. Work at the first partition is n. After picking which side to recurse into, the next partition is on n/2 elements — work is n/2. Then n/4, then n/8. Sum: n + n/2 + n/4 + ... = n * 2 = O(n). The key insight: at each level we do half the work of the previous level, and we only recurse into ONE side, not both. This is what separates Quickselect (O(n)) from Quicksort (O(n log n)).
Senior Follow-up 2 — “Compare Quickselect to a min-heap-of-size-k approach for this problem.” Heap approach: maintain a min-heap of size k. Insert each element; if heap size exceeds k, pop the smallest. After processing all n elements, the heap’s root is the kth largest. Complexity: O(n log k) time, O(k) space. Quickselect is O(n) average, O(1) extra space. So Quickselect wins for k near n/2, while the heap wins for very small k (say k = 10 over 10 million elements, where log k is tiny). The heap also handles the streaming case (data arrives one element at a time and must be processed online); Quickselect requires the whole array up front. A sketch follows.
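A sketch of the heap approach just described, using Python's heapq:
import heapq

def kth_largest_heap(nums, k):
    """O(n log k) time, O(k) space; also works when nums is a stream."""
    heap = []                                   # Min-heap of the k largest seen so far
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)                 # Evict the smallest; it cannot be the answer
    return heap[0]                              # Root of the heap is the kth largest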
Senior Follow-up 3 — “What is median-of-medians and why is it rarely used in practice?” Median-of-medians is an algorithm that picks a pivot guaranteed to eliminate at least 30% of elements, giving worst-case O(n) for Quickselect. The procedure: split into groups of 5, find each group’s median by sorting, recursively find the median of those medians, use that as the pivot. The overhead is roughly 5x compared to randomized Quickselect. It is used in std::nth_element implementations as a fallback (Introselect), but rarely as the primary algorithm. For interview purposes, mention it for credit and explain that randomized Quickselect is preferred unless you need worst-case guarantees (e.g., real-time systems).
Common Wrong Answers:
  • Recursing into BOTH partitions like quicksort — that is sorting, not selecting. Correct Quickselect picks one side.
  • Using last-element pivot without randomization — TLE on already-sorted test cases, which LeetCode includes adversarially.
  • Confusing “kth largest” with “kth smallest” and indexing wrong. Always verify: target = len(nums) - k for kth largest.
Question 3: Pow(x, n) (LC 50) via fast exponentiation.
Strong Answer Framework:
  1. State the naive O(n) approach and acknowledge it is too slow for n up to 2^31.
  2. State the recurrence: x^n = (x^(n/2))^2 for even n, else x * (x^(n/2))^2. T(n) = T(n/2) + O(1) = O(log n).
  3. Handle three edge cases: n == 0, n < 0, and n == Integer.MIN_VALUE (negating it overflows a 32-bit int).
  4. Mention the iterative O(1)-space variant using bitwise exponent traversal.
Real LeetCode Example — Full Walkthrough:
def myPow(x, n):
    # Edge case 1: zero exponent
    if n == 0:
        return 1.0
    # Edge case 2: negative exponent. In Python, integer arithmetic is unbounded
    # so -n cannot overflow. In Java/C++, cast to long first.
    if n < 0:
        return 1.0 / myPow(x, -n)
    # Recursive case: halve the exponent
    half = myPow(x, n // 2)
    if n % 2 == 0:
        return half * half                  # Even: x^n = (x^(n/2))^2
    else:
        return x * half * half              # Odd: x^n = x * (x^(n/2))^2
For myPow(2.0, 10):
  • myPow(2, 10) -> half = myPow(2, 5), return half * half.
  • myPow(2, 5) -> half = myPow(2, 2), return 2 * half * half.
  • myPow(2, 2) -> half = myPow(2, 1), return half * half.
  • myPow(2, 1) -> half = myPow(2, 0), return 2 * half * half = 2.
  • myPow(2, 0) -> return 1.
  • Unwind: myPow(2,1)=2, myPow(2,2)=4, myPow(2,5)=2*4*4=32, myPow(2,10)=32*32=1024. Correct.
Critical optimization — avoid duplicate recursive calls:
# WRONG -- doubles the work because myPow is called twice
return myPow(x, n//2) * myPow(x, n//2)

# RIGHT -- compute once, multiply
half = myPow(x, n//2)
return half * half
The wrong version is O(n) (no halving advantage) because each call recurses into TWO calls. The right version is O(log n).
Senior Follow-up 1 — “In Java, what happens when n equals Integer.MIN_VALUE (-2^31)?” -n overflows because +2^31 is not representable as a 32-bit signed int. The result of -Integer.MIN_VALUE is Integer.MIN_VALUE again (still negative). Three fixes: (a) cast to long before negating: long N = n; if (N < 0) { x = 1/x; N = -N; } (b) special-case it: if (n == Integer.MIN_VALUE) return myPow(x, n+1) / x; (c) use Python, where integers are arbitrary precision and this never happens. Java/C++ interviewers love this question.
Senior Follow-up 2 — “Implement this iteratively in O(1) space using bit manipulation.”
def myPow(x, n):
    if n < 0: x, n = 1/x, -n
    result = 1.0
    while n > 0:
        if n & 1:
            result *= x         # Bit is set; multiply current x into result
        x *= x                  # Square x
        n >>= 1                 # Shift to the next bit of the exponent
    return result
This walks through the bits of n. For n=13 = 0b1101, we multiply x^1 (bit 0), x^4 (bit 2), x^8 (bit 3) — giving x^13. Same recurrence, no recursion, O(1) space. Bonus: this is exactly how modular exponentiation in cryptography (RSA) is implemented.
Senior Follow-up 3 — “How is fast exponentiation used in matrix exponentiation for Fibonacci in O(log n)?” Replace x with a 2x2 matrix [[1, 1], [1, 0]]. Computing the matrix raised to the nth power gives [[F(n+1), F(n)], [F(n), F(n-1)]]. Using fast exponentiation, this is O(log n) with constant matrix multiplication cost. So Fibonacci can be computed in O(log n) instead of O(n). The same trick works for any linear recurrence (Tribonacci, characteristic polynomial recurrences). This is a favorite “bonus” follow-up at quant trading interviews.
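A sketch of the matrix-power Fibonacci just described, reusing the same square-and-multiply loop. mat_mult and fib are illustrative helper names.
def mat_mult(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib(n):
    """F(n) in O(log n) via powers of [[1, 1], [1, 0]]."""
    result = [[1, 0], [0, 1]]                   # Identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, base)     # Bit set: fold this power in
        base = mat_mult(base, base)             # Square the base each step
        n >>= 1
    return result[0][1]                         # base^n = [[F(n+1), F(n)], [F(n), F(n-1)]]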
Common Wrong Answers:
  • for i in range(n): result *= x — O(n), TLE for large n.
  • Calling myPow(x, n//2) twice in the recursion — defeats the divide-and-conquer benefit, becomes O(n).
  • Forgetting the negative-n case entirely — LeetCode tests it.
  • Forgetting that 0^0 should return 1 by convention.
Further Reading:
  • LC 372 Super Pow — modular exponentiation with the exponent given as an array of digits.
  • LC 1922 Count Good Numbers — direct application of pow_mod.
  • LC 70 Climbing Stairs — can be solved in O(log n) using matrix exponentiation as a bonus.