What is Divide and Conquer?
Divide and Conquer splits a problem into smaller subproblems, solves them recursively, and combines the results. Unlike DP, the subproblems are typically non-overlapping. The analogy is managing a large team project: you do not try to do everything yourself. Instead, you split the work into independent pieces, delegate each piece to a sub-team, and then combine their results into the final deliverable. If each sub-team does the same thing recursively, you have divide and conquer. The three steps are always the same:
- Divide: Break the problem into smaller instances of the same problem.
- Conquer: Solve each sub-instance recursively (base case handles trivially small instances).
- Combine: Merge sub-solutions into the solution for the original problem.
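The three steps above can be sketched as a minimal template. Array summation stands in for the problem here (a toy choice for illustration, not from the original text):

```python
def dc_sum(nums, lo=0, hi=None):
    """Divide-and-conquer sum of nums[lo:hi] -- illustrates the skeleton."""
    if hi is None:
        hi = len(nums)
    if hi - lo <= 1:                  # base case: 0 or 1 elements
        return nums[lo] if hi > lo else 0
    mid = (lo + hi) // 2              # divide: split at the midpoint
    # conquer each half recursively, then combine with +
    return dc_sum(nums, lo, mid) + dc_sum(nums, mid, hi)

print(dc_sum([3, 1, 4, 1, 5]))  # -> 14
```

Every algorithm in this page instantiates the same shape; only the split and combine steps change.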
When to Use
Sorting
Searching
Tree Problems
Array Problems
The D and C Template
Every D and C solution follows this exact skeleton. The art is in choosing how to split and how to combine.

Pattern Variations
1. Merge Sort
The quintessential D and C algorithm. Divide the array in half, recursively sort each half, then merge the two sorted halves. The merge step is where comparisons happen — it walks two pointers across the sorted halves and always picks the smaller element. Time: O(n log n) in all cases (worst, average, best). Space: O(n) for the temporary merge array. Stable: equal elements preserve their relative order — this is why merge sort is preferred when stability matters. Interview insight: Merge sort’s merge step is reused in many problems: count inversions, count smaller numbers after self, sort linked lists. Recognizing “modified merge sort” is a valuable skill.

2. Quick Sort
Quick sort works differently from merge sort: all the work happens in the divide step (partitioning), not the combine step. After partitioning, the pivot is in its final position, and elements to its left are smaller while elements to its right are larger. Time: O(n log n) average, O(n^2) worst case (when the pivot is consistently the smallest or largest element — e.g., already sorted array with last-element pivot). Space: O(log n) average for the recursion stack. Not stable. Interview tip: If an interviewer asks about worst-case mitigation, mention randomized pivot selection (pick a random index and swap it into the high position before partitioning). This makes the O(n^2) case astronomically unlikely.
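A sketch of quicksort with the randomized-pivot mitigation described above (Lomuto partition; the function name is illustrative):

```python
import random

def quick_sort(nums, lo=0, hi=None):
    """In-place quicksort with a randomized pivot (illustrative sketch)."""
    if hi is None:
        hi = len(nums) - 1
    if lo >= hi:
        return
    # Randomized pivot: swap a random element into the high slot first.
    r = random.randint(lo, hi)
    nums[r], nums[hi] = nums[hi], nums[r]
    pivot = nums[hi]
    i = lo  # boundary of the "less than pivot" region
    for j in range(lo, hi):
        if nums[j] < pivot:
            nums[i], nums[j] = nums[j], nums[i]
            i += 1
    nums[i], nums[hi] = nums[hi], nums[i]  # pivot lands in final position
    quick_sort(nums, lo, i - 1)            # conquer left of pivot
    quick_sort(nums, i + 1, hi)            # conquer right of pivot
```

Note that, unlike merge sort, there is no combine step: once the pivot is placed and both sides are sorted, the array is sorted.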
3. Maximum Subarray (Kadane Alternative)
This D and C approach splits the array in half and considers three cases: the maximum subarray is entirely in the left half, entirely in the right half, or it crosses the midpoint. The crossing case is handled by extending greedily from the midpoint in both directions. Time: O(n log n) — T(n) = 2T(n/2) + O(n). This is slower than Kadane’s O(n) algorithm, but interviewers sometimes specifically ask for the D and C approach to test your understanding of the paradigm. When the D and C version matters: The D and C structure generalizes to problems like “count inversions” and “count range sum” where Kadane’s greedy approach does not apply.

4. Binary Tree Maximum Depth
The classic tree example: the maximum depth of a binary tree is 1 plus the maximum depth of its two subtrees, with the empty tree (depth 0) as the base case.
Classic Problems
| Problem | Pattern | Key Insight | Time |
|---|---|---|---|
| Merge Sort | Classic D and C | Divide array, merge sorted halves | O(n log n) |
| Quick Sort | In-place D and C | Partition around pivot | O(n log n) avg |
| Max Subarray | Array D and C | Handle crossing subarray | O(n log n) |
| Closest Pair | 2D D and C | Divide plane, check strip near boundary | O(n log n) |
| Binary Search | Simple D and C | Eliminate half each time | O(log n) |
| Count Inversions | Modified merge sort | Count during merge step | O(n log n) |
| Kth Largest (Quickselect) | Partial quick sort | Only recurse on the side containing k | O(n) avg |
Worked LeetCode Problems
These five problems cover the divide-and-conquer questions you should expect in interviews.

LC 912: Sort an Array (Merge Sort Implementation)
Problem: Sort an array of integers in ascending order. Implement your own (the interviewer wants merge sort or quicksort, not the language’s built-in). Why it matters: This is the canonical “implement merge sort from scratch” problem. Master this and you can adapt it for count-inversions, sort linked list, etc.

LC 215: Kth Largest Element (Quickselect)
Problem: Return the kth largest element in an unsorted array. Average O(n). Why it matters: The classic “do not need full sort” example. Quickselect uses partitioning but recurses into ONE side only — the geometric series n + n/2 + n/4 + ... converges to 2n.
LC 50: Pow(x, n)
Problem: Implement pow(x, n) for floating x and integer n.
Why it matters: The “halve the exponent each call” trick is divide-and-conquer at its purest — T(n) = T(n/2) + O(1) = O(log n).
Edge case: n = Integer.MIN_VALUE (-2^31) cannot be negated as a 32-bit int because 2^31 is not representable. Cast to long first.
LC 169: Majority Element (Divide-and-Conquer)
Problem: Find the element that appears more than n/2 times.
Why it matters: Boyer-Moore voting is the O(n) one-pass solution and the expected answer. But the D and C version is what some interviewers ask for to test the paradigm. Both should be in your toolbox.
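One way the D and C version might look — the majority element of the whole range must be the majority element of at least one half (helper names are illustrative):

```python
def majority_element(nums):
    """D&C majority element: O(n log n) sketch (Boyer-Moore is O(n))."""
    def count(x, lo, hi):
        # Linear count of x in nums[lo..hi] -- the combine step's cost.
        return sum(1 for i in range(lo, hi + 1) if nums[i] == x)

    def helper(lo, hi):
        if lo == hi:                      # base case: single element
            return nums[lo]
        mid = (lo + hi) // 2
        left = helper(lo, mid)            # conquer each half
        right = helper(mid + 1, hi)
        if left == right:
            return left
        # Combine: the true majority wins the head-to-head count.
        return left if count(left, lo, hi) > count(right, lo, hi) else right

    return helper(0, len(nums) - 1)
```

The combine step costs O(n) per level, giving the familiar T(n) = 2T(n/2) + O(n) = O(n log n) recurrence.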
LC 53: Max Subarray (Divide-and-Conquer Variant)
Problem: Find the contiguous subarray with the largest sum. Why it matters: Kadane’s algorithm is O(n) and the gold standard. The D and C version is what interviewers ask when they specifically want to test the paradigm or set up a follow-up like “now adapt this to count inversions.”

Common Mistakes
Caveats and Traps
Curated LeetCode Practice List
Grouped by difficulty, with annotations.

Easy (Warm-up)
| LC# | Problem | What It Teaches |
|---|---|---|
| 169 | Majority Element | D and C variant; Boyer-Moore is the production answer. |
| 53 | Maximum Subarray | Kadane vs D and C trade-off. |
| 108 | Convert Sorted Array to BST | Pick midpoint as root; recurse on halves. |
Medium (LeetCode bread-and-butter)
| LC# | Problem | What It Teaches |
|---|---|---|
| 912 | Sort an Array | Implement merge sort from scratch. |
| 215 | Kth Largest Element in an Array | Quickselect; geometric series gives O(n) avg. |
| 50 | Pow(x, n) | Halve the exponent; O(log n). |
| 240 | Search a 2D Matrix II | Eliminate row/col from corner; O(m + n). |
| 973 | K Closest Points to Origin | Quickselect on distance squared. |
| 241 | Different Ways to Add Parentheses | Split at each operator; combine results. |
| 427 | Construct Quad Tree | Recursive grid splitting into 4 quadrants. |
Hard (Stretch)
| LC# | Problem | What It Teaches |
|---|---|---|
| 23 | Merge k Sorted Lists | Pairwise merge gives O(N log k). |
| 315 | Count of Smaller Numbers After Self | Modified merge sort counts inversions. |
| 493 | Reverse Pairs | Variant: count pairs where i < j and nums[i] > 2*nums[j]. |
| 327 | Count of Range Sum | Merge sort on prefix sums + range counting. |
| 4 | Median of Two Sorted Arrays | Binary search on split point; O(log min(m,n)). |
| 218 | The Skyline Problem | Divide buildings, merge skylines. |
Practice Problems
Count Inversions
Median of Two Arrays
Kth Largest
Search 2D Matrix II
Interview Questions
1. Explain the three steps of Divide and Conquer. What distinguishes it from Dynamic Programming?
What interviewers are really testing: Whether you have a crisp mental model and can articulate the boundary between D&C and DP. Many candidates blur the two, which signals shallow understanding of both paradigms. Strong Answer:
- Every D&C algorithm follows three steps: Divide (split the problem into smaller instances of the same problem), Conquer (solve each sub-instance recursively, with a base case for trivial inputs), and Combine (merge sub-solutions into the answer for the original problem).
- The key distinction from DP: D&C subproblems do not overlap. Each sub-instance works on a disjoint portion of the data. Merge sort splits the array in half — left and right halves share no elements. DP subproblems do overlap — computing Fibonacci(5) requires Fibonacci(4) and Fibonacci(3), and Fibonacci(4) also requires Fibonacci(3). DP uses memoization to avoid recomputation; D&C does not need it because no recomputation occurs.
- If you write a D&C solution and find yourself solving the same subproblem multiple times, you either need to add memoization (turning it into DP) or rethink the divide strategy to make subproblems disjoint.
- The Master Theorem gives you the complexity of most D&C recurrences: T(n) = aT(n/b) + O(n^d). For merge sort, a=2, b=2, d=1, giving O(n log n). Knowing this lets you predict the complexity of a D&C approach before implementing it.
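The overlap distinction can be made concrete by counting how often a naive recursive Fibonacci re-solves the same subproblem (a small illustrative sketch, not from the original text):

```python
from collections import Counter

calls = Counter()

def fib(n):
    """Naive D&C-style Fibonacci: its subproblems overlap heavily."""
    calls[n] += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
# fib(3) is solved many separate times -- overlapping subproblems are a
# DP signal. Merge sort, by contrast, never revisits the same subarray.
print(calls[3])
```

Seeing a repeated-subproblem counter climb is exactly the "add memoization or rethink the split" moment described above.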
Follow-ups:
- Can you give an example of a problem where a D&C approach leads to overlapping subproblems, so you must switch to DP?
- What is the Master Theorem? State the three cases and give an example for each.
2. Why is Merge Sort O(n log n) in all cases, while Quick Sort can degrade to O(n^2)?
What interviewers are really testing: Whether you understand how the divide step’s balance affects complexity and can reason about worst-case inputs. This is the most important comparison in the sorting/D&C domain. Strong Answer:
- Merge Sort always divides the array exactly in half, regardless of the data. This guarantees log n levels of recursion, and each level does O(n) merge work. Total: O(n log n) always — best, average, and worst case.
- Quick Sort’s divide step (partitioning) depends on the pivot choice. If the pivot happens to be the median, we get two equal halves and O(n log n). But if the pivot is consistently the smallest or largest element, one partition has n-1 elements and the other has 0. This gives n levels of recursion with O(n) work each: O(n^2).
- The worst case for Quick Sort with last-element pivot is an already-sorted array: each partition picks the last (largest) element, the left partition has n-1 elements, and the right partition is empty. Recursion depth becomes n instead of log n.
- Mitigation: randomized pivot selection (pick a random index, swap to the pivot position) makes the expected number of comparisons about 1.39n log n with astronomically low probability of hitting O(n^2). Alternatively, “median of three” (pick median of first, middle, last) avoids the sorted-input pathology.
Follow-ups:
- Introsort monitors recursion depth and switches to Heap Sort when it exceeds 2*log(n). Why is this the threshold?
- Quick Sort is usually faster than Merge Sort in practice despite the same O(n log n) average. Why? What hardware factor explains this?
3. Solve “Maximum Subarray Sum” using Divide and Conquer. Why is this approach O(n log n) instead of Kadane’s O(n)?
What interviewers are really testing: Whether you can implement the crossing-subarray logic correctly and understand the recurrence that produces O(n log n). Bonus: they want to hear you acknowledge that Kadane is better for this specific problem. Strong Answer:
- Split the array at the midpoint. The maximum subarray is in one of three places: entirely in the left half, entirely in the right half, or crossing the midpoint. Recursively solve left and right. The crossing case extends greedily from the midpoint leftward and rightward, taking the best sum in each direction.
- The crossing case is O(n) because we scan from mid to left and mid+1 to right. Combined with two recursive calls on half-size arrays, the recurrence is T(n) = 2T(n/2) + O(n), which by the Master Theorem is O(n log n).
- Kadane’s algorithm (single pass, track running max ending at each position) is O(n) because it avoids the divide step entirely. For the maximum subarray problem specifically, Kadane is strictly better.
- So why learn the D&C approach? Because the D&C structure generalizes to problems where Kadane does not apply: “Count Inversions” (count pairs where a later element is smaller), “Count Range Sum” (count subarrays with sum in a range), and “Closest Pair of Points.” These problems need the merge step to count cross-boundary interactions.
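The three-case strategy above might be sketched as follows (a minimal implementation; helper names are illustrative):

```python
def max_subarray(nums):
    """D&C max subarray: best of left half, right half, or crossing mid."""
    def helper(lo, hi):
        if lo == hi:                          # base case: one element
            return nums[lo]
        mid = (lo + hi) // 2
        best_left = helper(lo, mid)           # case 1: entirely left
        best_right = helper(mid + 1, hi)      # case 2: entirely right
        # Case 3: crossing -- extend greedily from the midpoint both ways.
        s, left_max = 0, float('-inf')
        for i in range(mid, lo - 1, -1):
            s += nums[i]
            left_max = max(left_max, s)
        s, right_max = 0, float('-inf')
        for i in range(mid + 1, hi + 1):
            s += nums[i]
            right_max = max(right_max, s)
        return max(best_left, best_right, left_max + right_max)

    return helper(0, len(nums) - 1)
```

The crossing scan is the O(n) combine step that makes the recurrence T(n) = 2T(n/2) + O(n).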
Follow-ups:
- Walk me through the crossing-subarray calculation for [-2, 1, -3, 4, -1, 2, 1, -5, 4] when the midpoint splits between -1 and 2.
- How would you modify this D&C approach to count the number of subarrays with sum in a given range [lower, upper]?
4. How does “Count Inversions” use a modified Merge Sort? Walk through the logic.
What interviewers are really testing: Whether you can augment a standard D&C algorithm with additional bookkeeping. This is the bridge between textbook D&C and real interview problems. Strong Answer:
- An inversion is a pair (i, j) where i < j but nums[i] > nums[j]. The brute force checks all O(n^2) pairs. Modified merge sort counts inversions during the merge step, achieving O(n log n).
- During the merge of two sorted halves, when we pick an element from the right half over the left half, it means the right element is smaller than all remaining elements in the left half. So we add len(left) - i to the inversion count, where i is the current pointer in the left half.
- Example: merging [1, 3, 5] and [2, 4, 6]. When we pick 2 from the right, elements 3 and 5 in the left are both greater than 2 and appear before it — that is 2 inversions. When we pick 4, only 5 remains in the left half — 1 inversion. Total inversions from this merge: 3.
- The total inversion count is the sum of inversions counted at every merge step across all recursion levels. Each level does O(n) total work, and there are O(log n) levels, giving O(n log n) overall.
Follow-ups:
- How is “Count of Smaller Numbers After Self” related to counting inversions? Can you use the same merge sort technique?
- Could you use a Binary Indexed Tree (BIT) instead of merge sort to count inversions? What is the trade-off?
5. Explain Quick Select (finding the kth element). Why is its average time O(n) while Quick Sort is O(n log n)?
What interviewers are really testing: Whether you understand that you can discard half the work at each step by only recursing on one side. This is the core insight that separates selection from sorting. Strong Answer:
- Quick Select uses Quick Sort’s partition step, but only recurses on the side containing the kth element. After partitioning, the pivot is at position p. If p == k, return the pivot. If k < p, recurse on the left partition. If k > p, recurse on the right partition.
- Quick Sort recurses on BOTH halves: T(n) = 2T(n/2) + O(n) = O(n log n). Quick Select recurses on ONE half: T(n) = T(n/2) + O(n). The geometric series n + n/2 + n/4 + ... = 2n = O(n).
- Worst case is still O(n^2) — the same bad-pivot problem as Quick Sort. With randomized pivot, the expected time is O(n) with very high probability.
- The “median of medians” algorithm guarantees O(n) worst case by choosing a pivot that is guaranteed to eliminate at least 30% of elements. However, the constant factor is about 5x larger, so randomized Quick Select is almost always used in practice.
Follow-ups:
- In the average case, Quick Select does about 3.4n comparisons. Where does the constant 3.4 come from?
- How does C++’s std::nth_element relate to Quick Select? What guarantees does it provide?
6. When an interviewer hints “can you solve this in O(n log n)?” and the input is unsorted, what should your first thought be?
What interviewers are really testing: Pattern recognition and the meta-skill of mapping complexity hints to algorithm paradigms. This is an interviewing-about-interviewing question. Strong Answer:
- O(n log n) on unsorted data is the Master Theorem signature for D&C: T(n) = 2T(n/2) + O(n). My first thought is “can I split this problem in half, solve each half, and combine in O(n)?”
- The second thought: sorting itself is O(n log n), so maybe sorting is the preprocessing step and the actual algorithm is a linear scan. “Sort first, then greedy” solves a huge class of problems (merge intervals, meeting rooms, task scheduling, two-sum variants on sorted arrays).
- Third thought: if the problem involves searching or counting across a data structure, an O(n log n) approach might mean building a balanced BST, a segment tree, or a BIT and querying n times at O(log n) each.
- The key meta-skill: when the interviewer gives you a target complexity, work backward. O(n log n) means either D&C or sort-then-scan. O(n) means hash map, two pointers, or greedy. O(log n) means binary search. Matching the hint to the paradigm narrows your search space dramatically.
Follow-ups:
- What if the hint is O(n * sqrt(n))? What algorithm family does that suggest?
- Name three different problems that all have O(n log n) solutions using fundamentally different techniques.
7. Binary search is technically Divide and Conquer. Explain why, and what makes it special compared to other D&C algorithms.
What interviewers are really testing: Whether you see the D&C structure in binary search and understand what makes it degenerate: the combine step is trivial and only one branch is explored. Strong Answer:
- Binary search is D&C with two simplifications. Divide: compare the target with the middle element. Conquer: recurse on the left or right half (only one, not both). Combine: trivial — the answer from the sub-instance is the answer for the full instance; no merging needed.
- The recurrence is T(n) = T(n/2) + O(1), giving O(log n). Compare this to merge sort’s T(n) = 2T(n/2) + O(n) = O(n log n). Binary search only recurses on one half and does O(1) work per level — two multiplicative savings.
- What makes binary search special: it requires the input to satisfy a monotonic property (sorted, or more generally, a predicate that partitions the array into true/false regions). This precondition is what allows us to discard half the search space at each step — we know the answer cannot be in the discarded half.
- The D&C lens also explains why binary search generalizes beyond sorted arrays: any problem where you can “divide and eliminate” (not “divide and conquer both”) fits this pattern — bisecting on answer (binary search on answer space), finding peak element, searching in rotated sorted array.
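The degenerate-D&C structure can be written iteratively, since only one branch survives each step (a standard sketch):

```python
def binary_search(nums, target):
    """Degenerate D&C: discard one half per step; combine step is trivial."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1          # answer cannot be in the left half
        else:
            hi = mid - 1          # answer cannot be in the right half
    return -1                     # target absent
```

Because recursion only ever follows one branch, the call stack can be replaced by this loop with no loss.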
Follow-ups:
- Give an example of “binary search on the answer” where you are not searching through an array at all.
- What is the difference between lower_bound and upper_bound in binary search? When do you need each?
Interview Deep-Dive
An interviewer hints 'can you solve this in O(n log n)?' How do you decide between Divide and Conquer, sorting plus linear scan, and a balanced BST/segment tree approach?
- O(n log n) appears in three distinct algorithmic families, and matching the hint to the right family is a meta-skill that saves minutes in interviews.
- Divide and Conquer: The Master Theorem signature is T(n) = 2T(n/2) + O(n), giving O(n log n). Use D&C when the problem involves splitting data into halves, solving each half independently, and combining in linear time. Signals: “count inversions,” “closest pair of points,” “maximum crossing subarray.” The combine step is where the real work happens.
- Sorting plus linear scan: Many O(n log n) problems are “sort first, then greedy O(n).” Signals: “merge intervals,” “meeting rooms,” “task scheduling,” “two-sum variants on sorted data.” If the problem involves intervals, ranges, or ordering, sorting is likely the first step.
- BST/Segment tree/BIT with n queries: When you need n operations on a data structure that supports O(log n) per query. Signals: “count of elements less than X seen so far,” “range sum queries,” “sliding window maximum.” Each element triggers one O(log n) query, totaling O(n log n).
- Decision process: (1) Does the problem involve splitting and combining? Try D&C. (2) Does sorting simplify the problem to a linear scan? Try sort-then-scan. (3) Does each element need a logarithmic-time query against previously seen data? Try a tree-based structure.
- “Count inversions” can be solved by modified merge sort (D&C, O(n log n)) or by inserting elements into a BIT from right to left and querying prefix sums (O(n log n)). The merge sort approach is elegant and teaches D&C augmentation. The BIT approach is more flexible — it generalizes to “count elements in a range” problems. In an interview, merge sort is the expected answer; the BIT approach earns bonus points for showing breadth.
In the Count Inversions problem, explain exactly why we add len(left) - i to the count when picking from the right half during merge. Walk through a concrete example.
- An inversion is a pair (i, j) where i less than j but nums[i] greater than nums[j]. During the merge of two sorted halves, when we pick an element from the right half, it means this right element is smaller than all remaining elements in the left half. Each of those remaining left elements forms an inversion with the right element.
- Why len(left) - i: At the moment we pick right[j], the left pointer is at position i. Elements left[i], left[i+1], ..., left[len(left)-1] are all greater than right[j] (because the left half is sorted and we chose right[j] as smaller than left[i]). The count of inversions contributed is len(left) - i.
- Concrete example: Merging left = [1, 3, 5] and right = [2, 4, 6].
- Compare left[0]=1 vs right[0]=2. Pick 1 (from left). No inversions.
- Compare left[1]=3 vs right[0]=2. Pick 2 (from right). Inversions: left[1]=3 and left[2]=5 are both greater than 2. Count += 3 - 1 = 2.
- Compare left[1]=3 vs right[1]=4. Pick 3 (from left). No inversions.
- Compare left[2]=5 vs right[1]=4. Pick 4 (from right). Inversions: left[2]=5 is greater than 4. Count += 3 - 2 = 1.
- Compare left[2]=5 vs right[2]=6. Pick 5 (from left). No inversions.
- Pick 6 (from right). No remaining left elements. Count += 0.
- Total inversions from this merge: 2 + 1 = 3.
- Common mistake: Counting 1 inversion per right-pick instead of len(left) - i. This undercounts because one right element can be inverted with multiple left elements simultaneously.
- Total complexity: O(n log n) — same as merge sort. Each level of recursion does O(n) work (merging + counting), and there are O(log n) levels.
Follow-up: could a Binary Indexed Tree (BIT) replace merge sort here?
- Yes. Process the array from right to left. For each element, query the BIT for the count of elements smaller than the current element that have already been inserted (these are elements to the right that are smaller — inversions). Then insert the current element into the BIT.
- Trade-off: BIT approach is O(n log M) where M is the value range (requires coordinate compression to keep M reasonable). Merge sort is O(n log n) regardless of values. BIT is more flexible for variants (e.g., count elements in a specific range), while merge sort is purer D&C and easier to explain for this specific problem.
Quick Select finds the kth element in O(n) average case. Explain why the geometric series n + n/2 + n/4 + ... converges to 2n, and what input triggers the O(n^2) worst case.
- After partitioning around a pivot, the kth element is in one of the two partitions. Quick Select only recurses into the partition containing k — discarding the other entirely.
- Average case analysis: Assuming the pivot splits the array roughly in half (which happens on average with random pivot selection), the work at each level is: n (partition the full array), then n/2 (partition the half containing k), then n/4, then n/8, and so on. The sum is n + n/2 + n/4 + … = n * (1 + 1/2 + 1/4 + …) = n * 2 = 2n = O(n).
- Why the series converges: The geometric series with ratio 1/2 converges to 2. Each level does half the work of the previous level, so total work is bounded by twice the first level’s work.
- Worst-case O(n^2) trigger: If the pivot is consistently the smallest or largest element, one partition has n-1 elements and the other has 0. Quick Select recurses on n-1 elements, then n-2, then n-3… The sum is n + (n-1) + (n-2) + … + 1 = n(n+1)/2 = O(n^2). This happens with already-sorted input when using the last element as pivot.
- Mitigation: Randomized pivot selection makes the expected time O(n) with overwhelmingly high probability. The “median of medians” algorithm guarantees O(n) worst case by choosing a pivot that eliminates at least 30% of elements, but it has a constant factor of ~5x, so randomized Quick Select is preferred in practice.
- The exact expected comparisons: For randomized Quick Select, the expected number of comparisons is approximately 3.4n (derivable from the recurrence with harmonic numbers).
- C++’s std::nth_element guarantees O(n) average case and partially sorts the array: the element at position k is the kth smallest, all elements before it are less than or equal, and all elements after are greater than or equal. It does not sort either partition.
- Most implementations use Introselect, which starts with Quick Select but falls back to median-of-medians after too many bad pivots, guaranteeing O(n) worst case. This is analogous to Introsort (Quick Sort + Heap Sort fallback) for sorting.
LeetCode Interview Q&A (Divide and Conquer)
These three are the LeetCode-style D and C questions you should be able to walk through cold. Each has a strong-answer framework, a worked example with full code, three senior follow-ups, common wrong answers, and pointers to related problems.
LC 912: Implement Merge Sort and Analyze Complexity
- State the recurrence: T(n) = 2T(n/2) + O(n). Master Theorem case 2 yields O(n log n).
- Walk through three pieces: divide (split at midpoint), conquer (recurse on each half), combine (merge in linear time using two pointers).
- Discuss space: O(n) for the merge buffer + O(log n) recursion. Stable sort.
- Compare to quicksort: O(n log n) average vs guaranteed; O(n) extra space vs O(log n); stable vs unstable.
Worked example, nums = [5, 2, 8, 1, 9, 3]:
- Split into [5, 2, 8] and [1, 9, 3].
- Recursively sort: [2, 5, 8] and [1, 3, 9].
- Merge: compare 2 vs 1, take 1. Compare 2 vs 3, take 2. Compare 5 vs 3, take 3. Compare 5 vs 9, take 5. Compare 8 vs 9, take 8. Append 9. Result: [1, 2, 3, 5, 8, 9].
- Time: Each level of recursion does O(n) total work (every merge across all subproblems sums to n). There are log n levels. Total: O(n log n) — best, worst, AND average case.
- Space: O(n) for the temporary buffers + O(log n) for the recursion stack. Total: O(n).
- Stability: Yes — equal elements preserve relative order because we use <= (left wins ties).
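A minimal merge sort matching the analysis above (returns a new list; the <= in the merge is what keeps the sort stable):

```python
def merge_sort(nums):
    """Stable merge sort returning a new sorted list (LC 912 style)."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])     # divide + conquer left half
    right = merge_sort(nums[mid:])    # divide + conquer right half
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= : left wins ties -> stability
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # at most one of these is non-empty
    merged.extend(right[j:])
    return merged
```

Note the split is [:mid] and [mid:] so no element is dropped, and the O(n) merge buffer is why the space is O(n), not O(1).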
- Java’s Collections.sort uses Timsort (a merge-sort variant); for linked lists, merge sort is the canonical answer. Timsort also appears in Python (sorted, list.sort), Java (Arrays.sort for objects), Rust, Android, and V8.
Common wrong answers:
- Claiming merge sort is O(1) space — it is not; the merge buffer is O(n).
- Using < instead of <= in the merge comparison — breaks stability when there are duplicates. Equal elements from the right would jump ahead of equal elements from the left.
- Recursing as mergeSort(nums[:mid]) and mergeSort(nums[mid+1:]) (skipping nums[mid]) — you lose an element. The correct split is [:mid] and [mid:].
Related problems:
- LC 148 Sort List — merge sort on a linked list.
- LC 23 Merge k Sorted Lists — pairwise merge gives O(N log k).
- LC 315 Count of Smaller Numbers After Self — modified merge sort.
LC 215: Kth Largest Using Quickselect -- Best/Worst Case
- Frame the problem: “Kth largest” is “(n-k)th smallest from the left in sorted order.” Quickselect partitions around a pivot, finds where the pivot lands, and recurses ONLY into the side containing the target.
- State the geometric series: T(n) = T(n/2) + O(n) = O(n) average. Sum is n + n/2 + n/4 + ... = 2n.
- For guaranteed O(n) worst case, mention median-of-medians (5x constant factor; rarely worth it).
- Compare to alternatives: heap-based O(n log k), full sort O(n log n).
Worked example, nums = [3, 2, 1, 5, 6, 4], k = 2:
- target = 6 - 2 = 4 (we want the element at index 4 in sorted order, which is the 2nd largest).
- Suppose random pivot is 4 (last element). Partition: [3, 2, 1] [4] [5, 6]. Pivot lands at index 3.
- 3 < 4, so search right half [5, 6]. Target index 4 in original.
- Partition [5, 6] with pivot 6: [5] [6]. Pivot lands at index 5.
- 5 > 4, so search left. New range is just index 4 (5).
- Pivot 5 alone: lands at index 4. Found! Return 5.
- Best case: O(n). Pivot lands near target on first try.
- Average case: O(n) — the geometric series.
- Worst case: O(n^2) without randomization (sorted input + last-element pivot). With randomization, occurs with vanishingly small probability.
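An iterative quickselect sketch consistent with the walkthrough above (randomized Lomuto partition; mutates its input; the function name is illustrative):

```python
import random

def kth_largest(nums, k):
    """Quickselect: kth largest lives at index len(nums) - k when sorted."""
    target = len(nums) - k
    lo, hi = 0, len(nums) - 1
    while True:
        r = random.randint(lo, hi)             # randomized pivot choice
        nums[r], nums[hi] = nums[hi], nums[r]
        pivot, i = nums[hi], lo
        for j in range(lo, hi):                # Lomuto partition
            if nums[j] < pivot:
                nums[i], nums[j] = nums[j], nums[i]
                i += 1
        nums[i], nums[hi] = nums[hi], nums[i]  # pivot at final index i
        if i == target:
            return nums[i]
        if i < target:
            lo = i + 1                         # "recurse" right only
        else:
            hi = i - 1                         # "recurse" left only
```

Recursing into exactly one side per round is what produces the n + n/2 + n/4 + ... geometric series.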
- The first partition scans all n elements. After picking which side to recurse into, the next partition is on n/2 elements — work is n/2. Then n/4, then n/8. Sum: n + n/2 + n/4 + ... = n * 2 = O(n). The key insight: at each level we do half the work of the previous level, and we only recurse into ONE side, not both. This is what separates Quickselect (O(n)) from Quicksort (O(n log n)).
- Median-of-medians appears in std::nth_element implementations as a fallback (Introselect), but rarely as the primary algorithm. For interview purposes, mention it for credit and explain that randomized Quickselect is preferred unless you need worst-case guarantees (e.g., real-time systems).
Common wrong answers:
- Recursing into BOTH partitions like quicksort — that is sorting, not selecting. Correct Quickselect picks one side.
- Using last-element pivot without randomization — TLE on already-sorted test cases, which LeetCode includes adversarially.
- Confusing “kth largest” with “kth smallest” and indexing wrong. Always verify: target = len(nums) - k for kth largest.
Related problems:
- LC 973 K Closest Points to Origin — Quickselect on Euclidean distance squared.
- LC 347 Top K Frequent Elements — bucket sort or heap; Quickselect on frequency works too.
LC 50: Pow(x, n) Using Divide-and-Conquer (Handle Negative n)
LC 50: Pow(x, n) Using Divide-and-Conquer (Handle Negative n)
- State the naive O(n) approach and acknowledge it is too slow for n up to 2^31.
- State the recurrence: x^n = (x^(n/2))^2 for even n, else x * (x^(n/2))^2. T(n) = T(n/2) + O(1) = O(log n).
- Handle three edge cases: n equals 0, n is negative, n is Integer.MIN_VALUE (overflow when negating in 32-bit).
- Mention the iterative O(1)-space variant using bitwise exponent traversal.
Worked example, myPow(2.0, 10):
- myPow(2, 10) -> half = myPow(2, 5), return half * half.
- myPow(2, 5) -> half = myPow(2, 2), return 2 * half * half.
- myPow(2, 2) -> half = myPow(2, 1), return half * half.
- myPow(2, 1) -> half = myPow(2, 0), return 2 * half * half = 2.
- myPow(2, 0) -> return 1.
- Unwind: myPow(2,1)=2, myPow(2,2)=4, myPow(2,5)=2*4*4=32, myPow(2,10)=32*32=1024. Correct.
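The recursion traced above might be written as follows (a minimal sketch; the Java overflow caveat is noted in a comment):

```python
def my_pow(x, n):
    """O(log n) power by halving the exponent (T(n) = T(n/2) + O(1))."""
    if n < 0:
        # Python ints are arbitrary precision, so negation is safe here;
        # in Java, cast n to long first to dodge Integer.MIN_VALUE overflow.
        x, n = 1 / x, -n
    if n == 0:
        return 1.0                    # convention: 0^0 == 1
    half = my_pow(x, n // 2)          # compute the half-power ONCE
    return half * half if n % 2 == 0 else x * half * half
```

Computing `half` once and squaring it is the whole trick; calling my_pow(x, n // 2) twice would collapse the speedup back to O(n).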
- Negating n overflows because +2^31 is not representable as a 32-bit signed int. The result of -Integer.MIN_VALUE is Integer.MIN_VALUE again (still negative). Three fixes: (a) cast to long before negating: long N = n; if (N < 0) { x = 1/x; N = -N; }; (b) special-case it: if (n == Integer.MIN_VALUE) return myPow(x, n+1) / x;; (c) use Python where integers are arbitrary precision and this never happens. Java/C++ interviewers love this question.
- The iterative variant walks the bits of n. For n=13 = 0b1101, we multiply x^1 (bit 0), x^4 (bit 2), x^8 (bit 3) — giving x^13. Same recurrence, no recursion, O(1) space. Bonus: this is exactly how modular exponentiation in cryptography (RSA) is implemented.
- Fibonacci via fast matrix power: replace x with a 2x2 matrix [[1, 1], [1, 0]]. Computing the matrix raised to the nth power gives [[F(n+1), F(n)], [F(n), F(n-1)]]. Using fast exponentiation, this is O(log n) with constant matrix multiplication cost. So Fibonacci can be computed in O(log n) instead of O(n). The same trick works for any linear recurrence (Tribonacci, characteristic polynomial recurrences). This is a favorite “bonus” follow-up at quant trading interviews.
Common wrong answers:
- The naive loop for i in range(n): result *= x — O(n), TLE for large n.
myPow(x, n//2)twice in the recursion — defeats the divide-and-conquer benefit, becomes O(n). - Forgetting the negative-n case entirely — LeetCode tests it.
- Forgetting that
0^0should return 1 by convention.
Related problems:
- LC 372 Super Pow — modular exponentiation with the exponent given as an array of digits.
- LC 1922 Count Good Numbers — direct application of pow_mod.
- LC 70 Climbing Stairs — can be solved in O(log n) using matrix exponentiation as a bonus.