
Greedy Pattern

What is Greedy?

Greedy algorithms make the locally optimal choice at each step, hoping to find a global optimum. They work when local choices lead to global solutions (greedy choice property + optimal substructure).

Imagine you are at a buffet and want to fill your plate with the most delicious items, but you can only visit the buffet line once (no going back). A greedy strategy is: at each station, take the best item available. This works if the items are independent. But if taking the salad now means you will not have room for the steak later, greedy fails — you need DP to consider all possibilities.

The critical question for any greedy problem is: does making the locally best choice now ever prevent a globally better outcome later? If the answer is “no” (backed by a proof or strong intuition), greedy works. If “yes” or “maybe,” use DP. Two properties must hold:
  1. Greedy choice property: A globally optimal solution can be built by making locally optimal choices.
  2. Optimal substructure: An optimal solution contains optimal solutions to subproblems.
Quick Recognition: Optimization problems where taking the “best” option at each step leads to the global best. Look for keywords: “minimum number of”, “maximum profit”, “optimal selection”.
LeetCode Pattern Recognition Cheatsheet — when greedy is the intended solution:
  • “Schedule X to maximize Y” — interval scheduling, sort by end time, pick earliest-ending non-overlapping. (LC 435, 452)
  • “Minimum number of intervals / arrows / rooms” — same family, often by re-framing the question. (LC 435, 452, 253)
  • “Jump game style” — track the farthest reachable index in one pass. (LC 55, 45)
  • “Gas station / circular tour” — find a single starting index where running tank stays non-negative. (LC 134)
  • “Huffman coding / merge sticks” — always merge the two smallest using a min-heap. (LC 1167)
  • “Minimum coins with canonical denominations” — always pick the largest coin that fits. (Greedy works only when denominations are canonical, e.g., US coins.)
  • “Partition labels / merge intervals” — expand the current segment until it closes naturally. (LC 763, 56)
  • “Task scheduler with cooldown” — formula based on max-frequency element plus filler. (LC 621)
  • “Buy and sell stock once / many times” — the multi-transaction case is greedy: sum every positive delta. (LC 122)
  • “Lemonade change / two city scheduling” — sort by some delta, pick cheaper alternative greedily. (LC 860, 1029)
Anti-pattern signal: if you can construct a small counterexample where the locally-best choice forecloses a globally-better one, greedy is wrong — reach for DP. “Coin change with arbitrary denominations” (LC 322) is the canonical example.
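One entry from the cheatsheet above, the multi-transaction stock case (LC 122, “sum every positive delta”), fits in a few lines; a minimal sketch:

```python
def max_profit_multi(prices):
    """LC 122 style: with unlimited transactions, every upward price move
    can be captured, so summing all positive day-to-day deltas is optimal."""
    return sum(max(prices[i] - prices[i - 1], 0) for i in range(1, len(prices)))
```

For [7,1,5,3,6,4] this captures the 1→5 and 3→6 rises for a profit of 7.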

Pattern Recognition Checklist

Use Greedy When

  • Interval scheduling/merging
  • Minimum coins (certain denominations)
  • Jump game style problems
  • Huffman encoding
  • Activity selection
  • Fractional knapsack

Don't Use When

  • 0/1 Knapsack (need DP)
  • Coin change (arbitrary denominations)
  • Need to try all possibilities
  • Local optimal may miss global
  • Problem has overlapping subproblems

Greedy vs DP: How to Choose

Characteristic            | Greedy                  | DP
Makes irrevocable choices | Yes                     | No, tries all
Optimal substructure      | Required                | Required
Overlapping subproblems   | Not handled             | Handled via caching
Time complexity           | Usually O(n log n)      | Usually O(n^2) or more
Proof required            | Yes (exchange argument) | No, exhaustive

Common Greedy Patterns

Pattern               | Greedy Choice           | Example
Interval Scheduling   | Sort by end time        | Non-overlapping intervals
Interval Partitioning | Sort by start time      | Meeting rooms
Fractional Knapsack   | Best value/weight ratio | Maximize value
Job Scheduling        | Sort by deadline        | Minimize late jobs
Huffman Coding        | Merge least frequent    | Optimal prefix codes
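Of the rows above, fractional knapsack is the only one not coded later on this page; a minimal sketch of the value/weight-ratio greedy, assuming items as (value, weight) pairs:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Take whole items in order of
    value/weight ratio; take a fraction of the first item that no longer fits."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:               # whole item fits
            total += value
            capacity -= weight
        else:                                # take the fitting fraction, stop
            total += value * capacity / weight
            break
    return total
```

Because fractions are allowed, the exchange argument goes through: any solution that skips high-ratio weight can be improved by swapping it in.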

When to Use

Scheduling

Task scheduling, meeting rooms, intervals

Selection

Choosing items to maximize/minimize value

Graph Problems

MST (Kruskal, Prim), shortest paths (Dijkstra)

String/Array

Partition labels, jump game

Pattern Variations

1. Interval Scheduling

The classic greedy problem. The key insight: sort by end time, then greedily pick the earliest-ending interval that does not overlap the previous selection.

Why end time, not start time? An interval that ends early leaves the most room for future intervals — it is the least “greedy” choice in terms of consuming the timeline.

Proof sketch (exchange argument): If an optimal solution picks an interval that ends later than our greedy choice, we can swap it with the greedy choice without reducing the count (the greedy choice ends earlier, so everything after still fits).

Time: O(n log n) for sorting. Space: O(1) (if sorting in place).

Interview variation: “Minimum number of intervals to remove to make the rest non-overlapping” is just n - max_non_overlapping. Same algorithm, different framing.
def max_non_overlapping_intervals(intervals):
    """Maximum number of non-overlapping intervals"""
    if not intervals:
        return 0
    
    # GREEDY CHOICE: sort by end time -- earliest ending first
    intervals.sort(key=lambda x: x[1])
    
    count = 1                   # Always take the first interval
    end = intervals[0][1]       # Track where the last selected interval ends
    
    for i in range(1, len(intervals)):
        if intervals[i][0] >= end:   # No overlap with the last selected interval
            count += 1
            end = intervals[i][1]    # Update the end boundary
    
    return count

2. Activity Selection (Meeting Rooms)

How many meeting rooms do you need so that no two overlapping meetings share a room? The key insight: instead of simulating room assignments, think of it as counting the maximum number of meetings happening at any single point in time. That peak overlap is the answer.

The sweep-line trick: Separate start times and end times into two sorted arrays, then sweep through them with two pointers. A start means one more room is needed; an end means one room is freed. The maximum rooms in use at any point is the answer.

Why this works: We do not care which meeting starts or ends — only when. Sorting starts and ends independently is valid because we only need the count of simultaneous events, not their pairing.

Walked example: meetings = [[0,30], [5,10], [15,20]]. Starts: [0, 5, 15]. Ends: [10, 20, 30]. At time 0: +1 room (1). At time 5: +1 room (2). At time 10: -1 room (1). At time 15: +1 room (2). Peak = 2 rooms.

Time: O(n log n) for sorting. Space: O(n) for the separated arrays.
def min_meeting_rooms(intervals):
    """Minimum rooms needed for all meetings"""
    if not intervals:
        return 0
    
    # Separate and sort start times and end times independently
    starts = sorted([i[0] for i in intervals])
    ends = sorted([i[1] for i in intervals])
    
    rooms = 0          # Currently occupied rooms
    max_rooms = 0      # Peak occupancy = answer
    s, e = 0, 0        # Pointers into starts and ends arrays
    
    while s < len(starts):
        if starts[s] < ends[e]:  # A meeting starts before the next one ends
            rooms += 1           # Need one more room
            s += 1
        else:                    # A meeting ends before (or when) the next starts
            rooms -= 1           # Free up a room
            e += 1
        max_rooms = max(max_rooms, rooms)
    
    return max_rooms

3. Jump Game

Jump Game I (can you reach the end?): Track the farthest reachable index. If at any point your current index exceeds max_reach, you are stuck. This is O(n) time, O(1) space.

Jump Game II (minimum jumps): A BFS-like greedy approach. Imagine BFS levels where each “level” is the range of indices reachable with one more jump. current_end marks where the current level ends; farthest tracks the farthest we can reach from this level. When we reach current_end, we take a jump and extend to farthest.

Walked example (Jump Game II): nums = [2,3,1,1,4]. Level 0: index 0, can reach up to 2. Level 1: indices 1-2, can reach up to 4. We reached the end in 2 jumps.

Time: O(n). Space: O(1) for both variants.
def can_jump(nums):
    """Can we reach the last index?"""
    max_reach = 0
    
    for i in range(len(nums)):
        if i > max_reach:        # We cannot reach this index -- stuck!
            return False
        max_reach = max(max_reach, i + nums[i])  # Extend our reach
    
    return True

def min_jumps(nums):
    """Minimum jumps to reach last index (BFS-level greedy)"""
    n = len(nums)
    if n <= 1:
        return 0
    
    jumps = 0
    current_end = 0     # Rightmost index reachable with 'jumps' jumps
    farthest = 0        # Farthest index reachable from any index in current level
    
    for i in range(n - 1):
        farthest = max(farthest, i + nums[i])
        
        if i == current_end:       # We have explored the entire current level
            jumps += 1             # Take one more jump
            current_end = farthest # New level boundary
    
    return jumps

4. Gas Station

There are n gas stations on a circular route. Station i gives you gas[i] fuel, and it costs cost[i] fuel to drive to the next station. Find the starting station from which you can complete the full circuit, or return -1 if impossible. Two key insights:
  1. Global feasibility: If sum(gas) >= sum(cost), a solution must exist. The total fuel available is enough for the total cost, so some starting point works.
  2. Local elimination: If starting from station s, you run dry at station i, then no station between s and i can be a valid start either. Why? Because you arrived at each intermediate station with non-negative fuel (otherwise you would have failed earlier), so starting from any of them with zero fuel would fail even sooner.
Walked example: gas = [1,2,3,4,5], cost = [3,4,5,1,2]. Net at each station: [-2, -2, -2, 3, 3]. Total = 0, so a solution exists.

Scanning: start=0, tank goes -2 at station 0, reset to start=1. Tank goes -2 at station 1, reset to start=2. Tank -2 at station 2, reset to start=3. From station 3: tank = 3, then 3+3=6, then 6-2=4, then 4-2=2, then 2-2=0. Works. Answer: 3.

Time: O(n). Space: O(1).
def can_complete_circuit(gas, cost):
    """Find starting gas station to complete circuit"""
    total_tank = 0       # Tracks global feasibility (sum of all net gains)
    current_tank = 0     # Tracks local feasibility from current start
    start = 0            # Candidate starting station
    
    for i in range(len(gas)):
        total_tank += gas[i] - cost[i]
        current_tank += gas[i] - cost[i]
        
        if current_tank < 0:       # Ran out of gas starting from 'start'
            start = i + 1           # Skip all stations from old start to i
            current_tank = 0        # Reset for fresh start
    
    return start if total_tank >= 0 else -1  # Global check: is a solution possible?

5. Partition Labels

Partition a string into as many parts as possible so that each letter appears in at most one part. This is secretly an interval merging problem in disguise.

The key reframing: Each character c defines an interval [first_occurrence(c), last_occurrence(c)]. The constraint “each letter in at most one part” means no character’s interval can be split across partitions. So the problem becomes: merge all overlapping character intervals and report the merged segment sizes.

The single-pass trick: Scan left to right, expanding end to cover the last occurrence of every character seen so far. When i == end, all characters in the current partition have their last occurrences within [start, end] — safe to cut.

Walked example: s = “ababcbacadefegdehijhklij”. Character a spans [0,8], b spans [1,5], c spans [4,7], d spans [9,14], e spans [10,15], etc. First merged interval: [0,8] (length 9). Second: [9,15] (length 7). Third: [16,23] (length 8). Result: [9, 7, 8].

Time: O(n) — two passes (one for the last-occurrence map, one for scanning). Space: O(1) extra (the map has at most 26 entries for lowercase letters).
def partition_labels(s):
    """Partition string so each letter appears in at most one part"""
    # PASS 1: Find last occurrence of each character
    last = {c: i for i, c in enumerate(s)}
    
    result = []
    start = 0    # Left boundary of current partition
    end = 0      # Right boundary (expands as we see characters with later last-occurrences)
    
    # PASS 2: Expand partition greedily, cut when all intervals are closed
    for i, c in enumerate(s):
        end = max(end, last[c])   # Expand to include c's full range
        
        if i == end:              # All characters in [start, end] have their last occurrence here
            result.append(end - start + 1)
            start = i + 1         # Begin next partition
    
    return result

Classic Problems

  • Pattern: Sort by end time, greedily select non-overlapping. Key: earliest end time leaves the most room for the future.
  • Pattern: Track the maximum reachable index. Key: at each position, update the farthest reach.
  • Pattern: Process the most frequent tasks first. Key: idle time depends on the most frequent task.
  • Pattern: Two passes (left to right, right to left). Key: satisfy constraints from both directions.
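The two-pass pattern in the last entry is the Candy problem (LC 135); a sketch of how the two directions combine:

```python
def candy(ratings):
    """Two-pass greedy: the left-to-right pass enforces 'more candy than the
    left neighbor when rated higher'; the right-to-left pass enforces the
    same against the right neighbor. Taking the max of both passes satisfies
    both constraints with the minimum total."""
    n = len(ratings)
    candies = [1] * n
    for i in range(1, n):                       # left -> right
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1
    for i in range(n - 2, -1, -1):              # right -> left
        if ratings[i] > ratings[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)
    return sum(candies)
```

For ratings [1,0,2] the passes produce [2,1,2], total 5.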

Common Mistakes

Avoid These Pitfalls:
  1. Applying greedy when DP is needed: The coin change problem with arbitrary denominations (e.g., coins [1, 3, 4], amount 6) fails with greedy (greedy picks 4+1+1 = 3 coins, optimal is 3+3 = 2 coins). Greedy only works for canonical coin systems like US currency, and canonicity is a property of the whole denomination set that must be verified; simple rules of thumb such as “each coin is at least 2x the next smaller one” are not sufficient (for coins [1, 5, 11] and amount 15, greedy gives 11+1+1+1+1 = 5 coins, but 5+5+5 = 3 is optimal).
  2. Wrong sorting key for intervals: Sorting by start time for interval scheduling gives wrong results. Example: [1,100], [2,3], [4,5] — sorting by start picks [1,100] first and only fits 1 interval, but sorting by end picks [2,3] and [4,5] for 2 intervals.
  3. Not being able to justify why greedy works: In an interview, simply saying “I’ll use greedy” is not enough. You need to briefly argue why the locally optimal choice leads to the globally optimal solution (exchange argument or “greedy stays ahead” reasoning). Even a one-sentence justification shows understanding.
  4. Edge cases: Empty input, single element, all intervals identical, values of zero. Test your greedy with the smallest non-trivial input to catch off-by-one errors.
  5. Confusing “minimize removals” with “maximize selections”: These are complementary views of the same problem. “Minimum intervals to remove for non-overlapping” = n - max_non_overlapping. Solving the wrong formulation is a common interview mistake.

How to Prove Greedy Works

1. Greedy Stays Ahead

Show that after each step, greedy is at least as good as optimal.

2. Exchange Argument

Show that swapping a choice in optimal with greedy’s choice doesn’t hurt.

3. Structural Induction

Prove the base case, then show: if true for size k, true for size k+1.

Interview Problems by Company

Problem                | Company   | Key Concept
Assign Cookies         | Amazon    | Sort both arrays
Lemonade Change        | Google    | Track denominations
Best Time to Buy Stock | All FAANG | Track minimum so far
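Assign Cookies (LC 455) from the table above is the simplest “sort both arrays” greedy; a minimal sketch:

```python
def find_content_children(greed, cookies):
    """Sort both arrays, then satisfy the least greedy remaining child with
    the smallest cookie that works. Two pointers after sorting."""
    greed.sort()
    cookies.sort()
    child = 0
    for cookie in cookies:
        if child < len(greed) and cookie >= greed[child]:
            child += 1          # this child is content; move to the next
    return child
```

Matching the smallest workable cookie to the least greedy child never wastes a large cookie, which is the exchange argument in miniature.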

Interview Tips

Script for interviews:
  1. “This looks like a greedy problem because local choices lead to global optimum.”
  2. “My greedy strategy is: [describe the choice]”
  3. “This works because [brief justification]”
  4. “I’ll sort by [criteria] first, then iterate greedily.”
  5. “Time is O(n log n) for sorting, O(n) for the greedy pass.”
Interviewer Says          | You Should Think
“Minimum operations”      | Greedy or DP
“Schedule tasks/meetings” | Interval scheduling
“Maximum profit/value”    | Greedy or DP
“Prove it’s optimal”      | Exchange argument
“What if greedy fails?”   | Consider DP
def solve_greedy(items):
    """Generic greedy skeleton. initial_state, can_include, and update
    are problem-specific placeholders, not real functions."""
    # Step 1: Sort by the greedy criterion
    items.sort(key=lambda x: x.end_time)  # or another key
    
    result = 0
    current_state = initial_state
    
    # Step 2: Make the greedy choice at each step
    for item in items:
        if can_include(item, current_state):
            result += 1
            current_state = update(current_state, item)
    
    return result

Greedy Templates

Two reusable shapes you will write again and again. Memorize the structure, fill in the comparator and update.

Template 1: Sort Plus One Pass

Python
def greedy_sort_then_pick(items):
    items.sort(key=lambda x: x[1])   # sort by some key (often end time)
    result = []
    last = -float('inf')
    for x in items:
        if x[0] >= last:             # condition for picking
            result.append(x)
            last = x[1]              # update state
    return result
Java
Arrays.sort(items, (a, b) -> Integer.compare(a[1], b[1]));  // sort by key
List<int[]> result = new ArrayList<>();
int last = Integer.MIN_VALUE;
for (int[] x : items) {
    if (x[0] >= last) {
        result.add(x);
        last = x[1];
    }
}
C++
sort(items.begin(), items.end(), [](auto& a, auto& b){ return a[1] < b[1]; });
vector<vector<int>> result;
int last = INT_MIN;
for (auto& x : items) {
    if (x[0] >= last) {
        result.push_back(x);
        last = x[1];
    }
}

Template 2: Single-Pass Tracker (Jump Game style)

Python
def greedy_one_pass(arr):
    farthest = 0
    for i, val in enumerate(arr):
        if i > farthest:        # cannot reach this index
            return False
        farthest = max(farthest, i + val)
    return True

Counter-Pattern — Where Greedy Fails

The classic counterexample: minimum number of coins with arbitrary denominations.
# Greedy (WRONG): always pick the largest coin that fits.
def coin_change_greedy(coins, amount):
    coins.sort(reverse=True)
    count = 0
    for c in coins:
        while amount >= c:
            amount -= c
            count += 1
    return count if amount == 0 else -1

# coins = [1, 3, 4], amount = 6
# Greedy: 4 + 1 + 1 = 3 coins
# Optimal: 3 + 3 = 2 coins  -- greedy missed it
This is why “minimum coins” with arbitrary denominations is a DP problem (LC 322), not a greedy one. Always run a counterexample on small inputs before committing to greedy.
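For contrast, a standard bottom-up DP sketch of the fix the text points to (LC 322):

```python
def coin_change_dp(coins, amount):
    """Bottom-up DP: dp[a] = fewest coins summing to a. Unlike greedy, this
    considers every coin at every amount, so non-canonical denominations
    cannot fool it."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```

On the counterexample above it returns 2 (3 + 3), where greedy returned 3.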

Worked LeetCode Problems

Six staple greedy problems. If you can write each of these from scratch in eight minutes, you have the pattern.

LC 55 — Jump Game (Medium)

Pattern: Single-pass tracker — maintain the farthest index reachable so far.
Python
def canJump(nums):
    farthest = 0
    for i, jump in enumerate(nums):
        if i > farthest:                # we cannot even get to index i
            return False
        farthest = max(farthest, i + jump)
    return True
Why greedy works: at index i, the maximum reach is i + nums[i]. If any reachable index updates farthest to at least n - 1, we are done. We never have to consider “what if I had jumped less here so I could jump more later” because the only thing that matters is the maximum reach, and a larger jump at the current index can never hurt.

LC 45 — Jump Game II (Medium)

Pattern: BFS-flavored greedy — count the layers of “current reach window.”
Python
def jump(nums):
    jumps = 0
    current_end = 0
    farthest = 0
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:            # exhausted current jump's reach
            jumps += 1
            current_end = farthest      # commit to a new jump
    return jumps
Why greedy works: treat each “jump” as a BFS layer. Inside the current layer, scan everything reachable, tracking the farthest you can reach in one more jump. When you hit the layer boundary, increment the jump count. Each index is visited once — O(n).

LC 134 — Gas Station (Medium)

Pattern: Two-pointer-flavored greedy with a clever invariant.
Python
def canCompleteCircuit(gas, cost):
    if sum(gas) < sum(cost):
        return -1                       # impossible globally
    tank = 0
    start = 0
    for i in range(len(gas)):
        tank += gas[i] - cost[i]
        if tank < 0:                    # cannot reach i+1 from start
            start = i + 1
            tank = 0
    return start
Why greedy works: if the total gas is at least the total cost, a valid start exists. If your tank goes negative anywhere between start and i, then no index in [start, i] can be a valid start (each one would have less or equal tank when reaching i). Skip them all and try i + 1. One pass.

LC 435 — Non-overlapping Intervals (Medium)

Pattern: Sort by end time, count compatible intervals, subtract.
Python
def eraseOverlapIntervals(intervals):
    intervals.sort(key=lambda x: x[1])
    last_end = -float('inf')
    keep = 0
    for start, end in intervals:
        if start >= last_end:
            keep += 1
            last_end = end
    return len(intervals) - keep
Why end time and not start time? End time minimizes the resource consumed. Sorting by start time can pick a long interval that blocks two short ones. The exchange argument: replace any “later-ending” choice with an “earlier-ending” alternative; the count never decreases.

LC 621 — Task Scheduler (Medium)

Pattern: Formula based on the most frequent task — no simulation needed for the standard variant.
Python
def leastInterval(tasks, n):
    from collections import Counter
    freq = Counter(tasks).values()
    max_freq = max(freq)
    count_max = sum(1 for f in freq if f == max_freq)
    return max(len(tasks), (max_freq - 1) * (n + 1) + count_max)
The formula: imagine laying down the most frequent task in slots n + 1 apart: A _ _ A _ _ A. That is (max_freq - 1) * (n + 1) + 1. If multiple tasks tie for the top frequency, each one extends the final row by 1 slot, hence + count_max. The max(len(tasks), formula) guards the case where tasks already fill all slots without idle time.

LC 763 — Partition Labels (Medium)

Pattern: Pre-compute last-occurrence index, then expand the current partition.
Python
def partitionLabels(s):
    last = {c: i for i, c in enumerate(s)}
    result = []
    start = 0
    end = 0
    for i, c in enumerate(s):
        end = max(end, last[c])
        if i == end:
            result.append(end - start + 1)
            start = i + 1
    return result
Why greedy works: a partition must contain every occurrence of each character it includes. So the partition ending at i must extend at least to last[c] for every c seen. When i finally equals end, the partition is closed — no character inside it appears later.
Greedy traps to remember in interviews:
  1. Greedy doesn’t always work. Always sketch a counterexample before committing. The exchange argument or a small disproof should justify the approach in 30 seconds.
  2. Coin change with arbitrary denominations is NOT greedy. US coins (1, 5, 10, 25) are canonical; arbitrary denominations like (1, 3, 4) break greedy. Use DP (LC 322).
  3. Interval scheduling: sort by END time, not start time. Sorting by start time picks a long interval first that blocks two short ones.
  4. Meeting Rooms II is NOT just sort-and-pick. It is interval-partitioning — count overlapping intervals using a min-heap of end times or a sweep line.
  5. 0/1 Knapsack is not greedy. Even sorted by value-per-weight, greedy fails because picking one item prevents combining two smaller items. Fractional knapsack is greedy.
  6. Jump Game II is greedy, but the BFS layer mental model is essential. Naive “always jump as far as possible” fails — you must scan the whole current layer before committing to the next jump.
  7. Task Scheduler formula breaks if cooldowns are heterogeneous. With per-task cooldowns, fall back to simulation (max-heap plus cooldown queue).
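Trap 4 mentions the min-heap of end times for Meeting Rooms II; a sketch of that approach:

```python
import heapq

def min_meeting_rooms_heap(intervals):
    """Sort by start time; the heap holds end times of meetings currently
    occupying a room. If the earliest-ending meeting finishes before the
    next one starts, reuse its room (replace); otherwise open a new room."""
    if not intervals:
        return 0
    intervals.sort(key=lambda iv: iv[0])
    ends = []                                   # min-heap of end times
    for start, end in intervals:
        if ends and ends[0] <= start:
            heapq.heapreplace(ends, end)        # reuse the freed room
        else:
            heapq.heappush(ends, end)           # need another room
    return len(ends)
```

The heap size at the end equals the peak number of simultaneous meetings, the same quantity the sweep-line version computes.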
Three sentences to say in any greedy interview:
  • “Let me first try a small counterexample to see if greedy is even correct.”
  • “I’ll prove this with an exchange argument: swapping any non-greedy choice for the greedy choice does not decrease the answer.”
  • “If greedy fails, the natural fallback here is DP — specifically [name the DP recurrence].”

Curated LeetCode List

Sorted by difficulty and sub-pattern. Solve in this order; each one reinforces a different greedy idea.
LC # | Problem                                    | Difficulty | Sub-pattern
860  | Lemonade Change                            | Easy       | Greedy by denomination
122  | Best Time to Buy and Sell Stock II         | Easy       | Sum positive deltas
55   | Jump Game                                  | Medium     | Single-pass reach tracker
45   | Jump Game II                               | Medium     | BFS-layer greedy
134  | Gas Station                                | Medium     | Skip-to-next-valid-start
435  | Non-overlapping Intervals                  | Medium     | Sort by end time
452  | Minimum Number of Arrows to Burst Balloons | Medium     | Sort by end, count overlaps
621  | Task Scheduler                             | Medium     | Frequency formula
763  | Partition Labels                           | Medium     | Expand to last occurrence
1029 | Two City Scheduling                        | Medium     | Sort by cost-A minus cost-B
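The last list entry, Two City Scheduling (LC 1029), is the cleanest example of “sort by a delta”; a minimal sketch:

```python
def two_city_sched_cost(costs):
    """costs[i] = [cost_a, cost_b]. Sort by (cost_a - cost_b): the more
    negative the delta, the stronger the case for city A. Send the first
    half to A and the second half to B."""
    costs = sorted(costs, key=lambda c: c[0] - c[1])
    n = len(costs) // 2
    return sum(c[0] for c in costs[:n]) + sum(c[1] for c in costs[n:])
```

Sorting by the delta rather than either absolute cost is the greedy insight: it ranks people by how much is saved by choosing A over B.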

Practice Problems

ProblemDifficultyLink
Jump GameMediumLeetCode 55
Gas StationMediumLeetCode 134
Partition LabelsMediumLeetCode 763
Task SchedulerMediumLeetCode 621
Non-overlapping IntervalsMediumLeetCode 435

Practice Roadmap

Day 1: Basic Greedy

  • Solve: Assign Cookies, Lemonade Change
  • Focus: Simple greedy choices

Day 2: Interval Problems

  • Solve: Non-overlapping Intervals, Meeting Rooms
  • Focus: Sort-by-end-time pattern

Day 3: Array Greedy

  • Solve: Jump Game I and II, Gas Station
  • Focus: Tracking reachability

Day 4: Complex Greedy

  • Solve: Task Scheduler, Candy
  • Focus: Multi-pass greedy
Interview Tip: Before applying greedy, ask yourself: “Does taking the best choice now prevent a better overall solution?” If yes, consider DP instead.

Interview Questions

Deep-dive questions that test real understanding of the Greedy pattern, not just recognizing “sort then iterate.” Expect interviewers to challenge your correctness proofs and push you toward edge cases where greedy breaks.
Difficulty: Intermediate

What interviewers are really testing: Whether you understand why the greedy choice is correct, not just what the greedy choice is. Sorting by end time is the textbook answer, but most candidates cannot explain why sorting by start time fails. This separates candidates who memorize templates from those who can derive greedy strategies from first principles.

Strong answer:
  • The greedy goal is to leave as much room as possible for future intervals. An interval that ends earliest frees up the timeline the most. Sorting by start time tells you nothing about when an interval finishes — an interval starting at time 1 could end at time 1000, blocking everything.
  • Concrete counterexample: Intervals [1,10], [2,3], [4,5]. Sorted by start time, you pick [1,10] first and can fit nothing else (1 interval). Sorted by end time: [2,3], [4,5], [1,10] — you pick [2,3], then [4,5] (2 intervals). The start-time-first approach greedily chose an interval that monopolized the timeline.
  • Sorting by end time works because of the exchange argument: if the optimal solution picks an interval that does not end earliest, you can swap it for the earliest-ending interval without losing any capacity for future selections (the earliest-ending interval frees up at least as much space). So greedy stays ahead of or ties optimal at every step.
  • Edge nuance: Sorting by start time does work for a different problem — interval partitioning (minimum meeting rooms), where you process intervals in order of arrival and allocate rooms. Different greedy problems demand different sorting keys.
Red flag answer: “We sort by end time because that is the standard approach for interval problems.” This is pure memorization. If the candidate cannot produce a counterexample showing why start-time sorting fails, or articulate the exchange argument even informally, they will crumble on any variation the interviewer throws at them.

Follow-ups:
  1. What if you need to minimize the number of intervals to remove to make the rest non-overlapping? Does the same greedy choice apply? (Answer: yes, it is the complement problem. Maximize non-overlapping = minimize removals. Same sort by end time.)
  2. What if intervals have weights and you want to maximize total weight of non-overlapping intervals? Does greedy still work? (Answer: no. This is the Weighted Job Scheduling problem, which requires DP. Greedy by end time ignores weights, and greedy by weight ignores overlaps. You need dp[i] = max(dp[i-1], weight[i] + dp[last_compatible[i]]) with binary search.)
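The weighted variant in follow-up 2 can be sketched directly from the recurrence it names, assuming jobs as (start, end, weight) tuples:

```python
import bisect

def weighted_job_scheduling(jobs):
    """jobs: list of (start, end, weight). dp[i] = best total weight using
    the first i jobs after sorting by end time. For job i, binary-search
    the last job whose end is <= its start (the last compatible job)."""
    jobs.sort(key=lambda j: j[1])                    # sort by end time
    ends = [j[1] for j in jobs]
    dp = [0] * (len(jobs) + 1)
    for i, (start, end, weight) in enumerate(jobs):
        k = bisect.bisect_right(ends, start, 0, i)   # jobs ending <= start
        dp[i + 1] = max(dp[i], weight + dp[k])       # skip job i, or take it
    return dp[-1]
```

This is O(n log n) and shows exactly why greedy fails here: the choice at job i depends on dp values, not on end time alone.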
Difficulty: Senior

What interviewers are really testing: Whether you can prove a greedy algorithm is correct, not just claim it works. At the senior level, interviewers expect you to reason about correctness rigorously. If you propose a greedy approach in a system design or algorithm round and cannot justify it, you are hand-waving — and strong interviewers will call you on it.

Strong answer:
  • The exchange argument works in three steps: (1) Assume an optimal solution exists. (2) Show you can exchange one of optimal’s choices for the greedy choice without making the solution worse. (3) Repeat until optimal matches greedy, proving greedy is also optimal.
  • For activity selection: Let G = {g1, g2, ...} be greedy’s selections (sorted by end time) and O = {o1, o2, ...} be some optimal solution. Suppose o1 != g1. Since greedy picks the earliest-ending activity, g1.end <= o1.end. Replace o1 with g1 in O. No activity in O overlapped with o1, and since g1 ends even earlier, g1 cannot overlap them either. So O' = {g1, o2, o3, ...} is still valid and has the same size. Repeat for o2 vs g2, and so on.
  • The critical realization: The exchange argument does not prove greedy finds the only optimal solution — it proves greedy finds an optimal solution. There may be multiple optimal solutions.
  • When exchange argument breaks: If the “exchange” makes the solution worse (e.g., weighted intervals where swapping a high-weight interval for the earliest-ending low-weight one reduces total value), you know greedy is incorrect for that problem. This is your litmus test: can you swap without damage?
Red flag answer: “Greedy works because we always pick the best option” — this is a tautology, not a proof. Another red flag: confusing the exchange argument with proof by induction. They are different techniques (though they can be combined). If a candidate says “it is obvious that the greedy choice is optimal,” they have not actually proven anything.

Follow-ups:
  1. What is the “greedy stays ahead” proof technique and how does it differ from exchange? (Answer: “greedy stays ahead” shows that after each step, greedy’s partial solution is at least as good as any other algorithm’s partial solution. It is more direct but requires defining a clear measure of “ahead.” Exchange works by modifying the optimal solution to match greedy.)
  2. Can you sketch a quick exchange argument for why Huffman coding produces the optimal prefix code? (Answer: if two least-frequent characters are not siblings at the maximum depth, swap them with the actual deepest siblings. Since they have the lowest frequency, pushing them deeper cannot increase total cost. So the greedy choice of combining least-frequent first is safe.)
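The merge-least-frequent-first greedy in follow-up 2 has the same shape as LC 1167 (connecting sticks); a sketch of the heap loop, using merge cost as a stand-in for Huffman's weighted path length:

```python
import heapq

def min_merge_cost(weights):
    """Repeatedly merge the two smallest items; each merge costs their sum.
    The min-heap makes the 'least frequent first' greedy choice O(log n)
    per merge, O(n log n) overall."""
    heap = list(weights)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)   # merged item competes in later merges
    return cost
```

Merging [2, 4, 3] goes 2+3=5, then 5+4=9, for a total cost of 14; the smallest items end up merged most often, mirroring the deepest leaves in a Huffman tree.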
Difficulty: Foundational

What interviewers are really testing: Whether you understand the greedy invariant, not just the code. Many candidates can write the max_reach loop but cannot explain why it is correct or what the loop invariant is. This also tests whether a candidate thinks about problem properties (is the problem even greedy?) or jumps straight to coding.

Strong answer:
  • The invariant: After processing index i, max_reach stores the farthest index reachable using any combination of jumps from indices 0..i. If at any point i > max_reach, there is a “gap” we cannot cross, so we return False.
  • Why greedy works here: At each index, you do not need to decide which specific jump to take — you just need to know whether the current index is reachable and how far you can reach from here. The maximum reachable index is monotonically non-decreasing as you scan left to right, so the “best local information” (maximum reach from any visited index) is also globally sufficient.
  • Why this is greedy, not DP: There are no overlapping subproblems. You never need to revisit an index. A single left-to-right scan with a running max is sufficient. DP would be O(n^2) (for each position, check all previous positions that can reach it), but the greedy observation that max_reach only grows eliminates that.
  • Key insight for Jump Game II (min jumps): Extend the idea with a “current level end” boundary. Every time you reach the boundary, you must take a jump. Track the farthest you can reach within the current level. This is essentially BFS in disguise, where each “level” is one jump.
# The invariant: if i <= max_reach, position i is reachable
def can_jump(nums):
    max_reach = 0
    for i in range(len(nums)):
        if i > max_reach:     # Unreachable gap detected
            return False
        max_reach = max(max_reach, i + nums[i])  # Monotonically non-decreasing
    return True
Red flag answer: “We just keep track of how far we can jump and update it.” This describes what the code does but not why it is correct. If the candidate cannot state the invariant (every index up to max_reach is reachable), they will not be able to adapt this approach to variations like Jump Game II or jump-with-cost problems.
Follow-ups:
  1. Jump Game II asks for the minimum number of jumps. How does the greedy approach change, and why is it BFS-like? (Answer: maintain current_end and farthest. When you hit current_end, increment jumps and set current_end = farthest. Each “level” between boundaries represents one jump, just like BFS levels.)
  2. What if each jump has a cost and you want to minimize total cost to reach the end? Does greedy still work? (Answer: no. Greedy by max-reach ignores cost, and greedy by min-cost ignores reachability. You need DP: dp[i] = min(dp[j] + cost[j]) for all j that can reach i. Or use a min-heap for an O(n log n) approach.)
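The DP from follow-up 2 can be sketched in a few lines — a minimal illustration, assuming the (hypothetical) cost model that you pay cost[j] each time you jump from index j:

```python
def min_cost_to_end(nums, cost):
    # dp[i] = minimum total cost to reach index i.
    # Greedy by max-reach no longer works: it ignores cost entirely.
    n = len(nums)
    INF = float('inf')
    dp = [INF] * n
    dp[0] = 0
    for i in range(1, n):
        for j in range(i):
            # j can reach i, and j itself is reachable
            if j + nums[j] >= i and dp[j] != INF:
                dp[i] = min(dp[i], dp[j] + cost[j])
    return dp[n - 1]
```

This is the O(n^2) formulation; the min-heap variant mentioned above trades the inner scan for a priority queue.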
Difficulty: Senior
What interviewers are really testing: Whether you can explain the non-obvious correctness of a greedy algorithm where the “why” is much harder than the “what.” Most candidates can code the solution but cannot explain why skipping all stations between the old start and the failure point is safe. This is a proof-of-understanding question.
Strong answer:
  • The algorithm: Track total_tank (global feasibility) and current_tank (local feasibility from current start). If current_tank drops below 0 at station i, set start = i + 1 and reset current_tank = 0.
  • Why skipping stations is safe: If starting from station s, you run out of gas at station i, then no station between s and i (inclusive) can be a valid start. Here is why: station s was chosen because it had positive or zero net gas. As you drove from s, the cumulative gas was non-negative at every intermediate station k (otherwise you would have failed earlier at k). Since even with that “head start” of accumulated gas from s..k-1, you still failed at i, starting fresh from k (with zero accumulated gas) would fail even sooner.
  • Global feasibility check: total_tank >= 0 at the end guarantees a solution exists. If the total gas >= total cost, there must be some starting point that works (this follows from the circular nature of the problem — the deficit from the worst segment is compensated by surplus elsewhere).
  • Why the last start is correct: After the final reset, every subsequent station from start to n-1 is reachable (otherwise another reset would have occurred). And we know the total is non-negative, so the deficit from stations 0..start-1 is covered by the surplus from start..n-1 carried around.
# The key insight: if you fail at i starting from s,
# no station in [s, i] can be the answer
def can_complete_circuit(gas, cost):
    total_tank = current_tank = start = 0
    for i in range(len(gas)):
        total_tank += gas[i] - cost[i]
        current_tank += gas[i] - cost[i]
        if current_tank < 0:
            start, current_tank = i + 1, 0   # Skip entire [old_start..i]; fresh start
    return start if total_tank >= 0 else -1
Red flag answer: “If the total gas is enough, we just find where to start by resetting.” This describes the code but gives no justification for why the reset skips are valid. If a candidate says “it just works because of the greedy property,” that is a circular non-answer. The specific proof that no station between old-start and failure-point can work is the crux.
Follow-ups:
  1. What if there can be multiple valid starting stations? Does this algorithm find all of them or just one? (Answer: it finds one. The problem guarantees uniqueness if a solution exists. If uniqueness is not guaranteed, you would need to test each candidate start in O(n), giving O(n^2) worst case, or use more sophisticated analysis.)
  2. What if the circuit is not circular but linear (start at 0, reach station n-1)? Does the same approach work? (Answer: simpler — just scan left to right. If current_tank ever goes negative, there is no solution. No need for the circular logic or total_tank check.)
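The linear variant from follow-up 2 reduces to a single prefix scan — a minimal sketch, assuming cost[i] is the fuel needed to drive from station i to i+1:

```python
def can_reach_last_station(gas, cost):
    # Linear road: start at station 0 with an empty tank, reach station n-1.
    # If the running tank ever goes negative, there is no solution.
    tank = 0
    for i in range(len(gas) - 1):   # no driving needed after the last station
        tank += gas[i] - cost[i]
        if tank < 0:
            return False
    return True
```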
Difficulty: Intermediate
What interviewers are really testing: Whether you understand the structural reason greedy fails — the greedy choice property — not just “0/1 Knapsack needs DP.” This is the most common greedy-vs-DP comparison question and interviewers expect a precise, example-backed answer.
Strong answer:
  • The core difference: In Fractional Knapsack, you can take a fraction of an item, so the greedy choice (best value-per-weight ratio) is always safe — taking a fraction of the best item never blocks a better overall solution. In 0/1 Knapsack, you must take or leave the entire item, so a high-ratio item might consume capacity that could have been better used by a combination of smaller items.
  • Concrete counterexample: Capacity = 10. Items: A (weight 6, value 6, ratio 1.0), B (weight 5, value 5, ratio 1.0), C (weight 5, value 5, ratio 1.0). Greedy by ratio picks A first (or either, since ratios are equal), leaving capacity 4 — cannot fit B or C. Total value: 6. Optimal: take B + C = value 10, weight 10. Greedy missed it because taking A blocked the better combination.
  • The broken property: Greedy requires the greedy choice property — that making the locally optimal choice does not foreclose a globally optimal solution. Fractional Knapsack has this property (you can always “undo” a greedy choice by taking slightly less). 0/1 Knapsack does not (you cannot undo taking a whole item without removing it entirely, which changes the solution structure).
  • The DP solution uses overlapping subproblems: dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i]). This exhaustively considers both “take” and “skip” for each item, which greedy does not do.
Red flag answer: “0/1 Knapsack needs DP because you cannot split items.” While technically correct, this is surface-level. The candidate should explain why not splitting items breaks the greedy choice property, ideally with a counterexample. Another red flag: saying “greedy does not work for any knapsack problem” — it works perfectly for the fractional variant.
Follow-ups:
  1. What about the Unbounded Knapsack (unlimited copies of each item)? Can greedy work there? (Answer: still no. The same type of counterexample applies — a high-ratio item might waste capacity if it does not divide evenly into the remaining space. Unbounded Knapsack uses a different DP recurrence: dp[w] = max(dp[w], dp[w-weight[i]] + value[i]) for all items.)
  2. Is there a special case of 0/1 Knapsack where greedy does work? (Answer: yes, when all items have the same weight, or when the value is proportional to weight. In these degenerate cases, the greedy choice property holds because no trade-off between ratio and capacity exists.)
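The contrast is easy to demonstrate in code — a sketch with illustrative function names, run on the capacity-10 counterexample above (items as (weight, value) pairs):

```python
def fractional_knapsack(items, capacity):
    # Greedy by value/weight ratio is optimal ONLY because fractions are allowed.
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for w, v in items:
        take = min(w, capacity)
        total += v * take / w
        capacity -= take
        if capacity == 0:
            break
    return total

def knapsack_01(items, capacity):
    # dp[c] = best value with capacity c; iterate capacity downward
    # so each item is used at most once (0/1 semantics).
    dp = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

On items [(6, 6), (5, 5), (5, 5)] with capacity 10, ratio-greedy applied in 0/1 fashion takes A and stops at value 6; the DP recovers B + C = 10, and the fractional greedy also reaches 10 because it can top up with 4/5 of another item.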
Difficulty: Intermediate
What interviewers are really testing: Whether you understand the greedy invariant that drives partition expansion. This problem looks simple, but explaining why it produces the maximum number of partitions (not just some valid partition) requires careful reasoning about constraints. Interviewers also test whether you can connect this to interval merging.
Strong answer:
  • Reframe as an interval problem: Each character c in the string defines an interval [first_occurrence(c), last_occurrence(c)]. The constraint “each letter appears in at most one part” means these intervals cannot be split across partitions. The problem reduces to: merge all overlapping character intervals, then count the merged intervals.
  • Why the single-pass algorithm works: As you scan left to right, end = max(end, last[c]) is essentially merging the current character’s interval into the growing partition. When i == end, all character intervals that started in this partition have also ended — no character in this partition appears later. That is exactly the condition for a valid partition boundary.
  • Why it is optimal: You cut at the earliest possible position each time (as soon as i == end). Delaying a cut would only merge more characters into the current partition, reducing the total number of partitions. Cutting as early as possible therefore yields the maximum number of partitions, which is exactly what the problem asks for.
  • Time/Space: O(n) time (two passes: one for last-occurrence map, one for scanning). O(1) extra space (the last-occurrence map has at most 26 entries for lowercase letters).
# Reframing: each character defines an interval [first, last]
# The algorithm merges overlapping intervals greedily
def partition_labels(s):
    last = {c: i for i, c in enumerate(s)}
    result, start, end = [], 0, 0
    for i, c in enumerate(s):
        end = max(end, last[c])  # Expand partition to include c's full range
        if i == end:             # All intervals in this partition are closed
            result.append(end - start + 1)
            start = i + 1
    return result
Red flag answer: “We track the last occurrence and cut when we reach it.” This describes the mechanism but not the reasoning. If the candidate cannot connect this to interval merging or explain why cutting at i == end is correct (not just “it seems right”), they lack the deeper understanding. Another red flag: not realizing this produces the maximum number of partitions (not just any valid partition).
Follow-ups:
  1. What if the constraint changes to “each letter appears in at most TWO parts”? How does the approach change? (Answer: significantly harder. You would need to track not just the last occurrence but allow intervals to be split once. This likely requires DP or a more complex interval analysis — the clean greedy approach breaks.)
  2. Can you solve this problem using an explicit interval-merging approach and verify it gives the same result? (Answer: yes. Build intervals [first[c], last[c]] for each character, merge overlapping intervals, and count. It is O(n + 26 log 26) which simplifies to O(n). Same result, but the direct single-pass approach is cleaner for an interview.)
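The explicit merge from follow-up 2 might look like this (hypothetical function name); it returns the same partition sizes as the single-pass version:

```python
def partition_labels_by_merging(s):
    # Build [first, last] intervals per character, merge overlaps, report sizes.
    if not s:
        return []
    first, last = {}, {}
    for i, c in enumerate(s):
        first.setdefault(c, i)
        last[c] = i
    intervals = sorted((first[c], last[c]) for c in first)
    result = []
    cur_start, cur_end = intervals[0]
    for lo, hi in intervals[1:]:
        if lo <= cur_end:                 # Overlaps the current merged interval
            cur_end = max(cur_end, hi)
        else:                             # Gap: close the current partition
            result.append(cur_end - cur_start + 1)
            cur_start, cur_end = lo, hi
    result.append(cur_end - cur_start + 1)
    return result
```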
Difficulty: Senior
What interviewers are really testing: Whether you can apply greedy to a scheduling problem that is more nuanced than basic interval selection. This is the Job Sequencing with Deadlines problem, and the greedy strategy is non-obvious: you want to schedule tasks as late as possible to keep earlier slots open. It tests both greedy design and the ability to reason about slot allocation.
Strong answer:
  • Greedy strategy: Sort tasks by penalty (or profit) in decreasing order. For each task, schedule it in the latest available time slot before its deadline. If no slot is available, skip the task.
  • Why latest-available slot: Scheduling a task as late as possible before its deadline preserves earlier time slots for tasks with earlier deadlines. If you greedily assign tasks to their earliest available slot, you might block a higher-penalty task that needed that slot.
  • Implementation with Union-Find: The naive approach (for each task, scan backward from deadline to find an open slot) is O(n^2). Using a Union-Find (Disjoint Set) structure where each time slot points to the next available slot, you can find the latest open slot in near O(1) amortized time. Total: O(n log n) for sorting + O(n * alpha(n)) for scheduling, which is effectively O(n log n).
  • Concrete example: Tasks: {A: deadline 2, penalty 100}, {B: deadline 1, penalty 50}, {C: deadline 2, penalty 30}. Sort by penalty: A, B, C. Schedule A in slot 2 (latest before deadline 2). Schedule B in slot 1 (latest before deadline 1). C wants slot 2 or 1, both taken — skip. Total: 150. If we had greedily assigned A to slot 1 (earliest), B would have no slot, and we would get A + C = 130. Wrong.
# Greedy: sort by penalty descending, assign latest available slot
def job_sequencing(tasks):
    tasks.sort(key=lambda t: t.penalty, reverse=True)
    max_deadline = max(t.deadline for t in tasks)
    slots = [False] * (max_deadline + 1)  # slot[0] unused

    total_penalty = 0   # total penalty AVOIDED by finishing tasks on time
    for task in tasks:
        # Find latest available slot <= deadline
        for slot in range(task.deadline, 0, -1):
            if not slots[slot]:
                slots[slot] = True
                total_penalty += task.penalty
                break

    return total_penalty
Red flag answer: “Sort by deadline and process in order.” This is a common wrong greedy choice. Sorting by deadline alone ignores the penalty values — you might complete low-penalty tasks on time while missing high-penalty ones. Another red flag: not considering the slot-allocation aspect at all, treating it like interval scheduling.
Follow-ups:
  1. How would you optimize the slot-finding step from O(n) per task to near O(1)? (Answer: Union-Find. Each slot points to itself initially. When a slot is used, union it with slot - 1. Finding the latest available slot is a find(deadline) operation that returns the root of the component, which is the latest open slot.)
  2. What if tasks have different durations (not all unit time)? Does the greedy approach still work? (Answer: no. Variable durations make this a variant of the scheduling problem that is NP-hard in general. You would need DP or approximation algorithms depending on the specific constraints.)
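The Union-Find optimization from follow-up 1 might be sketched like this — assuming tasks arrive as (deadline, penalty) pairs, which differs from the object-style tasks used above:

```python
def job_sequencing_uf(tasks):
    # tasks: list of (deadline, penalty) pairs (assumed shape for this sketch).
    tasks = sorted(tasks, key=lambda t: t[1], reverse=True)
    max_d = max(t[0] for t in tasks)
    parent = list(range(max_d + 1))   # parent[s] = latest free slot <= s

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s

    saved = 0
    for deadline, penalty in tasks:
        slot = find(min(deadline, max_d))
        if slot > 0:                  # slot 0 acts as "no slot available"
            saved += penalty
            parent[slot] = slot - 1   # union with the previous slot
    return saved
```

Using the example above ({A: deadline 2, penalty 100}, {B: 1, 50}, {C: 2, 30}), this schedules A and B and skips C, saving 150.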
Difficulty: Senior
What interviewers are really testing: Whether you can visualize the idle-slot structure and reason about the two regimes (idle slots present vs. no idle slots needed). This is one of the most commonly asked greedy problems at Meta and Amazon, and the formula is deceptively simple — understanding when and why it breaks is what separates strong candidates.
Strong answer:
  • Visualization: Imagine placing the most frequent task (say A appears max_freq times) into a grid. You create max_freq - 1 “gaps” of size n between consecutive A’s (the cooldown). The formula (max_freq - 1) * (n + 1) counts the slots for all gaps plus the A’s that start them. Then add count_of_max_freq_tasks for the final row (tasks tied for highest frequency).
  • Example: Tasks A A A B B B, n=2. Grid: A _ _ | A _ _ | A. Gaps = 2, each size n+1=3, plus final row with A and B = 2. Total = 2*3 + 2 = 8. Fill in B’s: A B _ A B _ A B. Length 8.
  • When the formula “breaks”: When there are enough distinct tasks to fill all idle slots with no idle time remaining. In that case, total_tasks > formula_result, and the answer is simply len(tasks). The actual answer is max(len(tasks), formula_result).
  • Why this is greedy: The greedy insight is that the most frequent task is the bottleneck. All other tasks fit into the gaps created by the most frequent task. By processing the highest-frequency task first, you determine the structure, and everything else fills in.
from collections import Counter

def least_interval(tasks, n):
    freq = Counter(tasks)
    max_freq = max(freq.values())
    # Count how many tasks share the maximum frequency
    count_max = sum(1 for f in freq.values() if f == max_freq)

    # Formula: gaps created by most frequent task + final row
    formula = (max_freq - 1) * (n + 1) + count_max

    # If we have more tasks than the formula allows, no idle needed
    return max(len(tasks), formula)
Red flag answer: “You just simulate putting tasks into a priority queue.” While the simulation approach (max-heap + cooldown queue) works and is O(n log 26) per step, it misses the mathematical insight entirely. If a candidate cannot derive or explain the formula, they are coding without understanding. Another red flag: forgetting the max(len(tasks), formula) — this means they do not understand what happens when idle time disappears.
Follow-ups:
  1. What if the cooldown period n is 0? Does the formula still work? (Answer: yes. (max_freq - 1) * 1 + count_max = max_freq - 1 + count_max. But max(len(tasks), this) = len(tasks) since no idle is ever needed. The formula gracefully degrades.)
  2. What if instead of a cooldown period, two specific tasks cannot be adjacent (like a graph coloring constraint)? Does this greedy approach generalize? (Answer: no. This becomes a graph-based scheduling problem. The simple frequency-based formula assumes all tasks have the same mutual cooldown, which is a uniform constraint. Pairwise constraints require chromatic scheduling or interval coloring techniques.)
Difficulty: Senior
What interviewers are really testing: Whether you understand directional constraint propagation — the idea that a single greedy pass can only enforce constraints in one direction, and some problems require satisfying constraints from both directions simultaneously. This is a deeper algorithmic thinking test.
Strong answer:
  • The problem: Each child gets at least 1 candy. If a child has a higher rating than a neighbor, they must get more candies. Minimize total candies.
  • Why one pass fails: A left-to-right pass enforces “if rating[i] > rating[i-1], then candy[i] > candy[i-1].” But it cannot enforce the reverse: “if rating[i] > rating[i+1], then candy[i] > candy[i+1]” because you have not processed i+1 yet. Example: ratings [1, 3, 2]. Left-to-right gives [1, 2, 1]. Child at index 1 (rating 3) has more candy than child at index 2 (rating 2) — correct. But consider [1, 2, 3, 1]: left-to-right gives [1, 2, 3, 1], which is correct by luck. Now try [3, 2, 1]: left-to-right gives [1, 1, 1] — wrong, because each child should have more than the next.
  • Two passes work because: Pass 1 (left-to-right) enforces all “left neighbor” constraints. Pass 2 (right-to-left) enforces all “right neighbor” constraints. At each position, take max(left_pass[i], right_pass[i]) to satisfy both. Since each constraint is between adjacent pairs, and we enforce all leftward constraints in one pass and all rightward constraints in another, two passes cover all constraints.
  • Proof of minimality: Each child gets exactly the minimum candies needed to satisfy both neighbors. The max operation ensures both constraints are met, and neither pass assigns more than necessary (each starts from 1 and increments by exactly 1 per consecutive increase).
def candy(ratings):
    n = len(ratings)
    candies = [1] * n

    # Pass 1: left-to-right (enforce left-neighbor constraint)
    for i in range(1, n):
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1

    # Pass 2: right-to-left (enforce right-neighbor constraint)
    for i in range(n - 2, -1, -1):
        if ratings[i] > ratings[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)

    return sum(candies)
Red flag answer: “Two passes handle both directions.” While true, this is not an explanation. The candidate should articulate what constraint each pass enforces and why two is sufficient (because each constraint involves exactly two adjacent elements, so directionality covers all cases). A worse red flag: suggesting three or more passes “to be safe” — this shows the candidate does not understand the constraint structure.
Follow-ups:
  1. Can you solve this in a single pass using a different approach (not left-right greedy)? (Answer: yes, using a stack-based or “slope counting” approach that tracks ascending and descending sequences. It is O(n) time and O(1) extra space but significantly harder to implement correctly. The two-pass approach is almost always the better interview choice.)
  2. What if the constraint is “strictly more candies than BOTH neighbors if rating is higher than BOTH”? Does the two-pass approach still work? (Answer: yes. The two-pass approach already handles this. Each pass independently ensures the correct relationship with one neighbor, and the max combines them. The constraint “higher than both” is just the intersection of “higher than left” and “higher than right.”)
Difficulty: Staff-Level
What interviewers are really testing: Meta-reasoning and problem-solving judgment. At the staff level, interviewers want to see that you do not just know patterns — you have a decision framework for choosing between algorithmic paradigms. This is the question that reveals whether someone is a pattern-matcher or a problem-solver.
Strong answer:
  • My decision tree:
    1. Can I make an irrevocable choice at each step without needing to reconsider? If yes, greedy is a candidate. If I might need to “undo” a choice later, that is a sign I need DP.
    2. Does the problem have optimal substructure? Both greedy and DP require this. If subproblems are not independent (solving one affects another), neither works cleanly.
    3. Does the problem have overlapping subproblems? If yes, DP. If subproblems are independent and each is solved exactly once, greedy.
    4. Can I construct a counterexample where the greedy choice fails? I spend 2-3 minutes trying to break the greedy approach with small inputs. If I find a counterexample, it is DP (or backtracking). If I cannot, greedy is likely correct.
    5. Can I sketch an exchange argument? If I can informally argue “swapping the greedy choice for any other choice does not improve the solution,” I am confident in greedy.
  • Practical heuristic: If the problem says “minimum number of X” and the choices are independent (like interval scheduling), try greedy first. If it says “minimum cost” and choices have dependencies (like knapsack), lean toward DP.
  • Real example of the process: “Minimum coins to make change.” Greedy thought: always pick the largest coin. Counterexample: coins [1, 3, 4], target 6. Greedy picks 4+1+1=3 coins. Optimal: 3+3=2 coins. Greedy fails. Conclusion: DP.
  • Another example: “Minimum number of intervals to remove for non-overlap.” Greedy thought: sort by end time, keep non-overlapping. Can I break it? No — the exchange argument works (earliest end frees the most space). Conclusion: greedy.
Red flag answer: “If it is an optimization problem, I try greedy first and if it does not work I switch to DP.” This is not a decision framework, it is trial-and-error. The candidate should describe a structured approach involving counterexample testing and property checking. Another red flag: “Greedy is for simple problems and DP is for hard problems” — complexity is not the distinguishing factor; structural properties are.
Follow-ups:
  1. Can a problem have both a correct greedy solution and a correct DP solution? Give an example. (Answer: yes. Activity selection can be solved by DP with dp[i] = max activities from i..n, but greedy is simpler and faster. The DP is correct but unnecessary. Fractional Knapsack similarly has a DP formulation, but greedy is optimal and more efficient.)
  2. What about problems where greedy gives a good approximation but not the optimal answer? When would you use greedy anyway? (Answer: in practice, often. Set Cover is NP-hard, but the greedy algorithm gives a ln(n)-approximation. Traveling Salesman with nearest-neighbor heuristic gives a quick 2x-approximation. In production systems with latency constraints, a greedy approximation that runs in O(n log n) can be far more valuable than an exact DP solution that runs in O(2^n).)

Interview Deep-Dive

Strong Answer:
  • The exchange argument has three steps: (1) Assume an optimal solution O exists. (2) Show swapping one of O’s choices for greedy’s choice produces a solution at least as good. (3) Repeat until O matches greedy, proving greedy is optimal.
  • For interval scheduling: Greedy sorts by end time and picks the earliest-ending non-overlapping interval. Let g1 be greedy’s first pick and o1 be optimal’s first pick. Since g1 ends earliest, g1.end <= o1.end. Replace o1 with g1 in O. Since g1 ends no later than o1, it cannot overlap with O’s remaining intervals. So O’ = {g1, o2, o3, ...} is valid with the same size.
  • When the argument breaks: If the swap makes the solution worse (e.g., weighted intervals where g1 has lower weight), greedy fails for that problem.
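The end-time greedy described above, as a minimal sketch over (start, end) tuples:

```python
def max_non_overlapping(intervals):
    # Sort by end time; greedily keep each interval that starts
    # at or after the end of the last interval we kept.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:
            count += 1
            last_end = end
    return count
```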
Follow-up: What if intervals have weights and you want to maximize total weight of non-overlapping intervals?
  • Greedy fails. This is Weighted Job Scheduling requiring DP: dp[i] = max(dp[i-1], weight[i] + dp[last_compatible[i]]) with binary search. Time: O(n log n).
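The weighted variant's DP with binary search can be sketched as follows (jobs as (start, end, weight) tuples; the function name is illustrative):

```python
import bisect

def weighted_job_scheduling(jobs):
    # Sort by end time; dp[i] = best total weight using the first i jobs.
    jobs = sorted(jobs, key=lambda j: j[1])
    ends = [j[1] for j in jobs]
    n = len(jobs)
    dp = [0] * (n + 1)
    for i, (start, end, weight) in enumerate(jobs):
        # Number of earlier jobs ending at or before this job's start
        k = bisect.bisect_right(ends, start, 0, i)
        dp[i + 1] = max(dp[i], weight + dp[k])   # skip vs. take
    return dp[n]
```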
Strong Answer:
  • Fractional: You can take any fraction, so the greedy choice (best value-per-weight ratio) is always safe — you can adjust the fraction without blocking better options.
  • 0/1: Taking an entire item is irreversible and may consume capacity that a combination of smaller items would use more efficiently.
  • Counterexample: Capacity = 10. Items: A (weight 6, value 6), B (weight 5, value 5), C (weight 5, value 5). Greedy picks A (all ratios 1.0). Remaining capacity: 4. Total: 6. Optimal: B + C = 10. Greedy missed 40% of the optimal value.
  • The broken property: The greedy choice property requires that local decisions do not foreclose globally better solutions. Fractional knapsack has it; 0/1 does not.
Follow-up: Is there a special case of 0/1 Knapsack where greedy works?
  • Yes. When all items have the same weight or value is proportional to weight. In these degenerate cases, no trade-off exists between ratio and capacity.
Strong Answer:
  • Visualization: Place the most frequent task (A, appearing max_freq times) into a grid creating max_freq - 1 gaps of size n between consecutive A’s. The formula (max_freq - 1) * (n + 1) counts the slots for gaps plus A’s. Add count_of_max_freq_tasks for the final row.
  • Example: A A A B B B, n=2. Grid: A _ _ | A _ _ | A. Gaps = 2, each size 3. Final row: A and B (count_max = 2). Total = 2*3 + 2 = 8.
  • When the formula “breaks”: When there are enough distinct tasks to fill all idle slots. The answer becomes simply len(tasks). The actual answer is max(len(tasks), formula).
  • Why max() is needed: If you have 26 different tasks each appearing once and n=2, the formula gives a small number, but you need len(tasks) slots since no idle time is required.
Follow-up: What if tasks have different cooldowns — task A needs 3 cooldown, B needs 1? Does the formula still work?
  • No. The formula assumes uniform cooldown for all tasks. Variable cooldowns require the simulation approach (max-heap + cooldown queue), which generalizes naturally by setting each task’s available_time based on its own cooldown.

LeetCode Interview Q&A

Three full LeetCode-style questions you should be ready for. Each one comes with a strong-answer framework, the full code, senior follow-ups, common wrong answers, and further reading.
Strong Answer Framework
  1. State the structure: at each position you can jump up to nums[i] further. Goal: minimum jumps to reach n - 1.
  2. Greedy idea: treat each “jump” as a BFS layer. Inside the current layer, scan all reachable indices, tracking the farthest reach for the next jump.
  3. When you hit the right edge of the current layer, increment the jump count and commit the new layer’s right edge.
  4. Prove optimality with an exchange argument: any solution that jumps “less far” within a layer can be replaced by jumping to the maximum reach without increasing the count.
  5. State complexity: O(n) time, O(1) space.
Real LeetCode Walkthrough — LC 45
Python
def jump(nums):
    jumps = 0
    current_end = 0      # right edge of the current BFS layer
    farthest = 0         # max reach among indices in current layer
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:
            jumps += 1
            current_end = farthest
    return jumps
Trace nums = [2, 3, 1, 1, 4]:
  • i=0: farthest = max(0, 0+2) = 2. i == current_end (0), so jumps=1, current_end=2.
  • i=1: farthest = max(2, 1+3) = 4.
  • i=2: farthest = 4. i == current_end (2), so jumps=2, current_end=4.
  • Loop ends (we exclude index 4). Answer: 2.
Senior follow-up 1: Can you do it in O(1) extra space and one pass?
The shown code already is O(1) extra space and one pass. The trick is the layered scan — never re-examine an index, never store a queue.
Senior follow-up 2: What if some elements are negative — can you still go right?
Negative jumps would let you go backward. Greedy still works if you redefine reach as max(farthest, i + nums[i]) only when nums[i] >= 0. With unrestricted negatives, the problem becomes BFS on a graph (each node has variable in/out-degree), and complexity rises to O(n^2) worst case.
Senior follow-up 3: What if you must minimize total fuel cost where each step has a per-jump cost?
Greedy fails. This becomes a weighted shortest-path problem — use Dijkstra with priority queue, or DP where dp[i] = min cost to reach i. The example shows that “minimum count” and “minimum weighted sum” require different tools even on the same input shape.
Common Wrong Answers
  • Naive BFS with explicit queue: works but O(n) extra space and slower constants.
  • DP dp[i] = min(dp[j] + 1 for j in 0..i if j + nums[j] >= i): O(n^2), too slow for n = 10^4.
  • “Always jump to the farthest reachable index” (single greedy step): wrong — you must scan the entire current layer first.
Further Reading
  • LC 55 — Jump Game (the boolean prerequisite)
  • LC 1306 — Jump Game III (BFS on indices)
  • LC 1340 — Jump Game V (DP variant)
Strong Answer Framework
  1. Identify the bottleneck: the most frequent task max_freq cannot be packed tighter than every n + 1 slots.
  2. Lay out the most frequent task first: A _ _ A _ _ A for max_freq = 3, n = 2. That requires (max_freq - 1) * (n + 1) + 1 slots.
  3. Account for ties: if count_max tasks tie at max_freq, the final row holds count_max of them, so add count_max - 1 extra (or write the formula as (max_freq - 1) * (n + 1) + count_max).
  4. Edge case: if there are enough distinct tasks to fill all idle slots, the answer is just len(tasks). Take max(len(tasks), formula).
  5. Complexity: O(n) for the frequency count; the formula itself is O(1).
Real LeetCode Walkthrough — LC 621
Python
def leastInterval(tasks, n):
    from collections import Counter
    freq = Counter(tasks).values()
    max_freq = max(freq)
    count_max = sum(1 for f in freq if f == max_freq)
    return max(len(tasks), (max_freq - 1) * (n + 1) + count_max)
Trace tasks = ["A","A","A","B","B","B"], n = 2:
  • max_freq = 3, count_max = 2.
  • formula = (3 - 1) * (2 + 1) + 2 = 8.
  • len(tasks) = 6.
  • Answer: max(6, 8) = 8. Layout: A B _ A B _ A B.
Senior follow-up 1: How would you implement this with a max-heap simulation? When is that approach better?
Push frequencies into a max-heap. Each “round” of n + 1 ticks: pop up to n + 1 tasks, decrement counts, push back the still-positive ones. Time O(total_tasks * log unique_tasks). Useful when you also need the actual schedule, not just the time — the formula gives only the count.
Senior follow-up 2: What if each task has its own cooldown — task A needs 3, task B needs 1?
The closed-form formula breaks. Use a max-heap plus a cooldown queue keyed by available_time = current_time + task_cooldown. Pop the heap each tick, schedule the most frequent ready task, push it into the cooldown queue with its individual cooldown. O(total_tasks * log unique_tasks).
Senior follow-up 3: What if you must schedule the actual ordering, not just count idle slots?
Greedy by always picking “the most frequent ready task that is not in cooldown” works — this is the heap simulation. Tie-break alphabetically or by remaining count to make output deterministic.
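The heap-plus-cooldown-queue simulation mentioned in the follow-ups, sketched for the uniform-cooldown case (tick-by-tick rather than in rounds of n + 1; the function name is illustrative):

```python
import heapq
from collections import Counter, deque

def least_interval_sim(tasks, n):
    # Max-heap of remaining counts (negated); cooldown queue of
    # (ready_time, remaining_count) pairs in FIFO order.
    heap = [-c for c in Counter(tasks).values()]
    heapq.heapify(heap)
    cooldown = deque()
    time = 0
    while heap or cooldown:
        time += 1
        if cooldown and cooldown[0][0] <= time:
            heapq.heappush(heap, cooldown.popleft()[1])
        if heap:
            remaining = heapq.heappop(heap) + 1   # run one instance (counts negated)
            if remaining < 0:
                cooldown.append((time + n + 1, remaining))
        # else: idle tick — everything is cooling down
    return time
```

Unlike the closed-form formula, the loop also yields the actual schedule if you record which task runs at each tick.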
Common Wrong Answers
  • Sorting tasks by frequency and just spreading them out without the count_max term — fails on ties.
  • Forgetting max(len(tasks), formula) — fails when many distinct tasks fill the idle slots.
  • Simulating the heap correctly but adding +1 instead of accumulating idle time properly — subtle bug, hard to spot in interviews.
Further Reading
  • LC 358 — Rearrange String k Distance Apart (Task Scheduler with strings)
  • LC 767 — Reorganize String (special case n = 1)
  • LC 1405 — Longest Happy String (constrained construction)
Strong Answer Framework
  1. State the greedy: always pick the largest coin that fits.
  2. Construct the counterexample: coins [1, 3, 4], amount 6. Greedy gives 4+1+1 = 3 coins; optimal is 3+3 = 2 coins.
  3. Explain why greedy fails: picking the largest coin (4) consumes capacity that a combination of smaller coins (3+3) would use more efficiently. The “greedy choice property” does not hold.
  4. Prove DP works: dp[a] = min(dp[a - c] + 1) over all coins c at most a. Optimal substructure: the best way to make a ends in some coin c, after which you need the best way to make a - c.
  5. Complexity: O(amount * number_of_coins) time, O(amount) space.
Real LeetCode Walkthrough — LC 322 Coin Change
Python
def coinChange(coins, amount):
    INF = float('inf')
    dp = [INF] * (amount + 1)
    dp[0] = 0
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] != INF:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amount] if dp[amount] != INF else -1
For coins = [1, 3, 4], amount = 6: dp[0]=0, dp[1]=1, dp[2]=2, dp[3]=1, dp[4]=1, dp[5]=2, dp[6]=2. Answer: 2.
Senior follow-up 1: When does greedy actually work for coin change?
When the coin set is “canonical” — US coins [1, 5, 10, 25] are canonical, as are most national currencies. Pearson’s O(n^3) algorithm decides canonicity: greedy is optimal iff it matches DP on a bounded set of candidate counterexample amounts. In interviews, just say “US coins are canonical so greedy works there, but the general problem is DP.”
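For reference, the canonical-system greedy is only a few lines. A sketch: `coin_change_greedy` is an illustrative name, and correctness depends on the coin set being canonical.

```python
def coin_change_greedy(coins, amount):
    # Repeatedly take as many of the largest coin as fit.
    # Only optimal for canonical coin systems such as [1, 5, 10, 25].
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else -1
```

On US coins this matches DP (63 cents = 2x25 + 1x10 + 3x1 = 6 coins), but on the counterexample set [1, 3, 4] with amount 6 it returns 3 coins where DP finds 2.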
Senior follow-up 2: Can you do it in O(amount * sqrt(amount))? In O(amount log amount)?
Not in general: for arbitrary coin sets, the O(amount * coins) pseudo-polynomial DP is the standard bound. There are improvements for specific coin structures, but the worst case stays O(amount * coins).
Senior follow-up 3: How do you reconstruct the actual coins used, not just the count?
Keep a parent[a] = c table alongside dp[a], recording which coin was chosen to achieve dp[a]. To reconstruct: start at amount, repeatedly subtract parent[a] and prepend it to the answer until a == 0.
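The parent-table reconstruction can be sketched as below; `coin_change_with_coins` is an illustrative name for a variant of the LC 322 DP that also returns the coins used.

```python
def coin_change_with_coins(coins, amount):
    INF = float('inf')
    dp = [0] + [INF] * amount
    parent = [-1] * (amount + 1)   # parent[a] = coin chosen to achieve dp[a]
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
                parent[a] = c
    if dp[amount] == INF:
        return -1, []
    # Walk back from amount, subtracting the recorded coin each step.
    used, a = [], amount
    while a > 0:
        used.append(parent[a])
        a -= parent[a]
    return dp[amount], used
```

For coins = [1, 3, 4], amount = 6 this returns a count of 2 with coins [3, 3].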
Common Wrong Answers
  • Insisting greedy works “with the right sort order.” It does not — the issue is structural, not about ordering.
  • Confusing LC 322 (minimum coins) with LC 518 (number of ways). Different DP recurrences (and LC 518 is the one where loop order matters: coins outer, amount inner, otherwise you count permutations instead of combinations).
  • Top-down without memoization: exponential and times out.
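The LC 518 loop-order distinction called out above can be checked directly. A sketch, with made-up function names for the two orderings:

```python
def change_combinations(coins, amount):
    # Coins in the outer loop: all multiples of one coin are committed
    # before the next coin is considered, so 1+2 and 2+1 are counted once.
    dp = [1] + [0] * amount
    for c in coins:
        for a in range(c, amount + 1):
            dp[a] += dp[a - c]
    return dp[amount]

def change_permutations(coins, amount):
    # Amount in the outer loop: each ordering is counted separately.
    dp = [1] + [0] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] += dp[a - c]
    return dp[amount]
```

With coins = [1, 2] and amount = 3, the combinations version counts {1+1+1, 1+2} for 2, while the permutations version counts {1+1+1, 1+2, 2+1} for 3.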
Further Reading
  • LC 518 — Coin Change II (counting variant)
  • LC 983 — Minimum Cost For Tickets (similar DP shape)
  • LC 322 — Coin Change (this problem)
Strong Answer Framework
  1. Goal: remove the minimum number of intervals so that the rest are non-overlapping. Equivalently: keep the maximum number of compatible intervals.
  2. Greedy: sort by end time, sweep left to right, keep an interval if its start is at least the previous kept end.
  3. Exchange argument: let g be greedy’s first kept interval (earliest end), o1 the first kept interval in any optimal solution. Since g.end <= o1.end, swapping o1 for g does not break compatibility with the remaining optimal picks. Repeat for every position.
  4. Complexity: O(n log n) for the sort, O(n) sweep.
Real LeetCode Walkthrough — LC 435
Python
def eraseOverlapIntervals(intervals):
    intervals.sort(key=lambda x: x[1])   # CRITICAL: by end time
    last_end = -float('inf')
    keep = 0
    for start, end in intervals:
        if start >= last_end:
            keep += 1
            last_end = end
    return len(intervals) - keep
Trace [[1,2],[2,3],[3,4],[1,3]]: sort by end -> [[1,2],[2,3],[1,3],[3,4]]. Keep [1,2] (end=2). [2,3] starts at 2, keep (end=3). [1,3] starts at 1, skip. [3,4] starts at 3, keep (end=4). keep = 3. Answer: 4 - 3 = 1.
Senior follow-up 1: What if intervals have weights and you want maximum total weight of non-overlapping intervals?
Greedy fails. This is Weighted Interval Scheduling — DP with binary search. Sort by end time. dp[i] = max(dp[i-1], weight[i] + dp[p(i)]) where p(i) is the index of the latest interval that does not overlap with i. Find p(i) in O(log n) with binary search. Total O(n log n).
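The weighted variant can be sketched with sort-by-end plus binary search. Assumptions: `max_weight_schedule` is an illustrative name, intervals are (start, end, weight) tuples, and touching endpoints are compatible, matching the start >= last_end convention used for LC 435.

```python
import bisect

def max_weight_schedule(intervals):
    # intervals: list of (start, end, weight); compatible when
    # one interval ends at or before the other starts.
    intervals.sort(key=lambda x: x[1])           # sort by end time
    ends = [iv[1] for iv in intervals]
    n = len(intervals)
    dp = [0] * (n + 1)                           # dp[i] = best using first i
    for i, (s, e, w) in enumerate(intervals):
        # p = number of earlier intervals ending at or before s,
        # i.e. 1 + index of the latest compatible interval.
        p = bisect.bisect_right(ends, s, 0, i)
        dp[i + 1] = max(dp[i], w + dp[p])
    return dp[n]
```

For [(1,2,4), (2,3,5), (1,3,20)] the unweighted greedy would keep the two light intervals, but the DP correctly returns 20.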
Senior follow-up 2: How does this relate to LC 452 Minimum Arrows to Burst Balloons?
Same algorithm, different framing. Instead of “remove overlapping intervals,” LC 452 asks for the fewest piercing points such that every balloon is hit. The answer is the number of intervals kept by the greedy sweep; the code is essentially identical, just return the kept count instead of the removed count.
Senior follow-up 3: What if intervals are open vs closed (start equals previous end — do they overlap)?
Read the problem statement carefully. LC 435 treats [1,2] and [2,3] as non-overlapping (boundary touching is fine). LC 452 treats them as overlapping (one arrow at the shared edge bursts both balloons). Adjust the comparison accordingly: keep when start >= last_end if touching is allowed, keep when start > last_end if touching counts as overlap.
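For comparison, a sketch of the LC 452 sweep (`find_min_arrows` is an illustrative name). Note the two deliberate differences from the LC 435 code: the kept count is returned directly, and the comparison is strict because touching balloon edges share an arrow.

```python
def find_min_arrows(points):
    # Same end-time sweep as LC 435; each kept interval costs one arrow.
    points.sort(key=lambda x: x[1])
    arrows = 0
    last_end = -float('inf')
    for start, end in points:
        if start > last_end:       # strict: touching balloons overlap
            arrows += 1
            last_end = end
    return arrows
```

For [[10,16],[2,8],[1,6],[7,12]] this returns 2 (arrows at x = 6 and x = 12 suffice).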
Common Wrong Answers
  • Sorting by start time — a long interval starting early forecloses many short ones. Counterexample: [[1, 100], [2, 3], [3, 4]]. Sort-by-start keeps [1, 100] and removes the other two. Sort-by-end keeps [2, 3], [3, 4] and removes [1, 100].
  • Greedy “remove the longest interval at each step” — O(n^2 log n) and still wrong on edge cases.
  • DP from scratch — correct but unnecessarily complicated when greedy works in O(n log n).
Further Reading
  • LC 452 — Minimum Number of Arrows to Burst Balloons
  • LC 56 — Merge Intervals
  • LC 253 — Meeting Rooms II (interval partitioning, different greedy)