The CP Toolkit
This is your battle-tested arsenal of code snippets. Every template here has been used in actual contests. Copy, paste, adapt, and win.
Lightning Round Snippets
Fast I/O Template
Math Snippets
GCD and LCM
Modular Arithmetic
nCr with Precomputation
Prime Check and Sieve
Array/Vector Snippets
Prefix Sum
Coordinate Compression
Kadane’s Algorithm (Max Subarray Sum)
Count Inversions (Merge Sort)
Binary Search Templates
Lower Bound (First ≥ target)
Upper Bound (First > target)
Binary Search on Answer
Binary Search on Real Numbers
Graph Snippets
Graph Representation
DFS
BFS (Shortest Path Unweighted)
Dijkstra’s Algorithm
Union-Find (DSU)
Topological Sort (Kahn’s Algorithm)
Tree Snippets
Tree Diameter (Two BFS)
LCA with Binary Lifting
DP Snippets
0/1 Knapsack
Longest Increasing Subsequence (LIS)
Coin Change (Min Coins)
Edit Distance
Data Structure Snippets
Segment Tree (Point Update, Range Query)
Binary Indexed Tree (Fenwick Tree)
String Snippets
String Hashing
KMP Pattern Matching
Bit Manipulation Snippets
Interview Deep-Dive
Compare the Fenwick Tree (BIT) and Segment Tree for range sum queries with point updates. When would you choose one over the other?
Strong Answer:
- Both support O(log n) point updates and O(log n) range queries. The key differences are in constant factor, code complexity, and extensibility.
- Fenwick Tree advantages: 2-3x faster constant factor (uses simple bit manipulation instead of recursive tree traversal), half the memory (n+1 array vs 4n array), and much shorter code (15 lines vs 40+ lines). For pure range-sum with point updates, BIT is strictly superior.
- Segment Tree advantages: supports any associative operation (min, max, GCD, not just sum), supports range updates with lazy propagation, supports more complex queries (find kth element, count elements in range satisfying condition). BIT cannot do lazy propagation natively.
- Decision framework: if the problem is range sum with point updates, use BIT. If the problem involves range min/max, range updates, or any operation beyond addition, use segment tree. If the problem requires persistent data structure (historical queries), use persistent segment tree.
- In a contest, I default to BIT for sum queries because it is faster to write and has fewer bugs. I switch to segment tree only when the problem specifically requires range updates or non-additive operations.
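To make the extensibility point concrete, here is a minimal iterative segment tree sketch for range *minimum* with point updates, an operation a BIT cannot support directly. The `SegTree` name and layout are illustrative, not taken from the toolkit's own snippet; swapping `min` for any other associative operation (max, GCD, sum) is a one-line change.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative segment tree for range-min with point updates.
// Leaves live at indices [n, 2n); internal node i covers children 2i and 2i+1.
struct SegTree {
    int n;
    vector<long long> t;
    SegTree(const vector<long long>& a)
        : n(a.size()), t(2 * a.size(), LLONG_MAX) {
        for (int i = 0; i < n; i++) t[n + i] = a[i];
        for (int i = n - 1; i >= 1; i--) t[i] = min(t[2 * i], t[2 * i + 1]);
    }
    void update(int pos, long long val) {   // set a[pos] = val
        for (t[pos += n] = val; pos > 1; pos >>= 1)
            t[pos >> 1] = min(t[pos], t[pos ^ 1]);
    }
    long long query(int l, int r) {         // min over half-open range [l, r)
        long long res = LLONG_MAX;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, t[l++]);
            if (r & 1) res = min(res, t[--r]);
        }
        return res;
    }
};
```

The iterative form keeps the constant factor low, but even so the recursive variant is what you extend with lazy propagation when range updates appear.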
For a point update at index i: add the delta to tree[i], then move to the next responsible node by adding the lowest set bit (i += i & (-i)). For a prefix query up to index i: accumulate tree[i], then move to the parent by removing the lowest set bit (i -= i & (-i)). The expression i & (-i) isolates the lowest set bit because -i in two's complement is the bitwise NOT of i, plus 1. This makes each operation visit at most O(log n) nodes, and the constant factor is a single bit operation per step.
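The update and query loops described above can be sketched as a short template (the `Fenwick` and `rangeQuery` names are illustrative, not the toolkit's own snippet; the array is 1-indexed):

```cpp
#include <vector>
using namespace std;

// Fenwick tree (BIT) for prefix sums over a[1..n], 1-indexed.
// i & (-i) isolates the lowest set bit of i.
struct Fenwick {
    int n;
    vector<long long> tree;
    Fenwick(int n) : n(n), tree(n + 1, 0) {}
    void update(int i, long long delta) {   // a[i] += delta
        for (; i <= n; i += i & (-i)) tree[i] += delta;
    }
    long long query(int i) {                // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & (-i)) s += tree[i];
        return s;
    }
    long long rangeQuery(int l, int r) {    // sum of a[l..r]
        return query(r) - query(l - 1);
    }
};
```

Note the entire structure is one (n+1)-length array and two tight loops, which is the constant-factor and code-length advantage claimed above.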
Walk through the KMP algorithm. What is the failure function, why is it needed, and what is the total time complexity?
Strong Answer:
- KMP solves exact string matching in O(n + m) where n is the text length and m is the pattern length. The naive approach is O(nm) because after each mismatch, it restarts matching from the beginning of the pattern. KMP avoids this by precomputing how much of the pattern can be reused after a mismatch.
- The failure function (also called LPS array for “longest proper prefix which is also suffix”) for each position i in the pattern stores the length of the longest proper prefix of pattern[0..i] that is also a suffix. This tells you: after a mismatch at position j in the pattern, you can skip ahead to position lps[j-1] instead of starting over at 0.
- Building the LPS array is itself an elegant use of two pointers: a “current position” pointer and a “matched length” pointer. When characters match, both advance. When they do not match, the matched length pointer falls back to lps[matched_length - 1]. This is O(m) because each pointer advances at most m times total.
- The matching phase works identically: maintain a pointer in the text and a pointer in the pattern. On match, advance both. On mismatch, fall back the pattern pointer using the LPS array. The text pointer never moves backward, so total work is O(n).
- Total: O(n + m) time, O(m) space for the LPS array.
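Both phases can be sketched as follows (`buildLPS` and `kmpSearch` are illustrative names; the toolkit's own KMP snippet may differ):

```cpp
#include <string>
#include <vector>
using namespace std;

// LPS (failure function): lps[i] = length of the longest proper prefix
// of pattern[0..i] that is also a suffix of it.
vector<int> buildLPS(const string& p) {
    int m = p.size();
    vector<int> lps(m, 0);
    int len = 0;                            // matched-length pointer
    for (int i = 1; i < m; ) {
        if (p[i] == p[len]) lps[i++] = ++len;
        else if (len > 0) len = lps[len - 1];  // fall back, do not advance i
        else lps[i++] = 0;
    }
    return lps;
}

// Return the starting index of every occurrence of pat in text, O(n + m).
vector<int> kmpSearch(const string& text, const string& pat) {
    vector<int> hits;
    if (pat.empty()) return hits;
    vector<int> lps = buildLPS(pat);
    int j = 0;                              // position in pattern
    for (int i = 0; i < (int)text.size(); i++) {
        while (j > 0 && text[i] != pat[j]) j = lps[j - 1];  // fall back
        if (text[i] == pat[j]) j++;
        if (j == (int)pat.size()) {         // full match ending at i
            hits.push_back(i - j + 1);
            j = lps[j - 1];                 // keep searching for overlaps
        }
    }
    return hits;
}
```

The text pointer `i` never moves backward, which is exactly the argument for the O(n) matching phase above.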
Explain the difference between 0/1 Knapsack and Unbounded Knapsack in terms of the DP iteration order. Why does reversing the inner loop direction change which problem you are solving?
Strong Answer:
- Both use a 1D DP array where dp[w] = maximum value achievable with capacity w. The difference is in the inner loop direction, and understanding why requires understanding what values dp[w] depends on at each step.
- For 0/1 Knapsack (each item used at most once): iterate w from W down to weight[i]. When computing dp[w], we read dp[w - weight[i]], which has NOT yet been updated in this iteration of the outer loop. So dp[w - weight[i]] reflects a state where item i has not been used — exactly what we want for 0/1 knapsack.
- For Unbounded Knapsack (unlimited copies): iterate w from weight[i] up to W. When computing dp[w], we read dp[w - weight[i]], which HAS already been updated in this iteration. So dp[w - weight[i]] might already include item i — which is fine because we can use it again.
- The loop direction controls whether the current item’s contribution is counted once or multiple times. Reversing the loop is a single-line change that switches between two fundamentally different problems. This is one of the most elegant tricks in DP.
- Memory optimization: both use O(W) space instead of the naive 2D O(nW) table. The 2D table makes the dependency structure explicit (dp[i][w] depends on dp[i-1][w] and dp[i-1][w-weight[i]] for 0/1). The 1D optimization collapses the item dimension by processing items in order and using the loop direction to implicitly encode whether the current item has been used.
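The single-line difference can be seen side by side in a sketch (function names are illustrative):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// 0/1 knapsack: inner loop runs DOWNWARD, so dp[w - wt[i]] still holds
// the state from before item i was considered (each item used at most once).
long long knapsack01(const vector<int>& wt, const vector<long long>& val, int W) {
    vector<long long> dp(W + 1, 0);
    for (size_t i = 0; i < wt.size(); i++)
        for (int w = W; w >= wt[i]; w--)
            dp[w] = max(dp[w], dp[w - wt[i]] + val[i]);
    return dp[W];
}

// Unbounded knapsack: inner loop runs UPWARD, so dp[w - wt[i]] may already
// include item i in this pass (unlimited copies allowed).
long long knapsackUnbounded(const vector<int>& wt, const vector<long long>& val, int W) {
    vector<long long> dp(W + 1, 0);
    for (size_t i = 0; i < wt.size(); i++)
        for (int w = wt[i]; w <= W; w++)
            dp[w] = max(dp[w], dp[w - wt[i]] + val[i]);
    return dp[W];
}
```

With weights {3, 4}, values {4, 5}, and capacity 6, the 0/1 version can fit only one item (answer 5), while the unbounded version takes two copies of the weight-3 item (answer 8), which makes the loop-direction effect easy to verify by hand.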
- Counting variant: to count solutions instead of maximizing value, replace max with + in the transition. For counting ways to make exact change: dp[0] = 1 (one way to make amount 0: use nothing), and for each coin, dp[w] += dp[w - coin]. The same loop-direction logic applies: reverse loop for "each coin used at most once" (combination sum without repetition), forward loop for "unlimited coins" (combination sum with repetition). The initialization changes from all zeros to dp[0] = 1 (all other entries 0), and the transition changes from max to sum.
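The counting variant with both loop directions can be sketched as (names are illustrative):

```cpp
#include <vector>
using namespace std;

// Count ways to make `amount` with UNLIMITED copies of each coin:
// forward inner loop allows reusing the current coin. Iterating coins in
// the outer loop counts combinations, not ordered sequences.
long long countWaysUnlimited(const vector<int>& coins, int amount) {
    vector<long long> dp(amount + 1, 0);
    dp[0] = 1;                              // one way to make 0: take nothing
    for (int c : coins)
        for (int w = c; w <= amount; w++)
            dp[w] += dp[w - c];
    return dp[amount];
}

// Count subsets of coins (each used AT MOST ONCE) summing to `amount`:
// identical transition, reversed inner loop.
long long countWaysOnce(const vector<int>& coins, int amount) {
    vector<long long> dp(amount + 1, 0);
    dp[0] = 1;
    for (int c : coins)
        for (int w = amount; w >= c; w--)
            dp[w] += dp[w - c];
    return dp[amount];
}
```

For coins {1, 2, 5} and amount 5, the unlimited version counts four ways (5; 2+2+1; 2+1+1+1; 1+1+1+1+1), matching the classic coin-change answer.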