Time Complexity Analysis

[Figure: Time Complexity Growth Comparison]

Why This Matters in Competitive Programming

In competitive programming, you typically have 2-3 seconds to handle inputs of size up to 10^6 or even 10^8. Before writing a single line of code, you must estimate: “Will my solution pass?”
The Golden Rule: A modern computer performs about 10^8 simple operations per second. If your algorithm exceeds this for the given constraints, you’ll get TLE (Time Limit Exceeded).
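A quick sanity check: with n = 10^5, an O(n log n) algorithm does roughly 10^5 × 17 ≈ 2 × 10^6 operations and finishes comfortably, while an O(n^2) algorithm needs 10^10 operations, around 100 seconds at 10^8 operations per second.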

Pattern Recognition: Constraint Analysis

[Figure: Constraint Analysis Flowchart]

The Constraint-to-Complexity Table

Memorize this table—it’s your first tool for problem-solving:
Constraint (n)   | Max Complexity    | Typical Algorithms
n ≤ 10           | O(n!)             | Brute force permutations
n ≤ 20           | O(2^n)            | Bitmask DP, subset enumeration
n ≤ 100          | O(n^3)            | Floyd-Warshall, 3 nested loops
n ≤ 1,000        | O(n^2)            | Simple DP, nested loops
n ≤ 10^5         | O(n log n)        | Sorting, binary search, segment trees
n ≤ 10^6         | O(n)              | Two pointers, prefix sums, linear DP
n ≤ 10^8         | O(log n) or O(1)  | Binary search on answer, math
Quick Mental Check: Look at constraints FIRST. If n = 10^5 and you’re thinking of O(n^2), stop—you need a better approach.

Step-by-Step Complexity Analysis

Step 1: Count the Loops

// O(n) - Single loop
for (int i = 0; i < n; i++) {
    // O(1) work
}

// O(n^2) - Nested loops
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // O(1) work
    }
}

// O(n * m) - Different bounds
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // O(1) work
    }
}

Step 2: Recognize Logarithmic Patterns

// O(log n) - Halving each iteration
while (n > 0) {
    n /= 2;  // or n >>= 1
}

// O(n log n) - Loop with halving
for (int i = 0; i < n; i++) {
    int x = n;
    while (x > 0) {
        x /= 2;
    }
}

// O(log n) - Binary search
int lo = 0, hi = n;
while (lo < hi) {
    int mid = (lo + hi) / 2;
    if (check(mid)) hi = mid;
    else lo = mid + 1;
}

Step 3: Identify Amortized Complexity

Some algorithms look slow but are actually fast:
// Looks like O(n^2), actually O(n)
// Two pointers - each pointer moves at most n times total
int j = 0;
for (int i = 0; i < n; i++) {
    while (j < n && arr[j] < arr[i]) {
        j++;  // j increases, never resets
    }
}

// Looks like O(n^2), actually O(n log n)
// Merge sort - each level does O(n), log n levels
void mergeSort(vector<int>& arr, int l, int r) {
    if (l >= r) return;
    int mid = (l + r) / 2;
    mergeSort(arr, l, mid);
    mergeSort(arr, mid + 1, r);
    merge(arr, l, mid, r);  // O(n)
}
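
The snippet above assumes a merge helper; one possible implementation, sketched here under the same setup as the rest of the chapter (vector from <bits/stdc++.h>, using namespace std), merges the sorted halves arr[l..mid] and arr[mid+1..r] in O(r - l) time:

void merge(vector<int>& arr, int l, int mid, int r) {
    vector<int> tmp;
    tmp.reserve(r - l + 1);
    int i = l, j = mid + 1;
    // Take the smaller front element from either half until one half is exhausted.
    while (i <= mid && j <= r) tmp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);
    while (i <= mid) tmp.push_back(arr[i++]);
    while (j <= r)   tmp.push_back(arr[j++]);
    // Copy the merged result back into the original range.
    copy(tmp.begin(), tmp.end(), arr.begin() + l);
}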

Common Complexity Patterns

Pattern 1: Nested Loops with Dependency

// What's the complexity?
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {  // j starts from i
        // O(1) work
    }
}
O(n^2). The inner loop runs n + (n-1) + (n-2) + … + 1 = n(n+1)/2 times.

Pattern 2: Logarithmic Inner Loop

// What's the complexity?
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j *= 2) {
        // O(1) work
    }
}
O(n log n). Outer loop: O(n). Inner loop: O(log n) since j doubles each time.

Pattern 3: Divisor Loop

// What's the complexity?
for (int i = 1; i <= n; i++) {
    for (int j = i; j <= n; j += i) {
        // O(1) work
    }
}
O(n log n). This is the harmonic series: n/1 + n/2 + n/3 + … + n/n = n * H_n ≈ n log n.
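If you want to convince yourself, a small experiment (illustrative; the choice of n = 10^6 is arbitrary) counts the inner-loop iterations and compares them to n ln n:

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n = 1000000, cnt = 0;
    // Count exactly how many times the inner body executes.
    for (long long i = 1; i <= n; i++)
        for (long long j = i; j <= n; j += i)
            cnt++;
    // cnt equals the sum of floor(n / i) over i, which grows like n * ln(n).
    cout << "iterations = " << cnt << ", n*ln(n) ~ " << (long long)(n * log((double)n)) << "\n";
}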

The CP Complexity Checklist

Before submitting, ask yourself:
1. Read Constraints: What are the input limits (n, m, q, etc.)?
2. Estimate Operations: Plug the largest input into your complexity. Example: O(n^2) with n = 10^5 → 10^10 operations → TLE.
3. Consider Hidden Factors: Are you using map (log n per operation) or unordered_map (O(1) amortized)? Are you copying vectors or strings inside loops?
4. Check Recursive Depth: Deep recursion (on the order of 10^5-10^6 calls, depending on the judge's stack size) can overflow the stack. Rewrite it iteratively (see the sketch after this list) or increase the stack size.
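
As a concrete way to avoid deep recursion, here is a minimal iterative DFS with an explicit stack (a sketch; the adjacency list adj and the visited array are illustrative names, not from a specific problem):

#include <bits/stdc++.h>
using namespace std;

// Depth-first search without recursion: the explicit stack lives on the heap,
// so the traversal depth is no longer limited by the call-stack size.
void dfs_iterative(int start, const vector<vector<int>>& adj, vector<bool>& visited) {
    stack<int> st;
    st.push(start);
    while (!st.empty()) {
        int u = st.top();
        st.pop();
        if (visited[u]) continue;
        visited[u] = true;
        for (int v : adj[u]) {
            if (!visited[v]) st.push(v);
        }
    }
}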

Common Pitfalls

Pitfall 1: String Concatenation

// O(n^2) - result + s[i] builds a brand-new string every iteration
string result = "";
for (int i = 0; i < n; i++) {
    result = result + s[i];  // Each concatenation copies O(length) characters
}

// O(n) - operator+= appends in place (amortized O(1) per character)
string result = "";
for (int i = 0; i < n; i++) {
    result += s[i];  // push_back on a string or vector<char> is equally fine
}

Pitfall 2: Passing by Value

// O(n) per call - Copies the entire vector
void process(vector<int> arr) { ... }

// O(1) - Pass by reference
void process(vector<int>& arr) { ... }
void process(const vector<int>& arr) { ... }  // If not modifying

Pitfall 3: Map vs Unordered_Map

// O(log n) per operation
map<int, int> mp;
mp[x]++;  // O(log n)

// O(1) amortized per operation
unordered_map<int, int> mp;
mp[x]++;  // O(1) average
Codeforces Gotcha: unordered_map can be hacked with adversarial inputs that degrade it to O(n) per operation. Use a custom hash (one option is sketched below) or stick with map in critical problems.
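One common defense is a randomized hash. The sketch below uses a splitmix64-style mixer seeded from the clock at startup; the struct name custom_hash and the constants are illustrative, not part of any standard API:

#include <bits/stdc++.h>
using namespace std;

// Randomized hashing: mix the bits thoroughly and offset every key by a
// run-time seed, so a precomputed anti-hash test cannot pile keys into one bucket.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

unordered_map<long long, int, custom_hash> safe_mp;  // drop-in replacement for mp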

Master Theorem (Quick Reference)

For recursive algorithms: T(n) = aT(n/b) + f(n)
Condition                          | Complexity
f(n) = O(n^c) where c < log_b(a)   | O(n^(log_b(a)))
f(n) = O(n^c) where c = log_b(a)   | O(n^c log n)
f(n) = O(n^c) where c > log_b(a)   | O(f(n))
Examples:
  • Merge Sort: T(n) = 2T(n/2) + O(n) → O(n log n)
  • Binary Search: T(n) = T(n/2) + O(1) → O(log n)

Practice Problems

Level 1: Constraint Analysis

  1. Given n ≤ 10^6, which complexities will pass in 2 seconds?
    • O(n) ✓
    • O(n log n) ✓
    • O(n^2) ✗
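The arithmetic: with n = 10^6, O(n) is about 10^6 operations and O(n log n) about 2 × 10^7, both well inside a ~2 × 10^8 budget for two seconds, while O(n^2) is 10^12 and stands no chance.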

Level 2: Analyze This Code

int count = 0;
for (int i = 1; i <= n; i *= 2) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}
O(n log n). The outer loop runs about log n times (i doubles each iteration), and the inner loop does n iterations each time.

Key Takeaways

Constraint First

Always read constraints before thinking about the algorithm.

10^8 Rule

About 10^8 simple operations per second.

Hidden Costs

Watch for string copies, map operations, and recursive calls.

Practice Estimation

With experience, you’ll estimate complexity in seconds.

Next Up

Chapter 2: Fast I/O & Debugging

Learn input/output optimizations that can mean the difference between AC and TLE.