This guide covers the most frequently asked OS interview questions at top tech companies, organized by topic with detailed answers and common follow-ups.
Target: Senior/Staff engineering interviews
Companies: FAANG, startups, systems companies
Preparation time: 20-30 hours across all topics
1. What's the difference between a process and a thread?
Answer:
| Aspect | Process | Thread |
| --- | --- | --- |
| Memory | Separate address space | Shared address space |
| Creation | Expensive (fork) | Cheap |
| Context switch | Expensive (TLB flush) | Cheap |
| Communication | IPC needed | Shared memory |
| Crash impact | Isolated | Affects all threads |
When to use each:
Processes: Isolation needed (security), different languages, crash isolation
Threads: Shared state, low latency communication, same codebase
Follow-up: “How does fork() work?”
Creates copy of parent’s address space (copy-on-write)
Child gets a new PID; in the child, fork() returns 0
In the parent, fork() returns the child’s PID
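A minimal sketch showing the two return values of fork(); the printed messages are illustrative, not from the original:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          // Copy-on-write duplicate of this process
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        // Child: fork() returned 0; getpid() shows the new PID
        printf("child: pid=%d parent=%d\n", getpid(), getppid());
        return 0;
    }
    // Parent: fork() returned the child's PID
    printf("parent: child pid=%d\n", pid);
    waitpid(pid, NULL, 0);       // Reap the child
    return 0;
}
```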
2. What happens when you run a program?
Answer (step-by-step):
Shell parses command, finds executable in PATH
fork(): Create child process
Copy page tables (COW)
Copy file descriptors
New PID, same code
execve(): Replace child with new program
Load ELF headers
Set up new address space
Map code, data sections
Set up stack with args/env
Jump to _start (entry point)
Dynamic linking: ld.so loads shared libraries
main(): C runtime calls your main()
exit(): Cleanup, return status to parent
wait(): Parent reaps child, gets exit status
Key syscalls: fork, execve, wait4, exit_group
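A sketch of how a shell-like parent strings these syscalls together; the command, arguments, and environment here are illustrative choices, not from the original:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *argv[] = { "/bin/ls", "-l", NULL };        // Illustrative command
    char *envp[] = { "PATH=/usr/bin:/bin", NULL };

    pid_t pid = fork();                  // 1) Duplicate the shell (COW)
    if (pid == 0) {
        execve(argv[0], argv, envp);     // 2) Replace the child image with /bin/ls
        perror("execve");                // Only reached if execve fails
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);            // 3) Parent reaps the child, gets exit status
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));
    return 0;
}
```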
3. Explain context switching
Answer:
What is saved:
CPU registers (general purpose, PC, SP)
Floating point/SIMD registers
Kernel stack pointer
Page table pointer (CR3 on x86)
Steps:
1. Save current task's registers to its task_struct
2. Save stack pointer
3. Select next task (scheduler)
4. Restore next task's stack pointer
5. Restore next task's registers
6. If different process: switch page tables (TLB flush)
7. Return to user mode
Cost:
Thread switch: ~1-2 μs
Process switch: ~5-10 μs (TLB flush)
Why it matters: Too many context switches = poor performance
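One way to see how often a process is being switched out is getrusage(); a small sketch (this only reports counts for the calling process, which is an assumption about how you'd use it during profiling):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    // Voluntary switches: the process blocked (I/O, lock) and gave up the CPU
    // Involuntary switches: the process was preempted (time slice expired)
    printf("voluntary:   %ld\n", ru.ru_nvcsw);
    printf("involuntary: %ld\n", ru.ru_nivcsw);
    return 0;
}
```

A high involuntary count relative to useful work done is one hint that the system is thrashing between tasks.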
```c
// BAD:  Thread 1 locks A then B, Thread 2 locks B then A
// GOOD: Always lock in order (A before B)
void transfer(Account *from, Account *to) {
    Account *first  = (from < to) ? from : to;
    Account *second = (from < to) ? to : from;
    lock(first);
    lock(second);
    // transfer...
    unlock(second);
    unlock(first);
}
```
8. Explain mutex vs semaphore vs condition variable
Answer:
| Primitive | Purpose | Count | Use Case |
| --- | --- | --- | --- |
| Mutex | Mutual exclusion | 0/1 | Protect critical section |
| Semaphore | Resource counting | 0-N | Limit concurrent access |
| Cond Var | Wait for condition | N/A | Producer-consumer |
Mutex:
```c
pthread_mutex_lock(&mutex);
// Critical section - only one thread
pthread_mutex_unlock(&mutex);
```
Semaphore:
```c
sem_wait(&sem);   // Decrement, block if 0
// Access resource (up to N concurrent)
sem_post(&sem);   // Increment
```
Condition Variable:
```c
pthread_mutex_lock(&mutex);
while (!condition) {
    pthread_cond_wait(&cond, &mutex);  // Releases mutex while waiting
}
// Condition is now true
pthread_mutex_unlock(&mutex);

// In another thread:
pthread_mutex_lock(&mutex);
condition = true;
pthread_cond_signal(&cond);            // Wake one waiter
pthread_mutex_unlock(&mutex);
```
Key insight: Condition variables are always used with a mutex and a predicate (while loop).
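Putting the pieces together, here is a minimal bounded-buffer producer-consumer sketch built on that mutex + condition-variable + predicate pattern; the buffer size, item type, and function names are arbitrary choices for illustration:

```c
#include <pthread.h>

#define CAP 16

static int buf[CAP];
static int count = 0;                    // Items currently in the buffer
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void produce(int item) {
    pthread_mutex_lock(&m);
    while (count == CAP)                 // Predicate re-checked after each wakeup
        pthread_cond_wait(&not_full, &m);
    buf[count++] = item;
    pthread_cond_signal(&not_empty);     // Wake one waiting consumer
    pthread_mutex_unlock(&m);
}

int consume(void) {
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&not_empty, &m);
    int item = buf[--count];
    pthread_cond_signal(&not_full);      // Wake one waiting producer
    pthread_mutex_unlock(&m);
    return item;
}
```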
9. What is a spinlock? When to use it?
Answer:
Spinlock: busy-wait for the lock instead of sleeping
```c
while (atomic_test_and_set(&lock) == 1) {
    // Spin! CPU is busy waiting
}
// Have lock
// ... critical section ...
atomic_clear(&lock);
```
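The snippet above is pseudocode; a concrete version using C11 atomics might look like this (a sketch only, with no backoff or fairness, so it is suited to very short critical sections):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    // test_and_set returns the previous value: true means another thread holds it
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire)) {
        // Spin: the CPU stays busy until the holder clears the flag
    }
}

void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```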
Lower vruntime = hasn’t had fair share = run it next
Red-black tree for O(log n) task selection
Nice values adjust time slices, not priority
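A simplified sketch of the vruntime idea: runtime is scaled by the task's weight, so heavier (lower-nice) tasks accumulate vruntime more slowly and therefore get more CPU before others catch up. The constants and field names below are a simplification, not the actual kernel code:

```c
// Simplified model of CFS vruntime accounting (not the real kernel implementation)
#define NICE_0_WEIGHT 1024

struct task {
    unsigned long weight;    // Derived from the nice value
    unsigned long vruntime;  // Key used to order tasks in the red-black tree
};

// Charge 'delta_exec' nanoseconds of CPU time to a task
void update_vruntime(struct task *t, unsigned long delta_exec) {
    // Higher weight => vruntime grows more slowly => task is picked more often
    t->vruntime += delta_exec * NICE_0_WEIGHT / t->weight;
}
```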
Key metrics:
Throughput: Jobs completed per time
Turnaround: Submit to completion
Wait time: Time in ready queue
Response time: Submit to first run
11. What is priority inversion? How do you solve it?
Answer:
Problem:
Low-priority task (L) holds lock
High-priority task (H) needs lock → blocks
Medium-priority task (M) preempts L
Result: H waits for M, even though H > M in priority!
Solutions:
Priority Inheritance:
L temporarily gets H’s priority while holding lock
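On POSIX systems that support priority inheritance (_POSIX_THREAD_PRIO_INHERIT), this is enabled through a mutex attribute; a small sketch with error checks omitted:

```c
#include <pthread.h>

pthread_mutex_t lock;

void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // Any thread holding this mutex is boosted to the priority of the
    // highest-priority thread currently blocked on it
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}
```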
```c
#include <time.h>

// Track requests in a time window
// More accurate for bursts, but uses more memory
typedef struct {
    int *requests;                  // Circular buffer
    int window_size;                // Seconds
    int max_requests;
    struct timespec window_start;
} sliding_window_t;
```
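One possible way to consult this structure on each request is sketched below. The helper name allow_request, the per-second buckets, and the interpretation of window_start as the time of the last update are all assumptions for illustration, not from the original:

```c
#include <stdbool.h>
#include <time.h>

// Returns true if the request fits within the window's budget (assumed semantics)
bool allow_request(sliding_window_t *w) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    // Zero the buckets for every second that has passed since the last update,
    // so counts from one full window ago don't leak into the current window
    long last = w->window_start.tv_sec;
    long steps = now.tv_sec - last;
    if (steps > w->window_size) steps = w->window_size;
    for (long i = 1; i <= steps; i++)
        w->requests[(last + i) % w->window_size] = 0;
    w->window_start = now;

    // Sum requests across the whole window
    int total = 0;
    for (int i = 0; i < w->window_size; i++)
        total += w->requests[i];
    if (total >= w->max_requests)
        return false;

    w->requests[now.tv_sec % w->window_size]++;   // Count this request
    return true;
}
```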
```
$ gdb -p <pid>
(gdb) thread apply all bt

# Or without GDB:
$ pstack <pid>
```
Check for deadlock:
```
# Look for mutex waits in the stack traces
# Multiple threads waiting on locks = deadlock candidate
# Check lock order: Does thread 1 hold A, wait for B?
#                   Does thread 2 hold B, wait for A?
```
Check for infinite loop:
```
$ top -H -p <pid>
# Look for a thread at 100% CPU

$ perf top -p <pid>
# What function is hot?
```
Check for I/O block:
```
$ cat /proc/<pid>/stack
# Look for I/O wait in the kernel stack

$ strace -p <pid>
# What syscall is it stuck on?
```