
Process Subsystem Deep Dive

The process subsystem is the heart of Linux. Understanding task_struct, process creation, and the scheduler is essential for infrastructure engineers who need to debug performance issues and understand container behavior.
Interview Frequency: Very High
Key Topics: task_struct, clone/fork, CFS scheduler, CPU affinity
Time to Master: 14-16 hours

task_struct - The Process Descriptor

Every process and thread in Linux is represented by task_struct, one of the largest structures in the kernel (~6-8 KB).

task_struct Overview

Why is task_struct so large? Because it’s the kernel’s complete representation of a process. Every subsystem needs to track its own data about each process:
  • The scheduler needs priority and runtime statistics
  • Memory management needs page tables and memory limits
  • The filesystem needs current directory and open files
  • Security needs credentials and capabilities
  • Signals need pending signals and handlers
Rather than scattering this data across multiple global structures (which would require expensive lookups), Linux packs everything into one structure. This also keeps context switches fast: the kernel simply points current at a different task_struct.

Think of task_struct as the DNA of a process. It contains everything the kernel needs to know to manage that process; if something is not in task_struct (or a structure linked from it), the kernel does not know about it. It acts as the "process control block" (PCB) and tracks:
  • State: Is it running? Waiting? Zombie?
  • Resources: What files are open? How much memory is used?
  • Identity: Who owns it? What group is it in?
  • Relationships: Who is the parent? Who are the children?
  • Scheduling: How much CPU time does it deserve?
// include/linux/sched.h (simplified)
struct task_struct {
    // Scheduler state
    volatile long state;           // TASK_RUNNING, TASK_INTERRUPTIBLE, etc.
    unsigned int flags;            // PF_EXITING, PF_KTHREAD, etc.
    int prio, static_prio, normal_prio;
    struct sched_entity se;        // CFS scheduling entity
    struct sched_rt_entity rt;     // Real-time scheduling entity
    unsigned int policy;           // SCHED_NORMAL, SCHED_FIFO, etc.
    
    // CPU and preemption
    int on_cpu;                    // Currently running on CPU
    int on_rq;                     // On runqueue
    cpumask_t cpus_mask;           // Allowed CPUs
    
    // Identity
    pid_t pid;                     // Process ID
    pid_t tgid;                    // Thread group ID (= pid of main thread)
    
    // Process relationships
    struct task_struct *real_parent;  // Real parent
    struct task_struct *parent;       // Current parent (may differ due to ptrace)
    struct list_head children;        // List of children
    struct list_head sibling;         // Linkage in parent's children list
    struct task_struct *group_leader; // Thread group leader
    
    // Memory management
    struct mm_struct *mm;          // Memory descriptor (NULL for kernel threads)
    struct mm_struct *active_mm;   // Active memory descriptor
    
    // Filesystem
    struct fs_struct *fs;          // Filesystem info (cwd, root)
    struct files_struct *files;    // Open file descriptors
    
    // Namespaces
    struct nsproxy *nsproxy;       // Namespace proxy
    
    // Credentials
    const struct cred *cred;       // Credentials (uid, gid, capabilities)
    
    // Signals
    struct signal_struct *signal;
    struct sighand_struct *sighand;
    sigset_t blocked;              // Blocked signals
    
    // Timing
    u64 utime, stime;              // User and system time
    u64 start_time;                // Process start time
    
    // And much more...
};
[Figure: Linux task_struct structure]

Why is task_struct so complex?

The task_struct is complex because it connects the process to every other subsystem in the kernel. It’s the hub that links:
  • Virtual Memory: via mm_struct
  • File Systems: via files_struct and fs_struct
  • Scheduling: via sched_entity
  • Signals: via signal_struct
This design allows the kernel to quickly access any resource related to a running process.
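
To see how this hub role looks from the kernel's side, here is an illustrative fragment (a sketch, not a buildable module): kernel code reaches all of this state by dereferencing the current macro, which always points at the running task's task_struct. Field names follow include/linux/sched.h.

// Illustrative kernel-side fragment: navigating from `current` to per-process state
#include <linux/sched.h>
#include <linux/mm_types.h>
#include <linux/kernel.h>

static void inspect_current(void)
{
    struct task_struct *task = current;   // task_struct of whoever is running here

    pr_info("pid=%d tgid=%d comm=%s flags=0x%x\n",
            task->pid, task->tgid, task->comm, task->flags);

    if (task->mm)                          // NULL for kernel threads
        pr_info("address space has %d VMAs\n", task->mm->map_count);

    // cwd/root via task->fs, open files via task->files, creds via task->cred, ...
}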

Process States

[Figure: Linux process lifecycle and states]

Viewing task_struct Fields

# View process info via /proc
cat /proc/self/status
cat /proc/self/stat
cat /proc/self/maps

# Using Python for exploration
python3 -c "
import os
pid = os.getpid()
with open(f'/proc/{pid}/stat') as f:
    fields = f.read().split()
    print(f'PID: {fields[0]}')
    print(f'State: {fields[2]}')
    print(f'PPID: {fields[3]}')
    print(f'Threads: {fields[19]}')
"

Process Creation

Understanding Selective Sharing

Before we dive into clone(), let’s understand the core concept: selective sharing. When you create a new process, you have a choice for each resource:
  1. Copy it - Child gets its own independent copy (traditional fork)
  2. Share it - Child uses the same resource as parent (threads)
This is powerful because:
  • Threads need to share memory but can have separate stacks
  • Containers need separate namespaces but can share the filesystem
  • Fork+exec needs a temporary process that immediately replaces itself
Linux provides one system call - clone() - that lets you choose exactly what to share and what to copy.

The clone() System Call

The clone() system call is the Swiss Army knife of process creation. Where fork() conceptually gives the child its own copy of everything, clone() lets you choose exactly what to share and what to copy. All process and thread creation goes through clone():
// Raw system call form (x86-64 argument order); the glibc wrapper is
// clone(fn, stack, flags, arg, ...) and adds a child entry-point function.
long clone(unsigned long flags,      // CLONE_* flags: what to share vs copy
           void *child_stack,        // Stack for the child (NULL = child runs on a COW copy of the parent's stack)
           int *parent_tid,          // Child TID written here in the parent (CLONE_PARENT_SETTID)
           int *child_tid,           // Child TID set/cleared here in the child (CLONE_CHILD_SETTID/_CLEARTID)
           unsigned long tls);       // New TLS descriptor (CLONE_SETTLS)

Clone Flags - The Sharing Knobs

The concept: Each flag controls whether to share or copy a specific resource. No flag = copy (fork behavior). With flag = share (thread behavior).
| Flag | Effect |
|------|--------|
| CLONE_VM | Share memory space (threads) |
| CLONE_FS | Share filesystem info |
| CLONE_FILES | Share file descriptors |
| CLONE_SIGHAND | Share signal handlers |
| CLONE_THREAD | Same thread group (share PID) |
| CLONE_NEWNS | New mount namespace |
| CLONE_NEWPID | New PID namespace |
| CLONE_NEWNET | New network namespace |
| CLONE_NEWUSER | New user namespace |
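
To make the namespace flags concrete, here is a small sketch using unshare(), which accepts the same CLONE_NEW* flags: it gives the calling process a private UTS namespace, so a hostname change is invisible to the rest of the system. It assumes root privileges (CAP_SYS_ADMIN); an unprivileged variant would also pass CLONE_NEWUSER.

#define _GNU_SOURCE
#include <sched.h>      // unshare, CLONE_NEWUTS
#include <stdio.h>
#include <string.h>
#include <unistd.h>     // sethostname, gethostname

int main(void) {
    // Requires CAP_SYS_ADMIN; run as root, or add CLONE_NEWUSER as well
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare(CLONE_NEWUTS)");
        return 1;
    }

    const char *name = "sandbox";
    if (sethostname(name, strlen(name)) == -1) {
        perror("sethostname");
        return 1;
    }

    char buf[64];
    gethostname(buf, sizeof(buf));
    printf("hostname inside new UTS namespace: %s\n", buf);
    // `hostname` in another shell still shows the original name
    return 0;
}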

fork vs vfork vs clone vs pthread_create

[Figure: fork vs clone vs pthread_create]
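
A useful way to compare these APIs is by the clone() flags each one effectively uses. The flag sets below are representative of what glibc passes (observable with strace -f -e trace=clone) and may vary slightly between libc versions.

// Representative clone() flag sets per API (check with: strace -f -e trace=clone ./prog)

// fork():   copy almost everything, memory shared copy-on-write
//   clone(flags = SIGCHLD)

// vfork():  share memory until exec/exit, parent suspended meanwhile
//   clone(flags = CLONE_VM | CLONE_VFORK | SIGCHLD)

// pthread_create(): share nearly everything, join the same thread group
//   clone(flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND |
//                 CLONE_THREAD | CLONE_SYSVSEM | CLONE_SETTLS |
//                 CLONE_PARENT_SETTID | CLONE_CHILD_CLEARTID)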

Copy-on-Write Implementation

┌─────────────────────────────────────────────────────────────────────────────┐
│                         COPY-ON-WRITE (COW)                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                              │
│  Before fork():                                                              │
│  ┌─────────────────┐     ┌─────────────────────────────────────┐           │
│  │     Parent      │────→│  Memory Pages (owned by parent)     │           │
│  │   Page Table    │     │  [Page A] [Page B] [Page C]         │           │
│  └─────────────────┘     └─────────────────────────────────────┘           │
│                                                                              │
│  After fork() (COW setup):                                                  │
│  ┌─────────────────┐     ┌─────────────────────────────────────┐           │
│  │     Parent      │────→│  Shared Pages (read-only for both)  │           │
│  │   Page Table    │     │  [Page A] [Page B] [Page C]         │           │
│  │  (read-only)    │     │     ↑                               │           │
│  └─────────────────┘     └─────│───────────────────────────────┘           │
│                                │                                            │
│  ┌─────────────────┐           │                                            │
│  │     Child       │───────────┘                                            │
│  │   Page Table    │                                                        │
│  │  (read-only)    │                                                        │
│  └─────────────────┘                                                        │
│                                                                              │
│  Child writes to Page B:                                                    │
│  ┌─────────────────┐     ┌─────────────────────────────────────┐           │
│  │     Parent      │────→│  [Page A] [Page B] [Page C]         │           │
│  │   Page Table    │     └─────────────────────────────────────┘           │
│  └─────────────────┘                                                        │
│                                                                              │
│  ┌─────────────────┐     ┌─────────────────────────────────────┐           │
│  │     Child       │────→│  [Page A'] [Page B (copy)] [Page C']│           │
│  │   Page Table    │     └─────────────────────────────────────┘           │
│  └─────────────────┘     (A' and C' still share with parent)               │
│                                                                              │
└─────────────────────────────────────────────────────────────────────────────┘
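
The following small program makes COW visible from user space: after fork(), the child rewrites a heap buffer, yet the parent still sees its original contents, because the child's write triggered a private page copy.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    // One page of heap memory, shared copy-on-write after fork()
    char *page = malloc(4096);
    strcpy(page, "parent data");

    pid_t pid = fork();
    if (pid == 0) {                      // child
        strcpy(page, "child data");      // write fault -> kernel copies the page
        printf("child  sees: %s\n", page);
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    printf("parent sees: %s\n", page);   // still "parent data"
    free(page);
    return 0;
}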

do_fork Internals

// Simplified kernel/fork.c (recent kernels call this path kernel_clone(),
// which delegates most of the work to copy_process())
long do_fork(unsigned long clone_flags, ...)
{
    struct task_struct *p;

    // 1. Allocate new task_struct (a copy of current)
    p = dup_task_struct(current);

    // 2. Copy or share resources based on clone flags
    if (clone_flags & CLONE_VM) {
        // Share mm_struct (threads)
    } else {
        // dup_mm() - copy page tables, mark pages copy-on-write
    }

    if (clone_flags & CLONE_FILES) {
        // Share files_struct
    } else {
        // dup_fd() - copy the file descriptor table
    }

    // 3. Assign a PID in the child's PID namespace
    p->pid = pid_nr(alloc_pid(p->nsproxy->pid_ns_for_children));

    // 4. Set up scheduler state for the new task
    sched_fork(clone_flags, p);

    // 5. Make the new task runnable
    wake_up_new_task(p);

    return p->pid;
}

The CFS Scheduler

The Completely Fair Scheduler (CFS) is the default scheduler for normal processes.

CFS Core Concept: Virtual Runtime

[Figure: CFS scheduler decision flow]

The Completely Fair Scheduler (CFS) uses the concept of virtual runtime (vruntime) to ensure fairness. The goal is simple: model an "ideal multi-tasking CPU" where every runnable process gets an equal share of processor power. How it works:
  1. Tracking Runtime: As a task runs, its vruntime increases.
  2. Weighting: Tasks with higher priority (lower nice value) accumulate vruntime more slowly, allowing them to run longer for the same “virtual” cost.
  3. Selection: The scheduler always picks the task with the lowest vruntime (the one that has been treated most unfairly so far).
┌─────────────────────────────────────────────────────────────────────────────┐
│                          CFS VIRTUAL RUNTIME                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                              │
│   The idea: Track how much CPU time each task has had, always run           │
│   the task with the LEAST virtual runtime.                                  │
│                                                                              │
│   Virtual runtime accumulation:                                             │
│   vruntime += delta_exec * (NICE_0_WEIGHT / task_weight)                    │
│                                                                              │
│   - Nice 0 task: vruntime = actual runtime                                  │
│   - Nice -20 task: vruntime accumulates slower (runs more)                  │
│   - Nice +19 task: vruntime accumulates faster (runs less)                  │
│                                                                              │
│   Example: 3 tasks with same vruntime at start                              │
│                                                                              │
│   Time ─────────────────────────────────────────────────────────→          │
│                                                                              │
│   Task A (nice 0):   |====|    |====|    |====|    vruntime = 12ms         │
│   Task B (nice -10): |========|    |========|      vruntime = 12ms         │
│   Task C (nice +10): |==|    |==|    |==|    |==|  vruntime = 12ms         │
│                                                                              │
│   All have same vruntime, but B ran more actual time!                       │
│                                                                              │
└─────────────────────────────────────────────────────────────────────────────┘
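
A small user-space calculation makes the weighting concrete. The weights below are taken from the kernel's sched_prio_to_weight[] table (nice 0 is defined as 1024, roughly ×1.25 per nice step); the program shows how the same 10 ms of wall-clock CPU translates into very different vruntime, and what CPU share each nice level would get against a single nice-0 task.

#include <stdio.h>

// A few entries from the kernel's sched_prio_to_weight[] table (kernel/sched/core.c)
#define NICE_0_WEIGHT 1024

struct sample { int nice; unsigned int weight; };

int main(void) {
    struct sample tasks[] = {
        { -10, 9548 }, { -5, 3121 }, { 0, 1024 }, { 5, 335 }, { 10, 110 },
    };
    const double delta_exec_ms = 10.0;   // each task runs 10 ms of real CPU time

    for (unsigned i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++) {
        // vruntime += delta_exec * (NICE_0_WEIGHT / task_weight)
        double vruntime = delta_exec_ms * NICE_0_WEIGHT / tasks[i].weight;
        // Expected CPU share when competing against one nice-0 task
        double share = 100.0 * tasks[i].weight / (tasks[i].weight + NICE_0_WEIGHT);
        printf("nice %+3d: weight %5u  vruntime for 10ms = %6.2f ms  share vs nice 0 = %.1f%%\n",
               tasks[i].nice, tasks[i].weight, vruntime, share);
    }
    return 0;
}

The roughly 75%/25% output for nice 0 versus nice +5 matches the split quoted in the interview answers later in this section.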

CFS Red-Black Tree

Tasks are organized in a red-black tree sorted by vruntime:


![CFS Red-Black Tree](/images/courses/linux-cfs-rb-tree.svg)

CFS Scheduler Entity

// include/linux/sched.h
struct sched_entity {
    struct load_weight load;        // Weight based on nice value
    struct rb_node run_node;        // RB-tree node
    unsigned int on_rq;             // On runqueue?
    
    u64 exec_start;                 // When started running
    u64 sum_exec_runtime;           // Total runtime
    u64 vruntime;                   // Virtual runtime
    u64 prev_sum_exec_runtime;      // For preemption check
    
    // Group scheduling
    struct sched_entity *parent;
    struct cfs_rq *cfs_rq;         // CFS runqueue this belongs to
    struct cfs_rq *my_q;           // CFS runqueue owned (for groups)
};
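
Several sched_entity fields are exported per task through /proc/<pid>/sched on kernels built with CONFIG_SCHED_DEBUG (most distribution kernels), which makes them easy to correlate with the struct above. A small sketch that prints the runtime-related lines for the current process:

#include <stdio.h>
#include <string.h>

// Print sched_entity-related lines (se.vruntime, se.sum_exec_runtime, nr_switches)
// from /proc/self/sched; exact field names vary slightly across kernel versions.
int main(void) {
    FILE *f = fopen("/proc/self/sched", "r");
    if (!f) {
        perror("fopen /proc/self/sched");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strstr(line, "vruntime") || strstr(line, "sum_exec_runtime") ||
            strstr(line, "nr_switches"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}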

CFS Tuning Parameters

# Scheduler tuning via sysctl
sysctl -a | grep sched

# Note: on newer kernels (roughly 5.13+) these CFS tunables moved from sysctl
# to debugfs, e.g. /sys/kernel/debug/sched/latency_ns

# Key parameters (defaults shown):
# sched_latency_ns = 6000000 (6ms)
#   Target latency - the period within which every runnable task should run once
#
# sched_min_granularity_ns = 750000 (0.75ms)
#   Minimum time slice per task
#
# sched_wakeup_granularity_ns = 1000000 (1ms)
#   How much lower a waking task's vruntime must be to preempt the current task

Real-Time Scheduling

For tasks that need guaranteed timing:

Scheduling Policies

| Policy | Description | Priority Range |
|--------|-------------|----------------|
| SCHED_NORMAL (SCHED_OTHER) | CFS, time-sharing | Nice -20 to +19 |
| SCHED_FIFO | Real-time, runs until it blocks or yields | 1-99 |
| SCHED_RR | Real-time, round-robin | 1-99 |
| SCHED_DEADLINE | Earliest Deadline First | N/A (runtime/deadline/period) |
| SCHED_BATCH | CPU-intensive batch work, treated as less latency-sensitive | Nice -20 to +19 |
| SCHED_IDLE | Runs only when nothing else wants the CPU | N/A |
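
The priority ranges above can be confirmed at runtime with sched_get_priority_min()/sched_get_priority_max(); a quick sketch:

#define _GNU_SOURCE     // for SCHED_BATCH / SCHED_IDLE
#include <sched.h>
#include <stdio.h>

static void show(const char *name, int policy) {
    printf("%-12s priority range: %d..%d\n", name,
           sched_get_priority_min(policy), sched_get_priority_max(policy));
}

int main(void) {
    show("SCHED_OTHER", SCHED_OTHER);   // CFS: always 0; nice controls the weight
    show("SCHED_BATCH", SCHED_BATCH);
    show("SCHED_IDLE",  SCHED_IDLE);
    show("SCHED_FIFO",  SCHED_FIFO);    // typically 1..99
    show("SCHED_RR",    SCHED_RR);      // typically 1..99
    return 0;
}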

Setting Scheduling Policy

#include <sched.h>

// Set FIFO policy with priority 50
struct sched_param param;
param.sched_priority = 50;
sched_setscheduler(pid, SCHED_FIFO, &param);

// From command line:
// chrt -f 50 ./my_program
// chrt -r 50 ./my_program  # Round-robin
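
In practice the call needs error handling, because switching to a real-time policy fails with EPERM unless the process is root, has CAP_SYS_NICE, or has a sufficient RLIMIT_RTPRIO. A slightly fuller sketch:

#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void) {
    struct sched_param param = { .sched_priority = 50 };

    // 0 = calling process; needs root, CAP_SYS_NICE, or RLIMIT_RTPRIO >= 50
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }

    if (sched_getparam(0, &param) == 0)
        printf("now running under SCHED_FIFO, priority %d\n", param.sched_priority);
    return 0;
}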

SCHED_DEADLINE (EDF)

// Earliest Deadline First - strongest real-time guarantee.
// glibc has no sched_setattr() wrapper, so the raw syscall is used;
// struct sched_attr and SCHED_DEADLINE come from the kernel UAPI headers
// (<linux/sched.h>, <linux/sched/types.h>) or can be defined locally.
struct sched_attr attr = {
    .size           = sizeof(attr),
    .sched_policy   = SCHED_DEADLINE,
    .sched_runtime  = 10000000,   // 10ms of guaranteed runtime...
    .sched_deadline = 30000000,   // ...to finish within a 30ms deadline...
    .sched_period   = 30000000,   // ...in each 30ms period
};
syscall(SYS_sched_setattr, 0, &attr, 0);   // 0 = this thread, flags = 0

CPU Affinity and Isolation

Critical for performance-sensitive applications.

CPU Affinity

#define _GNU_SOURCE
#include <sched.h>

// Set CPU affinity
cpu_set_t mask;
CPU_ZERO(&mask);
CPU_SET(0, &mask);  // Only CPU 0
CPU_SET(2, &mask);  // And CPU 2
sched_setaffinity(pid, sizeof(mask), &mask);

// Get CPU affinity
sched_getaffinity(pid, sizeof(mask), &mask);
# Command line
taskset -c 0,2 ./my_program
taskset -p 1234  # Show affinity of PID 1234

CPU Isolation

# Kernel boot parameter: isolate CPUs from scheduler
# In /etc/default/grub:
GRUB_CMDLINE_LINUX="isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3"

# isolcpus - No normal tasks scheduled on these CPUs
# nohz_full - No timer ticks on these CPUs (when single task)
# rcu_nocbs - RCU callbacks offloaded from these CPUs

NUMA Considerations

# View NUMA topology
numactl --hardware

# Run on specific NUMA node
numactl --cpunodebind=0 --membind=0 ./my_program

# View task's NUMA stats
cat /proc/<pid>/numa_maps
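
Where libnuma is available (install the numactl/libnuma development package and link with -lnuma), node binding can also be done programmatically. A minimal sketch of the same idea as the numactl command above:

#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    printf("NUMA nodes: %d\n", numa_max_node() + 1);

    // Run this thread on node 0's CPUs and allocate memory from node 0,
    // roughly what `numactl --cpunodebind=0 --membind=0` does for a process
    if (numa_run_on_node(0) != 0)
        perror("numa_run_on_node");

    size_t len = 1 << 20;                  // 1 MiB
    void *buf = numa_alloc_onnode(len, 0);
    if (!buf) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    numa_free(buf, len);
    return 0;
}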

Context Switching

Why Context Switches Are Expensive

Context switches are one of the most expensive operations in an operating system. Here’s why:
  1. Direct costs (~2-5 μs):
    • Saving/restoring CPU registers
    • Switching page tables (CR3 register)
    • TLB flush (thousands of cached address translations lost)
  2. Indirect costs (~10-100 μs):
    • Cache pollution: New process brings different data into CPU caches, evicting the previous process’s data
    • Cache misses: After switch, almost every memory access misses cache initially
    • Branch predictor reset: CPU’s prediction tables are now wrong
Real-world impact: A process doing 10,000 context switches per second spends roughly 2-5% of its CPU time on the direct switching overhead alone, and often considerably more on the cache and TLB misses that follow each switch. This is why:
  • Threads are cheaper than processes (no TLB flush if same address space)
  • CPU affinity matters (keeps cache warm)
  • Reducing context switches improves performance
Understanding context switch overhead:


![Context Switch Flow](/images/courses/linux-context-switch.svg)

Measuring Context Switch Overhead

// Measure using perf
// perf stat -e context-switches,cpu-migrations ./my_program

// Or programmatically
#include <stdio.h>
#include <sys/resource.h>

struct rusage usage;
getrusage(RUSAGE_SELF, &usage);
printf("Voluntary context switches: %ld\n", usage.ru_nvcsw);
printf("Involuntary context switches: %ld\n", usage.ru_nivcsw);

Lab Exercises

Lab 1: Explore a Process Through /proc

Objective: Understand process structure through /proc
# Start a long-running process
sleep 1000 &
PID=$!

# View status
cat /proc/$PID/status

# View memory maps
cat /proc/$PID/maps

# View file descriptors
ls -la /proc/$PID/fd/

# View namespace membership
ls -la /proc/$PID/ns/

# View cgroup membership
cat /proc/$PID/cgroup

# View scheduler info
cat /proc/$PID/sched

kill $PID
Lab 2: Observe clone() Flag Effects

Objective: Understand clone flag effects
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <signal.h>     // SIGCHLD
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

int global_var = 0;

int child_func(void *arg) {
    char *mode = (char *)arg;
    global_var = 42;
    printf("Child (%s): global_var = %d, pid = %d\n", 
           mode, global_var, getpid());
    sleep(1);
    return 0;
}

void test_clone(int flags, const char *name) {
    char *stack = malloc(STACK_SIZE);
    char *stack_top = stack + STACK_SIZE;
    
    global_var = 0;
    
    int pid = clone(child_func, stack_top, flags | SIGCHLD, (void*)name);
    if (pid == -1) {
        perror("clone");
        return;
    }
    
    waitpid(pid, NULL, 0);
    printf("Parent after %s: global_var = %d\n\n", name, global_var);
    free(stack);
}

int main() {
    // Test without CLONE_VM (like fork - separate memory)
    test_clone(0, "no CLONE_VM");
    
    // Test with CLONE_VM (like threads - shared memory)
    test_clone(CLONE_VM, "CLONE_VM");
    
    return 0;
}
gcc clone_test.c -o clone_test
./clone_test
Lab 3: Watch the Scheduler in Action

Objective: Analyze CFS behavior
# View scheduler statistics
cat /proc/schedstat

# View per-task scheduler info
cat /proc/self/sched

# Monitor context switches
perf stat -e context-switches,cpu-migrations -- sleep 1

# Trace scheduler events
sudo perf sched record -- sleep 1
sudo perf sched latency

# View run queue length
sar -q 1 5
# Python script to visualize scheduling
import os
import time

def get_sched_info(pid):
    with open(f'/proc/{pid}/sched') as f:
        return f.read()

pid = os.getpid()
for i in range(5):
    print(f"\n--- Iteration {i} ---")
    # Do some work
    sum(range(10000000))
    info = get_sched_info(pid)
    for line in info.split('\n')[:10]:
        print(line)
    time.sleep(0.1)
Lab 4: Control CPU Placement with Affinity

Objective: Control process placement
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <unistd.h>

void print_affinity() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);
    
    printf("CPU affinity: ");
    for (int i = 0; i < CPU_SETSIZE; i++) {
        if (CPU_ISSET(i, &mask))
            printf("%d ", i);
    }
    printf("\n");
}

int main() {
    printf("Before: ");
    print_affinity();
    
    // Set affinity to CPU 0 only
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    sched_setaffinity(0, sizeof(mask), &mask);
    
    printf("After:  ");
    print_affinity();
    
    // Do some work on CPU 0
    volatile long sum = 0;
    for (long i = 0; i < 100000000; i++)
        sum += i;
    
    printf("Work done on CPU %d\n", sched_getcpu());
    
    return 0;
}

Interview Questions

Q: How does fork() differ from clone()?

Answer: Both create new processes, but with different sharing.

fork():
  • Creates independent process
  • Copy-on-write memory (efficient)
  • Copies file descriptors (but shares underlying files)
  • New PID, new memory space
  • Internally: clone(SIGCHLD, 0)
clone() (for threads):
  • Can share memory (CLONE_VM)
  • Can share file descriptors (CLONE_FILES)
  • Can share filesystem info (CLONE_FS)
  • Same PID, different TID (with CLONE_THREAD)
  • Internally: Many flags control sharing
Key insight: fork() is just clone() with specific flags. Threads are processes that share more resources.
Q: How does the CFS scheduler decide which task runs next?

Answer: CFS tracks virtual runtime for each task:
  1. Virtual runtime accumulation:
    • Each task accumulates vruntime based on actual runtime
    • Higher nice value = faster vruntime accumulation (runs less)
    • Lower nice value = slower accumulation (runs more)
  2. Scheduling decision:
    • Tasks stored in RB-tree sorted by vruntime
    • Always pick task with lowest vruntime (leftmost node)
    • O(1) to find next task, O(log n) to reinsert
  3. Fairness mechanism:
    • New tasks start with min_vruntime of runqueue
    • Sleeping tasks catch up gradually (capped)
    • Result: All tasks get proportional CPU time
Example:
  • Two tasks with nice 0: each gets 50% CPU
  • Nice 0 + nice 5: ~75%/25% split
  • Nice 0 + nice -5: ~25%/75% split
Q: What is CPU isolation, and when would you use it?

Answer:

What it is: Dedicating CPUs to specific workloads, preventing the kernel from scheduling other tasks on them.

Methods:
  • isolcpus=N,M - Boot parameter, removes CPUs from scheduler
  • nohz_full=N,M - Disables timer ticks (reduces jitter)
  • rcu_nocbs=N,M - Offloads RCU callbacks
  • cpuset cgroup - Runtime control
Use cases:
  1. Low-latency trading: Sub-microsecond response needed
  2. Real-time systems: Guaranteed timing
  3. Observability agents: Minimal interference with workloads
  4. DPDK/network processing: Polling without interrupts
Trade-offs:
  • Wasted CPU if isolated tasks not busy
  • Complexity in managing affinity
  • Some kernel work still interrupts (hard IRQs)
Q: Why are context switches expensive, and how can the cost be minimized?

Answer:

Overhead sources:
  1. Direct costs (~1-2 μs):
    • Save/restore registers: ~100 cycles
    • Switch page tables: ~100 cycles
    • TLB flush (without PCID): ~1000 cycles
  2. Indirect costs (~1-10 μs):
    • Cache misses (cold cache): Major impact
    • TLB misses after flush
    • Pipeline stalls
Minimization strategies:
  1. Reduce switches:
    • Use async I/O (io_uring, epoll)
    • Batch operations
    • Increase scheduler timeslice
  2. Reduce switch cost:
    • CPU affinity (keep task on same CPU = warm cache)
    • PCID (Process Context IDs - avoid TLB flush)
    • Kernel threads vs processes (share address space)
  3. Measurement:
    • perf stat -e context-switches
    • /proc/<pid>/status (voluntary_ctxt_switches / nonvoluntary_ctxt_switches fields)
    • vmstat for system-wide

Key Takeaways

task_struct

The central data structure for every process/thread, containing all state

Clone Flexibility

clone() flags control exactly what’s shared between parent and child

CFS Fairness

Virtual runtime ensures proportional CPU allocation based on priority

CPU Control

Affinity and isolation are essential for performance-critical workloads

Next: Memory Management Internals →