OPERATING SYSTEM

2. PROCESS MANAGEMENT

Process Management in Operating Systems

Process management is one of the core functions of an operating system (OS). It involves the handling of processes, including their creation, scheduling, and termination. Processes are fundamental to any operating system because they represent the execution of a program, which involves the use of system resources such as CPU, memory, and I/O devices.

1. What is a Process?

A process is a program in execution. It consists of:

  • Code: The instructions that need to be executed.
  • Data: Variables and data that the program manipulates.
  • Program Counter: A register that holds the address of the next instruction.
  • Stack: Holds temporary data such as method/function parameters, return addresses, and local variables.
  • Heap: Dynamic memory that is used during the process's execution.

2. Process State

A process can be in one of the following states:

  • New: The process is being created.
  • Ready: The process is waiting to be assigned to a processor.
  • Running: Instructions are being executed.
  • Waiting: The process is waiting for some event (such as I/O completion) to occur.
  • Terminated: The process has finished execution.

Illustration: Process State Transition Diagram

 
            +--------------------+
            v     (preempted)    |
New ----> Ready ------------> Running ----> Terminated
            ^                    |
            | (I/O complete)     | (I/O wait)
            +------ Waiting <----+
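The five states and their legal transitions can be sketched as a small state machine. This is an illustrative model only; the `State` and `TRANSITIONS` names are ours, not part of any real OS API.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions in the five-state process model
TRANSITIONS = {
    State.NEW:        {State.READY},                                   # admitted
    State.READY:      {State.RUNNING},                                 # dispatched
    State.RUNNING:    {State.READY, State.WAITING, State.TERMINATED},  # preempt / I/O wait / exit
    State.WAITING:    {State.READY},                                   # I/O complete
    State.TERMINATED: set(),
}

def move(current, target):
    """Return the new state if the transition is legal, else raise."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Note that a process never goes directly from Waiting to Running: after its event occurs, it must first re-enter the Ready queue and be scheduled again.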

3. Process Control Block (PCB)

Every process is represented by a Process Control Block (PCB) in the OS. The PCB contains information about the process including:

  • Process ID (PID): Unique identifier for the process.
  • Process State: The current state of the process.
  • Program Counter: The address of the next instruction to execute.
  • CPU Registers: Values of the CPU registers for the process.
  • Memory Management Information: Information about the memory allocated to the process.
  • I/O Status: List of I/O devices allocated to the process.
  • Accounting Information: CPU usage, process start time, etc.
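The PCB fields above can be pictured as a record. The following sketch is purely illustrative (real kernels store this in C structs such as Linux's `task_struct`); all field names here are our own.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal Process Control Block sketch (field names are illustrative)."""
    pid: int                                        # Process ID
    state: str = "new"                              # Process state
    program_counter: int = 0                        # Next instruction address
    registers: dict = field(default_factory=dict)   # Saved CPU registers
    memory_limits: tuple = (0, 0)                   # Memory-management info (base, limit)
    open_files: list = field(default_factory=list)  # I/O status
    cpu_time_used: float = 0.0                      # Accounting information
```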

4. Process Scheduling

Scheduling is the method by which processes are given access to system resources, primarily the CPU. The scheduler determines which process runs at a given time.

  • Scheduler Types:

    • Long-Term Scheduler: Decides which processes are admitted to the ready queue (job scheduling).
    • Short-Term Scheduler: Decides which process will execute next (CPU scheduling).
    • Medium-Term Scheduler: May temporarily swap out processes from memory (to disk) to optimize system performance (swapping).
  • Scheduling Algorithms:

    • First-Come, First-Served (FCFS): Processes are executed in the order they arrive.
    • Shortest Job Next (SJN): The process with the smallest execution time is selected next (also known as Shortest Job First, SJF).
    • Round Robin (RR): Each process is assigned a fixed time slice (quantum), and processes are executed in a circular queue.
    • Priority Scheduling: Each process is assigned a priority, and the CPU is allocated to the process with the highest priority.
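To see how an algorithm's choice affects processes, consider FCFS: each process waits for all earlier arrivals to finish. A minimal sketch, assuming all processes arrive at time 0:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS, assuming all arrive at t = 0."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this process waits for everything before it
        elapsed += burst
    return waits
```

For burst times [5, 3, 8], the waiting times are [0, 5, 8]: a long first job delays everyone behind it (the "convoy effect"), which is one motivation for SJN and Round Robin.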

Illustration: Round Robin Scheduling

 
Process Queue (Circular): [P1] -> [P2] -> [P3] -> [P4] -> [P1] -> [P2] -> ...

Each process gets a fixed time slice (quantum). If a process is not completed within its time slice, it is placed at the back of the queue.
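The circular-queue behavior above can be sketched directly. This simulation (our own illustration, not kernel code) returns the order and length of the CPU slices each process receives:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; bursts maps pid -> total CPU time needed.
    Returns the schedule as a list of (pid, slice_length) pairs."""
    queue = deque(bursts.items())
    schedule = []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)     # run for at most one quantum
        schedule.append((pid, run))
        remaining -= run
        if remaining > 0:
            queue.append((pid, remaining))  # unfinished: back of the queue
    return schedule
```

For example, `round_robin({"P1": 5, "P2": 3}, 2)` interleaves the two processes in 2-unit slices until each completes.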

5. Context Switching

When the CPU switches from executing one process to another, it performs a context switch. During this process:

  • The OS saves the current state of the process (the contents of the CPU registers, the program counter, etc.) in its PCB.
  • The OS loads the saved state of the new process into the CPU.
  • This allows the CPU to resume execution from where the new process left off.

Context switching introduces some overhead since it takes time to save and load the state of processes.

Illustration: Context Switching

 
Process P1's state is saved -> Process P2's state is loaded
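The save/load steps can be sketched in miniature. Here the CPU and PCBs are plain dictionaries; in a real kernel this is done in privileged assembly code, so treat this only as a model of the bookkeeping:

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the outgoing process's CPU state into its PCB, then load the next one's."""
    # 1. Save current state into the outgoing PCB
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["program_counter"] = cpu["program_counter"]
    old_pcb["state"] = "ready"
    # 2. Load the incoming PCB's saved state onto the CPU
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["program_counter"] = new_pcb["program_counter"]
    new_pcb["state"] = "running"
```

Every field copied here is pure overhead: no useful work happens during the switch, which is why excessively small time quanta hurt throughput.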

6. Process Synchronization

In systems with multiple processes running concurrently, processes may need to share resources. Process synchronization is the technique used to ensure that processes do not interfere with each other when accessing shared resources.

  • Critical Section: A part of the code where a process accesses shared resources.
  • Mutual Exclusion: Only one process at a time can execute in the critical section.
  • Semaphores: A synchronization tool used to manage access to the critical section.

Illustration: Critical Section Problem

Process P1:            Process P2:
  Enter CS               Enter CS
  Critical Section       Critical Section
  Exit CS                Exit CS

Only one process can be in the Critical Section (CS) at any time to avoid conflicts.
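Mutual exclusion with a semaphore can be demonstrated with threads sharing a counter (the idea carries over to processes). A binary semaphore guards the critical section, so the increments never interleave destructively:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore: at most one holder

def worker(increments):
    global counter
    for _ in range(increments):
        mutex.acquire()          # Enter CS (wait/P operation)
        counter += 1             # Critical Section: shared-resource access
        mutex.release()          # Exit CS (signal/V operation)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the semaphore, the final count is exactly 4 × 10 000; without mutual exclusion, concurrent read-modify-write sequences could interleave and lose updates.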

7. Inter-Process Communication (IPC)

Processes often need to communicate with each other, especially in multitasking environments. Inter-Process Communication (IPC) provides mechanisms for data exchange between processes.

  • Shared Memory: Processes share a memory space to exchange information.
  • Message Passing: Processes communicate by sending and receiving messages (e.g., pipes, sockets).

Illustration: Message Passing

 
Process A                     Process B
  Send(Message)  ------>        Receive(Message)
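A pipe is the classic message-passing primitive. The sketch below (UNIX-only, since it relies on fork()) creates a pipe, forks a child that sends a message, and has the parent receive it:

```python
import os

read_fd, write_fd = os.pipe()          # one-way channel: write end -> read end
pid = os.fork()

if pid == 0:
    # Child (Process B): Send(Message)
    os.close(read_fd)
    os.write(write_fd, b"hello from the child")
    os._exit(0)
else:
    # Parent (Process A): Receive(Message) blocks until data arrives
    os.close(write_fd)
    message = os.read(read_fd, 1024).decode()
    os.waitpid(pid, 0)                 # reap the child
```

Because the pipe is managed by the kernel, the two processes exchange data without sharing any memory, unlike the shared-memory approach.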

8. Process Creation and Termination

  • Process Creation: A process can create a new process using system calls like fork() (in UNIX). The process that creates another is called the parent, and the newly created process is the child.

  • Process Termination: When a process finishes execution, it is terminated, and its resources are released.
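The parent/child relationship can be seen directly with fork(). In this UNIX-only sketch, fork() returns 0 in the child and the child's PID in the parent; the parent waits for the child to terminate and collects its exit status:

```python
import os

pid = os.fork()                 # clone the calling process (UNIX only)

if pid == 0:
    # Child process: does its work, then terminates with an exit status
    os._exit(7)
else:
    # Parent process: waits for the child, then reads its exit status
    _, status = os.waitpid(pid, 0)
    child_exit_code = os.WEXITSTATUS(status)
```

Waiting is also how the OS releases the terminated child's remaining resources: until the parent calls waitpid(), the child lingers as a "zombie" whose PCB still holds its exit status.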