Process Management in Operating System


Process Management in Operating System covers process states, CPU scheduling, multiprogramming, context switching, and priority management in operating systems.


Process and Its States

A process is a program in execution. A program sitting on disk is passive: it is not using the CPU. When a user runs the program, the OS loads it into memory and begins giving it CPU time. At that moment, the program becomes a process.

Types of Process States

The operating system defines these basic process states:

  • Running State: The process is currently executing on the CPU. In a multiprogramming system, many processes may be in memory at once, but a CPU core executes only one process at a time.
  • Ready State: The process is prepared to run and is waiting for the CPU. As soon as the CPU becomes free, the operating system schedules one of the ready processes to run.
  • Blocked (or Waiting) State: The process is paused because it is waiting for something outside the CPU, such as user input, a disk read, or the completion of another I/O operation. Once the event completes, the process moves back to the ready state.
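The three states above can be sketched as a tiny Python state machine. The state and transition names here are illustrative, not from any real kernel, which tracks many more states than these.

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

# Legal transitions between the three basic states and what triggers them.
TRANSITIONS = {
    (State.READY, State.RUNNING): "scheduler dispatches the process",
    (State.RUNNING, State.READY): "time slice expires (preemption)",
    (State.RUNNING, State.BLOCKED): "process waits for I/O",
    (State.BLOCKED, State.READY): "the awaited I/O completes",
}

def move(current, target):
    """Return the new state if the transition is legal, else raise."""
    if (current, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Note that a blocked process cannot jump straight to running: `move(State.BLOCKED, State.RUNNING)` raises, because a process must pass through the ready queue first.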

Multiprogramming

Multiprogramming keeps the CPU busy by keeping several processes in memory at once. If one process has to wait (for example, for I/O), the CPU switches to another process. This way the CPU is rarely idle, which makes it work more efficiently.

(i) Degree of Multiprogramming

The number of processes kept in memory at the same time, competing for the CPU, is called the degree of multiprogramming. For example, suppose Process A is running and then needs input/output. The CPU switches to Process B: it saves Process A's data before switching, then loads Process B's data to continue execution.

(ii) Context switching

In multiprogramming, many processes share the CPU. When the CPU changes from one process to another, some time is lost in saving and restoring their state. This change is called context switching.

How it works
  • Save Process A’s data (registers, program counter, etc.) in memory.
  • Load Process B’s data into the CPU -> Process B starts running.
  • When Process A is ready again, the CPU saves Process B’s data and restores Process A’s data.
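The save/load steps above can be mimicked in a few lines of Python. Here the "CPU registers" are just a dictionary and each process control block (PCB) is also a dictionary; these names are illustrative, not a real kernel structure.

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the CPU registers into old_pcb, then load new_pcb's context."""
    old_pcb["context"] = dict(cpu)      # step 1: save Process A's registers
    cpu.clear()
    cpu.update(new_pcb["context"])      # step 2: load Process B's registers
    return cpu

cpu = {"pc": 100, "r0": 7}              # Process A is currently running
pcb_a = {"name": "A", "context": {}}
pcb_b = {"name": "B", "context": {"pc": 500, "r0": 0}}

context_switch(cpu, pcb_a, pcb_b)       # Process B now runs
print(pcb_a["context"])                 # {'pc': 100, 'r0': 7} -- A is saved
print(cpu)                              # {'pc': 500, 'r0': 0} -- B is loaded
```

Switching back later is the same call with the roles reversed, which is exactly the third bullet above.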

CPU Scheduling

CPU scheduling decides which process gets the CPU when it is free. The operating system manages this to keep things fair and efficient.

Key performance measures

  • Turnaround Time -> Total time from submitting a job until it finishes.
  • Waiting Time -> Time a process spends waiting in the ready queue before running.
  • Throughput -> Number of processes finished per unit time.
  • CPU Utilisation -> Percentage of time the CPU is busy doing work.
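A small Python sketch can make turnaround and waiting time concrete. It assumes first-come-first-served (FCFS) scheduling, which the Priority section below mentions as the tie-breaking rule; the function name is invented for this example.

```python
def fcfs_metrics(jobs):
    """jobs: list of (arrival, burst) pairs.
    Returns a (turnaround, waiting) pair for each job under FCFS."""
    time, results = 0, []
    for arrival, burst in sorted(jobs, key=lambda j: j[0]):
        time = max(time, arrival)        # CPU may sit idle until the job arrives
        time += burst                    # job runs to completion
        turnaround = time - arrival      # finish time minus submission time
        waiting = turnaround - burst     # time spent in the ready queue
        results.append((turnaround, waiting))
    return results

# Three jobs all arriving at t=0 with bursts 3, 5 and 2:
# they finish at t=3, 8, 10, so turnarounds are 3, 8, 10 and waits 0, 3, 8.
print(fcfs_metrics([(0, 3), (0, 5), (0, 2)]))
```

Throughput here would be 3 jobs / 10 time units, and CPU utilisation 100%, since the CPU is never idle in this example.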

Priority

In priority scheduling, each process is given a priority number. The CPU always runs the process with the highest priority first. If two processes have the same priority, scheduling is done on a FCFS (first come, first served) basis. There are two types of priority.

  • External Priority: Set by the user when starting a process. For example, a user may choose to run a critical program with a higher priority. The user decides external priority.
  • Internal Priority: Decided by the operating system itself. For example, the OS may give higher priority to system tasks or raise the priority of a process that has been waiting a long time. The OS decides internal priority.
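The "highest priority first, FCFS on ties" rule can be written as one selection function. This sketch assumes a higher number means higher priority; some systems use the opposite convention, and the process names are invented.

```python
def pick_next(ready_queue):
    """Each entry is (priority, arrival_order, name).
    Pick the highest priority; break ties by earliest arrival (FCFS)."""
    return max(ready_queue, key=lambda p: (p[0], -p[1]))

queue = [(2, 0, "editor"), (5, 1, "backup"), (5, 2, "logger")]
# "backup" and "logger" share priority 5, but "backup" arrived first:
print(pick_next(queue))   # (5, 1, 'backup')
```

Negating the arrival order inside the key makes `max` prefer the earlier arrival among equal priorities, which is exactly the FCFS tie-break described above.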

Scheduling Philosophies

Scheduling philosophies are methods where the operating system decides which process gets the CPU. There are two types of scheduling philosophies:

  • Non-preemptive Policy: Once a process starts running, it cannot be stopped until it finishes or gives up the CPU (for example, to wait for I/O). The process controls the CPU completely. It suits batch systems but not real-time systems, where tasks are urgent.
  • Preemptive Policy: The operating system can suspend a running process when a higher-priority process arrives. It suits urgent tasks and real-time systems. It improves responsiveness but causes more context switching.

Multitasking

Multitasking means running many tasks at the same time. It improves CPU use by overlapping computation with I/O. A process can be divided into threads so that its tasks run concurrently. Threads share the same memory and can communicate easily.
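The claim that threads share memory and communicate easily can be shown with Python's standard `threading` module. Both worker threads append to the same list; a lock keeps the shared update safe. The worker function and its inputs are invented for the example.

```python
import threading

results = []                     # shared memory: visible to every thread
lock = threading.Lock()

def worker(n):
    total = sum(range(n))        # some computation done by this thread
    with lock:                   # protect the shared list from races
        results.append(total)

threads = [threading.Thread(target=worker, args=(n,)) for n in (10, 100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))           # [45, 4950]
```

Because both threads live inside one process, no message passing or shared files are needed; writing to an ordinary list is enough, which is the "communicate easily" point above.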

Advantages:

  • Increases flexibility.
  • Reduces idle CPU time.
  • Enhances performance by parallel processing.

Time-sharing

In time-sharing, multiple users share the CPU. The CPU allocates a fixed time slice to every user, and each user gets a small portion of time in rotation. This makes every user feel that they have their own computer.
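The fixed-time-slice rotation described above is round-robin scheduling, and it can be simulated with an ordinary queue. This is a minimal sketch: real time-sharing systems also account for I/O waits and switching overhead.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time-sharing: each process runs at most `quantum`
    units, then rejoins the back of the queue. Returns the run order."""
    queue = deque(enumerate(bursts))          # (pid, remaining work)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)                     # this process gets the CPU
        if remaining > quantum:               # not finished: requeue the rest
            queue.append((pid, remaining - quantum))
    return order

# Three users with 4, 2 and 5 units of work, time slice of 2:
print(round_robin([4, 2, 5], 2))   # [0, 1, 2, 0, 2, 2]
```

User 1 finishes in its first slice, while users 0 and 2 come back around in rotation, so every user sees regular progress even though only one runs at any instant.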

Advantages

  • CPU never idle.
  • Users get quick responses.
  • Each user gets their turn.

States in a time-sharing system

  • Active: The user’s program currently has control of the CPU.
  • Ready: The user’s program is ready to continue and is waiting for its turn on the CPU.
  • Wait: The user’s program is waiting for some I/O operation to complete.

Disclaimer: We have provided you with an accurate handout of “Process Management in Operating System”. If you find any error or mistake, please contact me at anuraganand2017@gmail.com. The above study material on our website is for education purposes only; we do not claim copyright over it.

Images and content shown above are the property of individual organisations and are used here for reference purposes only. To make it easy to understand, some of the content and images are generated by AI and cross-checked by the teachers.

cbseskilleducation.com
