Introduction to CPU Scheduling (Part-1)

Chirag Goyal
4 min read · Nov 6, 2021


CPU scheduling forms the basis of multiprogrammed operating systems. It refers to switching the CPU among the processes being executed. This switching ensures that the CPU is utilised as much as possible and that the computer is more productive.

Think 🤔

Occasionally, words like multiprogramming, multicore systems, and multitasking systems can be found in operating systems. Can you think of any distinctions between these terms?

So, in this blog, we will go over the fundamental ideas of CPU scheduling so that you are prepared to master all of the main concepts of CPU scheduling in a good and effective manner.

Purpose of CPU Scheduling

In a single-processor system, only one process can run at a time; all other processes must wait until the running process completes. This is the motivation for CPU scheduling. There are many ways to schedule processes: we can let a process run to completion before moving to the next, we can assign each process a priority and execute the highest-priority one first, or we can run all processes one by one, each for a small amount of time. Our aim is to schedule the processes so that the overall execution time is minimised.

This is the essence of multiprogramming: several processes are kept in memory at the same time. When one process has to wait, the CPU switches to another process instead of sitting idle. The purpose of CPU scheduling is thus to keep the CPU's idle time, and the waiting time of the processes in memory, to a minimum.

Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

CPU — I/O Burst Cycle

Process execution consists of two parts —

(i) cycle of CPU execution and,

(ii) I/O wait.

All processes alternate between these two states until they finish execution. Execution of a process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, and so on in an alternating manner. The final CPU burst ends with a system request to terminate the process.
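As a rough illustration (the burst lengths and the list-of-tuples representation are invented for this sketch), a process can be modelled as a sequence of alternating CPU and I/O bursts that starts and ends with a CPU burst:

```python
# Model a process as alternating CPU and I/O bursts (milliseconds).
# By convention the sequence starts and ends with a CPU burst, and the
# final CPU burst ends with a request to terminate the process.
bursts = [("cpu", 5), ("io", 12), ("cpu", 3), ("io", 8), ("cpu", 2)]

def walk_bursts(bursts):
    """Yield a human-readable trace of the burst cycle."""
    for kind, length in bursts:
        if kind == "cpu":
            yield f"CPU burst for {length} ms"
        else:
            yield f"I/O wait for {length} ms"
    yield "final CPU burst ends -> terminate()"

for line in walk_bursts(bursts):
    print(line)
```

Running this prints the alternating trace, ending with the terminating system request.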

CPU scheduling is concerned with what the CPU does while a process waits on its I/O burst. Whenever the CPU becomes idle, the OS selects one of the processes from the ready queue to be executed. This selection is carried out by the short-term scheduler (also known as the CPU scheduler), which picks a process from the ready queue and allocates the CPU to it. Various scheduling algorithms determine which process to select.
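A minimal sketch of that dispatch loop (the FIFO selection policy and the process names here are assumptions for illustration, not any particular OS's implementation): whenever the CPU goes idle, the short-term scheduler removes one process from the ready queue and allocates the CPU to it.

```python
from collections import deque

# Toy short-term scheduler: whenever the CPU is idle, pick the next
# process from the ready queue. The selection policy here is plain
# FIFO; real schedulers plug different algorithms into this step.
ready_queue = deque(["P1", "P2", "P3"])

def dispatch(ready_queue):
    """Select a process from the ready queue and allocate the CPU to it."""
    if not ready_queue:
        return None  # nothing to run: the CPU stays idle
    return ready_queue.popleft()

order = []
while ready_queue:
    order.append(dispatch(ready_queue))
print(order)  # FIFO order: ['P1', 'P2', 'P3']
```

Swapping `popleft()` for a priority-based selection is exactly where the scheduling algorithms discussed in later parts of this series differ.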

(Figure: The alternating sequence of CPU and I/O bursts)

CPU scheduling takes place under one of the following circumstances:

  1. When a process changes from running state to waiting state (due to an I/O request or a wait() call).
  2. When a process changes from running state to ready state (for example, when an interrupt occurs).
  3. When a process changes from waiting state to ready state (after completion of I/O request).
  4. When a process completes execution and terminates.

Of these four cases, in cases 1 and 4 there is no choice in CPU scheduling: a new process (if one exists in the ready queue) must be selected for execution. In cases 2 and 3, however, there is a choice: the OS can either let the current process keep the CPU or select another process from the ready queue.

When scheduling takes place only under cases 1 and 4, it is called non-preemptive (or cooperative) scheduling, since the OS has no scheduling choice to make. Under non-preemptive scheduling, a process keeps the CPU until it finishes execution or switches to the waiting state. This scheme was used by Windows 3.x. Scheduling that can also occur under cases 2 and 3 is called preemptive scheduling.
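To make the contrast concrete, here is a toy simulation (the burst lengths and the 2-tick time slice are invented for this sketch): under non-preemptive scheduling each process runs its full CPU burst, while under preemptive scheduling the running process is forced back into the ready queue when its time slice expires.

```python
from collections import deque

def nonpreemptive(bursts):
    """Each process keeps the CPU until its whole burst finishes."""
    timeline = []
    for name, burst in bursts:
        timeline.extend([name] * burst)
    return timeline

def preemptive(bursts, quantum):
    """Round-robin: preempt the running process after `quantum` ticks."""
    queue = deque(bursts)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.extend([name] * run)
        if remaining > run:  # not finished: back to the ready queue
            queue.append((name, remaining - run))
    return timeline

procs = [("P1", 4), ("P2", 3)]
print(nonpreemptive(procs))          # P1 runs to completion, then P2
print(preemptive(procs, quantum=2))  # P1 and P2 alternate every 2 ticks
```

In the preemptive run, P2 gets the CPU after only 2 ticks instead of waiting out P1's entire burst, which is exactly the responsiveness that preemption buys.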

Preemptive scheduling, however, can lead to race conditions when data is shared among several processes. Preemption also affects the design of the operating-system kernel: during the processing of a system call, the kernel may be busy with an activity on behalf of a process, and such activities may involve changing important kernel data (such as I/O queues).

We need to ensure that a process preempted in the middle of these changes does not leave the kernel's data in an inconsistent state. This is usually done by waiting either for the system call to complete or for an I/O block to take place before doing a context switch.
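A deterministic sketch of the hazard (the interleaving is staged by hand here, since a real race depends on timing): if process A is preempted between reading and writing a shared value, B's update can be silently lost; deferring the context switch until A's update completes avoids this.

```python
def lost_update():
    """A is preempted mid read-modify-write; B's increment is overwritten."""
    shared = 0
    a_local = shared     # A: reads 0, then is preempted here
    shared = shared + 1  # B: runs and increments, shared == 1
    shared = a_local + 1  # A: resumes with its stale copy, shared == 1
    return shared

def serialized():
    """The kernel defers the switch until A's update completes."""
    shared = 0
    shared = shared + 1  # A: read-modify-write runs to completion
    shared = shared + 1  # B: runs only after A is done
    return shared

print(lost_update())  # 1 -- one of the two increments was lost
print(serialized())   # 2 -- both increments survive
```

Two increments were attempted in both runs, but the preempted interleaving ends with the counter at 1, which is precisely the chaos the kernel must prevent.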

Quiz Time 🤔

  1. Complex scheduling algorithms ____________
    a) are very appropriate for very large computers
    b) use minimal resources
    c) use many resources
    d) all of the mentioned
  2. The strategy of making processes that are logically runnable to be temporarily suspended is called ____________
    a) Non preemptive scheduling
    b) Preemptive scheduling
    c) Shortest job first
    d) First come First served
  3. What is Scheduling?
    a) allowing a job to use the processor
    b) making proper use of processor
    c) all of the mentioned
    d) none of the mentioned

Wrap Up (Conclusion)

In this blog, we learned what CPU scheduling is and why it is necessary. We also looked at several forms of CPU scheduling, such as preemptive and non-preemptive scheduling, as well as the challenges that come with them. In the next blogs, we will explore CPU scheduling requirements and look at different CPU scheduling algorithms.

This completes the part-1 of this blog series on CPU Scheduling.

Thanks for reading!

We hope you found the content interesting. If you enjoyed it, please share it with your friends. Did we miss something, or would you like to offer your thoughts? Please leave a comment below, and we will respond as soon as possible. 😉


Written by Chirag Goyal

Student of Computer Science and Engineering at IIT Jodhpur
