"Operating System Chapter 6 CPU Scheduling" Original: Tulipsys


CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce basic scheduling concepts and present several CPU-scheduling algorithms. We also discuss the problem of selecting a scheduling algorithm for a particular system.

6.1 Basic Concepts

The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a single-processor system, only one process may run at a time; any other processes must wait until the CPU is free and can be rescheduled.

The idea of multiprogramming is simple. A process is executed until it must wait for some operation, typically the completion of an I/O request. In a simple computer system, the CPU sits idle while the process waits; all this waiting time is wasted. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process must wait, the operating system takes the CPU away from that process and gives it to another process. Execution then continues in this fashion.

Scheduling is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources; thus, CPU scheduling is central to operating-system design.

6.1.1 CPU-I/O Burst Cycle The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait, and processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution, rather than with another I/O burst (Figure 6.1).

The durations of CPU bursts have been measured extensively. Although they vary greatly from process to process and from computer to computer, they tend to have a frequency curve similar to that shown in Figure 6.2. Because there are a large number of short CPU bursts and a small number of long CPU bursts, the curve is generally characterized as exponential or hyperexponential. An I/O-bound program typically has many short CPU bursts; a CPU-bound program might have a few long CPU bursts. This distribution can help us select an appropriate CPU-scheduling algorithm.
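As an illustrative sketch (not from the text), the hyperexponential shape described above can be reproduced by mixing two exponential distributions: mostly short bursts from I/O-bound work, plus occasional long bursts from CPU-bound work. The mixture weights and mean burst lengths below are made-up illustration values, not measurements.

```python
import random

def sample_burst(rng):
    # Hypothetical mixture: 90% short bursts (mean 4 ms), 10% long bursts (mean 40 ms)
    if rng.random() < 0.9:
        return rng.expovariate(1 / 4.0)
    return rng.expovariate(1 / 40.0)

rng = random.Random(42)
bursts = [sample_burst(rng) for _ in range(10_000)]

# Most sampled bursts are short, matching the hyperexponential frequency curve
short = sum(1 for b in bursts if b < 8)
print(f"bursts under 8 ms: {100 * short / len(bursts):.1f}%")
```

Plotting a histogram of `bursts` would give a curve of the general shape of Figure 6.2.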

6.1.2 CPU Scheduler Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (or CPU scheduler). The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to the process it selects.

The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see when we examine the various scheduling algorithms, a ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. Conceptually, however, all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queue are generally the process control blocks (PCBs) of the processes.
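As a minimal sketch (the field names are made up for illustration, not from the text), a FIFO ready queue holding PCBs might look like this:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    """Hypothetical, minimal process control block."""
    pid: int
    state: str = "ready"

# FIFO ready queue: records in the queue are PCBs
ready_queue: deque[PCB] = deque()
ready_queue.append(PCB(pid=1))
ready_queue.append(PCB(pid=2))

# The scheduler picks the process at the head of the queue
nxt = ready_queue.popleft()
nxt.state = "running"
print(nxt.pid)   # → 1
```

A priority-queue implementation would replace the `deque` with, for example, a heap keyed on priority.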

6.1.3 Preemptive Scheduling CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request, or an invocation of wait for the termination of one of the child processes)

2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)

3. When a process switches from the waiting state to the ready state (for example, on completion of I/O)

4. When the process terminates

In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is nonpreemptive; otherwise, it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. Microsoft Windows 3.1 and the Apple Macintosh used this scheduling method. Because nonpreemptive scheduling, unlike preemptive scheduling, does not require the special hardware (such as a timer) needed for preemption, it is the only method that can be used on certain hardware platforms.

Preemptive scheduling, however, incurs a cost. Consider the case of two processes sharing data. One process may be preempted while it is in the midst of updating the data, and then the second process is run. The second process may then attempt to read the data, which are in an inconsistent state. New mechanisms are thus needed to coordinate access to shared data; this topic is discussed in Chapter 7.
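The inconsistency problem can be sketched with a deterministic simulation of a preemption in the middle of a read-modify-write sequence (the account-balance scenario is a made-up illustration, not from the text):

```python
# Shared data: an account balance read and updated by two "processes"
balance = 100

# Process A reads the balance, intending to deposit 50...
a_read = balance
# ...but is preempted; process B runs a complete update first.
balance = balance - 30     # B withdraws 30 -> balance is now 70
# A resumes with its stale value and overwrites B's update.
balance = a_read + 50      # -> 150; B's withdrawal of 30 is lost

print(balance)   # → 150, not the correct 120
```

Coordinating access so that A's read-modify-write runs as an atomic unit (the subject of Chapter 7) would yield the correct final balance of 120.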

Preemption also affects the design of the operating-system kernel. During the processing of a system call, the kernel may be busy with an activity on behalf of a process. Such activities may involve changing important kernel data (for instance, I/O queues). What happens if the process is preempted in the middle of these changes, and the kernel (or a device driver) needs to read or modify the same structure? Chaos could ensue. Some operating systems (including most versions of UNIX) deal with this problem by waiting either for a system call to complete or for an I/O block to occur before doing a context switch. This scheme keeps the kernel structure simple, since the kernel will not preempt a process while the kernel data structures are in an inconsistent state. Unfortunately, this kernel-execution model is poorly suited to real-time computing and multiprocessing. These problems, and their solutions, are described in Sections 6.4 and 6.5.

In the case of UNIX, sections of code are still at risk. Because interrupts can, by definition, occur at any time, and because they cannot always be ignored by the kernel, the sections of code affected by interrupts must be guarded from simultaneous use. The operating system needs to accept interrupts at almost all times; otherwise, input might be lost or output overwritten. So that these sections of code are not accessed concurrently by several processes, they disable interrupts at entry and re-enable interrupts at exit. Unfortunately, disabling and enabling interrupts is time-consuming, especially on multiprocessor systems. For systems to scale efficiently beyond a few CPUs, interrupt-state changes must be minimized and fine-grained locking maximized. For instance, this is a challenge to the scalability of Linux.

6.1.4 Dispatcher Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Its work involves the following: Switching context

Switching to user mode

Jumping to the proper location in the user program to restart that program

The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

6.2 Scheduling Criteria Different CPU-scheduling algorithms have different properties and may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.

Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged best. The criteria include the following:

CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).

Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.

Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

Response time: In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early, and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the full response. The turnaround time is generally limited by the speed of the output device.

It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances we want to optimize the minimum or maximum values, rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time.
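As an illustrative sketch (the process names and times below are made up, not from the text), turnaround time and waiting time can be computed directly from a recorded schedule using the definitions above:

```python
# Hypothetical per-process times (ms): arrival, completion, total CPU burst, total I/O
procs = {
    "P1": {"arrival": 0, "completion": 30, "burst": 10, "io": 5},
    "P2": {"arrival": 2, "completion": 24, "burst": 6,  "io": 4},
}

results = {}
for name, p in procs.items():
    # Turnaround: interval from submission to completion
    turnaround = p["completion"] - p["arrival"]
    # Waiting: turnaround minus the time spent executing and doing I/O
    waiting = turnaround - p["burst"] - p["io"]
    results[name] = (turnaround, waiting)
    print(name, "turnaround:", turnaround, "waiting:", waiting)
```

Averaging (or taking the maximum of) these per-process values gives the aggregate criteria discussed above.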

Investigators have suggested that, for interactive systems (such as time-sharing systems), minimizing the variance in the response time is more important than minimizing the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on average but highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance. As we discuss various CPU-scheduling algorithms, we illustrate their operation. An accurate illustration should involve many processes, each with a sequence of several hundred CPU bursts and I/O bursts. For simplicity, in our examples we consider only one CPU burst (in milliseconds) per process. Our measure of comparison is the average waiting time. More elaborate evaluation mechanisms are discussed in Section 6.6.

6.3 Scheduling Algorithms CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. In this section, we describe several of the existing CPU-scheduling algorithms.

6.3.1 First-Come, First-Served Scheduling By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The FCFS policy is easily implemented with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.

The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process  Burst time
P1       24
P2       3
P3       3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

|          P1          | P2 | P3 |
0                     24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results are as shown in the following Gantt chart:

| P2 | P3 |          P1          |
0    3    6                    30

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds, a substantial reduction. Thus, the average waiting time under an FCFS policy is generally not minimal, and it may vary substantially if the CPU-burst times of the processes vary greatly.

In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes flow around the system, the following scenario may result. The CPU-bound process will get the CPU and hold it. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While they wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have very short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU sits idle. The CPU-bound process then moves back to the ready queue and is allocated the CPU. Again, all the I/O processes end up waiting in the ready queue until the CPU-bound process is done. There is a convoy effect, as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than would be possible if the shorter processes were allowed to go first.

The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The FCFS algorithm is particularly troublesome for time-sharing systems, where each user needs to get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.

6.3.2 Shortest-Job-First Scheduling A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm.
This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a more appropriate term would be shortest-next-CPU-burst, because scheduling is done by examining the length of the next CPU burst of a process, rather than its total length. We use the term SJF because most people and textbooks refer to this type of scheduling as SJF. As an example, consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process  Burst time
P1       6
P2       8
P3       7
P4       3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

| P4 |   P1   |   P3   |   P2   |

0    3        9        16       24

The waiting time is 3 milliseconds for process P1, 16 milliseconds for P2, 9 milliseconds for P3, and 0 milliseconds for P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. If we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process. Consequently, the average waiting time decreases.

The real difficulty with SJF is knowing the length of the next CPU request. For long-term (job) scheduling in a batch system, we can use as the length the process time limit that a user specifies when submitting the job. Thus, users are motivated to estimate the process time limit accurately, since a lower value may mean faster response. (Too low a value will cause a time-limit-exceeded error and require resubmission.) SJF scheduling is used frequently in long-term scheduling.

Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling, because there is no way to know the length of the next CPU burst. One approach is to approximate SJF scheduling. We may not know the length of the next CPU burst, but we may be able to predict its value. We expect the next CPU burst of a process to be similar in length to its previous ones. Thus, by computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.

The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Let t_n be the length of the nth CPU burst, and let τ_{n+1} be our predicted value for the next CPU burst. Then, for α, 0 ≤ α ≤ 1, define:

τ_{n+1} = α t_n + (1 − α) τ_n

This formula defines an exponential average.
The value t_n contains our most recent information; τ_n stores the past history. The parameter α controls the relative weight of recent and past history in our prediction. If α = 0, then τ_{n+1} = τ_n, and recent history has no effect (current conditions are assumed to be transient); if α = 1, then τ_{n+1} = t_n, and only the most recent CPU burst matters (history is assumed to be old and irrelevant). More commonly, α = 1/2, so recent history and past history are equally weighted. The initial τ_0 can be defined as a constant or as an overall system average.
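A minimal sketch (in Python, not from the text) can reproduce both the SJF example above and the exponential-average predictor; the sample burst history and the τ_0 of 10 passed to the predictor are made-up illustration values:

```python
def avg_waiting_time(bursts):
    """Average waiting time when processes run in the given order (all arrive at time 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each process waits for everything scheduled before it
        clock += b
    return sum(waits) / len(waits)

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
fcfs = avg_waiting_time(list(bursts.values()))     # submission order: P1, P2, P3, P4
sjf = avg_waiting_time(sorted(bursts.values()))    # shortest next burst first
print(fcfs, sjf)   # → 10.25 7.0

def predict_next(measured, alpha=0.5, tau0=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    for t in measured:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next([6, 4, 6, 4]))   # → 5.0
```

An approximate-SJF scheduler would call something like `predict_next` on each process's burst history and dispatch the process with the smallest predicted value.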

