Some comparisons between real-time operating systems and general-purpose operating systems


The operating system used in an embedded real-time system is called an embedded real-time operating system: it is both an embedded operating system and a real-time operating system. As an embedded operating system it is configurable (it can be trimmed down), occupies few resources, and consumes little power; as a real-time operating system (see the earlier discussion of real-time operating system characteristics), the discussion here is limited to hard real-time operating systems, and the "real-time operating system" mentioned below refers to a hard real-time operating system. Such a system differs considerably from a general-purpose operating system (such as Windows, UNIX, or Linux). Below we describe the main features of a real-time operating system by comparing the two kinds of systems point by point.

The systems we are most exposed to in daily work and study are general-purpose operating systems, which evolved from time-sharing operating systems. Most of them support multiple users and multiple processes, managing numerous processes and allocating system resources to them. The basic design principle of a time-sharing operating system is to minimize the system's average response time and maximize its throughput, serving as many users as possible per unit of time. In other words, a general-purpose operating system cares about average-case (statistical) performance rather than individual-case performance. For example, for the system as a whole it cares about the average response time of all tasks rather than the response time of any single task, and for a single task it cares about the average response time over many executions rather than the response time of any particular execution. Many of the strategies and techniques used in general-purpose operating systems reflect this design principle. Virtual memory management is a typical example: with page-replacement algorithms such as LRU, most memory accesses are satisfied directly from physical memory and only a small fraction require paging from disk, so the average access time is not much worse than running without virtual memory, while the usable address space can be far larger than physical memory. Because of these benefits, virtual memory is used almost universally in general-purpose operating systems. There are many similar examples, such as the indirect-index lookup mechanism for file blocks in the UNIX file system; even hardware caches and CPU dynamic branch prediction reflect the same principle. The influence of this design principle, which optimizes average (statistical) performance, is thus very far-reaching.

For a real-time operating system, as mentioned earlier, besides meeting the functional requirements of the application it must above all satisfy the application's timing requirements. The many real-time tasks that make up an application can have widely varying timing requirements, and there may also be complex interactions and synchronization relationships among them, such as ordering constraints and mutually exclusive access to shared resources, which makes guaranteeing the system's real-time behavior difficult. Therefore, the most important design principle of a real-time operating system is: whatever algorithms and strategies are used, the predictability of system behavior must always be guaranteed. Predictability means that at any time during system operation, and under any circumstances, the resource allocation policies of the real-time operating system can allocate resources, including CPU time, memory, and network bandwidth, among the real-time tasks competing for them in such a way that the timing requirements of every real-time task are met. Unlike a general-purpose operating system, a real-time operating system does not focus on the system's average performance; it requires every real-time task to meet its timing requirements even in the worst case. That is, a real-time operating system focuses on individual performance, or more precisely, on individual worst-case behavior. For example, if a real-time operating system adopted standard virtual memory techniques, then in the worst case every memory access of a real-time task could trigger a page fault; accumulated over the whole task, its worst-case execution time would become unpredictable and the task's timing behavior could not be guaranteed. It follows that the virtual memory techniques so widely used in general-purpose operating systems are not suitable for direct use in real-time operating systems.

Because the basic design principles of real-time operating systems and general-purpose operating systems differ, there are large differences in how they choose resource scheduling policies and in how the operating systems are implemented. These differences are mainly reflected in the following points:

(1) Task scheduling policy:

The task scheduling policy in a general-purpose operating system is generally priority-based preemptive scheduling. Processes at the same priority level are scheduled round-robin by time slice; a user process can dynamically adjust its own priority through system calls, and the operating system may also adjust the priority of certain processes according to the situation.

Two task scheduling policies are currently the most widely used in real-time operating systems: one is static table-driven scheduling, and the other is fixed-priority preemptive scheduling.

In static table-driven scheduling, a run-time schedule is generated before the system runs, either manually or with the help of auxiliary tools, according to the timing requirements of each task. The schedule is similar to a train timetable: it specifies the start time and run length of each task. The schedule does not change at run time, and the scheduler only needs to start the corresponding task at the time given in the table. The main advantages of the static table-driven approach are:

Ø The run-time schedule is generated before the system runs, so more sophisticated search algorithms can be used to find a better schedule;

Ø The run-time scheduler overhead is small, since the scheduler only has to look up the table and start the corresponding task;

Ø The system is highly predictable, which also makes real-time verification relatively convenient.

The main disadvantage of this approach is inflexibility: once the requirements change, the entire run-time schedule has to be regenerated.

Because of its very good predictability, this approach is mainly used in fields with extremely strict real-time requirements, such as aerospace and military systems.
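As an illustration of the idea, the following is a minimal sketch of a table-driven dispatcher in C. The schedule table, the 100 ms major cycle, and the three task functions are purely hypothetical examples, not part of any particular RTOS; a real static table-driven system executes an offline-generated table in much the same spirit.

```c
/* Minimal sketch of a static table-driven (cyclic executive) dispatcher.
 * The table, tick length, and task functions are illustrative assumptions,
 * not part of any particular RTOS API. */
#include <stdio.h>
#include <time.h>

#define MAJOR_CYCLE_MS 100   /* length of one complete pass over the table */

typedef void (*task_fn)(void);

static void sample_sensor(void)  { puts("sample_sensor"); }
static void update_control(void) { puts("update_control"); }
static void send_telemetry(void) { puts("send_telemetry"); }

/* Offline-generated schedule: each entry gives a release offset (ms from
 * the start of the major cycle) and the task to run at that point. */
struct slot { long offset_ms; task_fn task; };
static const struct slot schedule[] = {
    {  0, sample_sensor  },
    { 20, update_control },
    { 60, send_telemetry },
};

static void sleep_until(const struct timespec *t)
{
    while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, t, NULL) != 0)
        ;   /* retry if interrupted by a signal */
}

int main(void)
{
    struct timespec cycle_start;
    clock_gettime(CLOCK_MONOTONIC, &cycle_start);

    for (;;) {   /* the dispatcher simply replays the table forever */
        for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
            struct timespec t = cycle_start;
            t.tv_nsec += schedule[i].offset_ms * 1000000L;
            t.tv_sec  += t.tv_nsec / 1000000000L;
            t.tv_nsec %= 1000000000L;
            sleep_until(&t);
            schedule[i].task();      /* run the task at its planned slot */
        }
        /* advance to the start of the next major cycle */
        cycle_start.tv_nsec += MAJOR_CYCLE_MS * 1000000L;
        cycle_start.tv_sec  += cycle_start.tv_nsec / 1000000000L;
        cycle_start.tv_nsec %= 1000000000L;
    }
}
```

A production cyclic executive would also detect slot overruns and load the table produced by the offline scheduling tool, but the control structure is essentially the same.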

Fixed-priority preemptive scheduling is basically similar to the priority-based scheduling used in general-purpose operating systems, except that each process's priority is fixed: it is assigned before the system runs according to a priority assignment policy (such as rate-monotonic or deadline-monotonic). The advantages and disadvantages of this approach are essentially the reverse of those of the static table-driven approach. It is mainly used in relatively simple, self-contained embedded systems, but as scheduling theory continues to mature it is gradually being applied in fields with very strict real-time requirements as well. Most of the real-time operating systems currently on the market use this scheduling approach.
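As a small illustration of the rate-monotonic policy mentioned above, the sketch below applies the classic Liu and Layland utilization bound, a sufficient (not necessary) schedulability test for fixed-priority rate-monotonic scheduling: a set of n independent periodic tasks is schedulable if the total utilization does not exceed n(2^(1/n) - 1). The example task set is an assumption made up for the illustration.

```c
/* Minimal sketch of a rate-monotonic (RM) schedulability check using the
 * Liu & Layland utilization bound:
 *   sum(C_i / T_i) <= n * (2^(1/n) - 1)
 * The example task set is an illustrative assumption. */
#include <math.h>
#include <stdio.h>

struct task { double wcet; double period; };   /* C_i and T_i, same time unit */

static int rm_schedulable(const struct task *ts, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += ts[i].wcet / ts[i].period;        /* total utilization */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("utilization %.3f, bound %.3f\n", u, bound);
    return u <= bound;                         /* sufficient, not necessary */
}

int main(void)
{
    struct task set[] = { {1, 4}, {2, 8}, {1, 16} };  /* hypothetical tasks */
    printf("RM-schedulable by the L&L test: %s\n",
           rm_schedulable(set, 3) ? "yes" : "needs exact analysis");
    return 0;
}
```

If the test fails, the task set may still be schedulable; an exact response-time analysis would then be needed.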

(2) Memory management:

Virtual memory management has already been discussed above. To overcome the unpredictability that virtual memory introduces, real-time operating systems generally take one of two approaches:

Ø Add a page-locking capability on top of the original virtual memory management mechanism, so that the user can lock critical pages in memory and the swapper can never page them out. The advantage of this approach is that the benefits of virtual memory management for software development are preserved while the predictability of the system is improved. The disadvantage is that mechanisms such as the TLB are still designed around average-case performance, so the predictability of the system still cannot be fully guaranteed (a POSIX-style page-locking sketch appears after this subsection);

Ø Use static memory partitioning, assigning a fixed memory region to each real-time task. The advantage of this approach is that the system is highly predictable. The disadvantages are inflexibility, since the memory regions must be re-partitioned whenever the task set changes, and the loss of the benefits of virtual memory management.

Real-time operating systems currently on the market generally adopt the first approach.
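As a concrete example of that first approach, the sketch below uses the POSIX mlockall() call to pin a process's pages in physical memory; the surrounding program structure is a hypothetical minimal example.

```c
/* Minimal sketch of the "page locking" approach, using POSIX mlockall():
 * lock the process's current and future pages in RAM so the swapper
 * cannot evict them at an unpredictable moment. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* MCL_CURRENT: pin pages already mapped; MCL_FUTURE: pin pages mapped
     * later (stack growth, heap allocations, shared libraries). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");    /* typically needs elevated privileges */
        return 1;
    }

    /* ... time-critical work runs here with no risk of page-out ... */

    munlockall();              /* release the locks when real-time work ends */
    return 0;
}
```

Locking pages trades memory flexibility for predictability, which is exactly the trade-off described in the first bullet above.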

(3) Interrupt processing:

In a general-purpose operating system, most external interrupts are enabled, and interrupt handling is generally done by device drivers. Since user processes in a general-purpose operating system generally have no timing requirements, while interrupt handlers interact directly with hardware devices and may have timing requirements, the priority of interrupt handlers is set higher than that of any user process.

This interrupt handling scheme is not suitable for a real-time operating system, for two reasons. First, external interrupts are inputs from the environment to the real-time system; their frequency depends on the rate at which the environment changes and is independent of the real-time operating system. If the frequency of external interrupts is unpredictable, then the time a real-time task loses to interrupt handlers at run time is also unpredictable, and the task's timing behavior cannot be guaranteed. If the frequency of external interrupts is predictable, the predictability of the whole system can still be destroyed once some external interrupt exceeds its predicted frequency (for example, spurious interrupt signals caused by a hardware fault, or an incorrect prediction in the first place). Second, user processes in a real-time operating system generally do have timing requirements, so it is inappropriate to give interrupt handlers a higher priority than all user processes. One interrupt handling approach suitable for real-time operating systems is: mask all interrupts except the clock interrupt, and replace interrupt handlers with periodic polling, carried out either by kernel-mode device drivers or by user-mode device support libraries. The main benefit of this approach is that the predictability of the system is fully guaranteed; the main drawbacks are that the response to environmental changes may not be as fast as with interrupt-driven handling, and that polling reduces the effective CPU utilization to some extent. Another possible approach is to keep interrupts enabled for those external events that polling cannot serve quickly enough, while still polling for everything else. In this case, however, the interrupt handlers are given priorities just like other tasks, and the scheduler dispatches the ready tasks and interrupt handlers to the processor uniformly by priority. This approach speeds up the response to external events and avoids the second problem of the conventional interrupt scheme, but the first problem remains.

In addition, to keep the response time of the clock interrupt predictable, a real-time operating system should mask interrupts as rarely and as briefly as possible.
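To make the polling approach above concrete, here is a minimal sketch of a periodic polling task on a POSIX-style system; poll_device(), handle_event(), and the 1 ms period are assumptions made up for the illustration.

```c
/* Minimal sketch of the polling approach: instead of taking a device
 * interrupt, a periodic task wakes up at a fixed rate (driven, in effect,
 * only by the clock) and checks the device status itself. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define POLL_PERIOD_NS 1000000L    /* 1 ms polling period */

static bool poll_device(void)      /* hypothetical: read a status register */
{
    return false;                  /* pretend no event is pending */
}

static void handle_event(void)     /* hypothetical event handler */
{
    puts("device event handled");
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        /* absolute-time sleep avoids drift accumulating across iterations */
        next.tv_nsec += POLL_PERIOD_NS;
        next.tv_sec  += next.tv_nsec / 1000000000L;
        next.tv_nsec %= 1000000000L;
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) != 0)
            ;

        if (poll_device())         /* bounded, periodic cost instead of an
                                      unpredictable burst of interrupts */
            handle_event();
    }
}
```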

(4) Mutually exclusive access to shared resources:

General-purpose operating systems generally use semaphore mechanisms to solve the problem of mutually exclusive access to shared resources.

For a real-time operating system, if task scheduling uses the static table-driven approach, mutual exclusion on shared resources is already taken into account when the run-time schedule is generated and need not be considered at run time. If task scheduling is priority-based, the traditional semaphore mechanism can easily cause priority inversion at run time: when a high-priority task tries to access a shared resource through the semaphore mechanism, the semaphore may already be held by a low-priority task, and that low-priority task may in turn be preempted by other medium-priority tasks while it is using the shared resource, so the high-priority task ends up blocked by many lower-priority tasks and its timing requirements are hard to guarantee. Therefore, real-time operating systems usually extend the traditional semaphore mechanism with protocols such as the priority inheritance protocol, the priority ceiling protocol, and the stack resource policy, which solve the priority inversion problem well.
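As one concrete realization of the priority inheritance protocol, POSIX threads allow a mutex to be created with the PTHREAD_PRIO_INHERIT protocol; the sketch below shows the setup, with the surrounding program being a hypothetical minimal example (PTHREAD_PRIO_PROTECT would select a priority-ceiling variant instead).

```c
/* Minimal sketch of a mutex with the priority inheritance protocol on a
 * POSIX system: a low-priority task holding the lock temporarily inherits
 * the priority of any high-priority task it blocks. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t shared_lock;

static int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* PTHREAD_PRIO_INHERIT enables priority inheritance;
     * PTHREAD_PRIO_PROTECT would select the priority ceiling protocol
     * (combined with pthread_mutexattr_setprioceiling). */
    int rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc != 0) {
        fprintf(stderr, "setprotocol failed: %d\n", rc);
        return -1;
    }
    return pthread_mutex_init(&shared_lock, &attr);
}

/* A task then simply brackets its critical section as usual: */
static void critical_section(void)
{
    pthread_mutex_lock(&shared_lock);
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&shared_lock);
}

int main(void)
{
    if (init_pi_mutex() != 0)
        return 1;
    critical_section();
    return 0;
}
```

The protocol only has a visible effect when the competing tasks run at different real-time priorities (for example under SCHED_FIFO).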

(5) Time overhead of system calls and internal operations:

Processes obtain operating system services through system calls, and the operating system carries out some of its internal management work, such as context switching, through internal operations. To guarantee the predictability of the system, the time overhead of every system call and every internal operation of a real-time operating system should be bounded, and the bound should be a concrete, quantified value. General-purpose operating systems make no such guarantee about these time overheads.
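A bound like this has to be established analytically by the RTOS vendor; the following minimal sketch only shows the empirical counterpart, measuring the worst observed latency of one operation over many repetitions. The choice of sched_yield() as the operation under test and the iteration count are assumptions made for the illustration.

```c
/* Minimal sketch of empirically observing one operation's time overhead:
 * call it many times and record the worst observed latency. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    long worst = 0;
    for (int i = 0; i < 100000; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        sched_yield();                        /* the operation under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long d = elapsed_ns(t0, t1);
        if (d > worst)
            worst = d;
    }
    printf("worst observed latency: %ld ns\n", worst);
    return 0;
}
```

A measurement like this can only ever give a lower bound on the true worst case, which is why the text above asks for an analytically established bound.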

(6) Preemptibility of the kernel:

In a general-purpose operating system, kernel-mode system calls are often non-preemptible: when a low-priority task is inside a kernel-mode system call, a high-priority task that arrives during that time must wait until the system call completes before it can get the CPU, which reduces the predictability of the system. Therefore, kernel-mode system calls in a real-time operating system are usually designed to be preemptible.

(7) Auxiliary tools: A real-time operating system provides auxiliary tools, such as tools for estimating the worst-case execution time of real-time tasks and tools for verifying the real-time behavior of the system, which help engineers confirm that the system meets its timing requirements.

In addition, real-time operating systems place some requirements on the hardware design of the system, including the following:

(1) DMA

DMA is a data transfer mechanism whose main function is to exchange data between memory and other external devices without CPU involvement. One of the most common DMA implementations is the cycle-stealing mode: the DMA controller first competes with the CPU for control of the bus through the bus arbitration protocol, and after gaining control it transfers data according to operation commands preset by the user. Because cycle stealing imposes unpredictable extra blocking delays on user tasks, real-time operating systems often require that the system be designed without DMA, or that a predictable DMA implementation be adopted, such as the time-slice method.

(2) Cache

The main role of a cache is to use a relatively small but fast memory component to bridge the performance gap between a high-performance CPU and relatively slow main memory. Because it greatly improves the average performance of the system, it is used extremely widely in hardware design.

A real-time operating system, however, cares not about average performance but about individual worst-case behavior, so real-time verification of the system must consider the worst case of each real-time task's execution, that is, the execution time when no memory access hits the cache. Therefore, when using auxiliary tools to estimate the worst-case execution time of a real-time task, all cache functions in the system should be temporarily turned off, and the cache should be enabled again when the system actually runs. A more extreme approach is to avoid using caches in the hardware design altogether.

