Linux Embedded Real-Time Operating System Development and Design (3)


Chapter 2 Related Research on Real-Time Systems

Although a wide variety of operating systems exists today, UNIX and its compatible systems remain the mainstream operating systems in both industry and academia. Some non-UNIX systems, such as Windows NT, are also compatible with the POSIX 1003 standards, and that standard is unquestionably rooted in UNIX. The success of these systems is due to their openness, stability, and de facto standard status. With the release of the POSIX 1003.1b real-time extensions, UNIX has the opportunity to become the most widely deployed real-time processing platform. As a UNIX-like system, Linux has found increasingly wide application thanks to the advantages of its open source code.

For these reasons, this chapter focuses on real-time systems related to Linux. I discuss the problems Linux has with real-time operation, and how several existing systems solve them.

2.1 Real-Time Limitations of Linux

UNIX was originally designed as a time-sharing system [17]. Linux is a clone of UNIX, and many current implementations retain these time-sharing characteristics: they strive for optimal average performance. That goal usually conflicts with the low latency and high predictability demanded by real-time systems. To illustrate the problem, consider a program that drives the PC speaker (Program 2.1).

#define DELAY 10000                   /* busy-wait count: sets the half-period of the tone */

int main(void)
{
    int i;

    while (1) {
        for (i = 0; i < DELAY; i++)   /* busy-wait for half a period */
            ;
        speaker_on();                 /* driver call: speaker ON */
        for (i = 0; i < DELAY; i++)   /* busy-wait for the other half */
            ;
        speaker_off();                /* driver call: speaker OFF */
    }
}

Program 2.1 A simple tone-generating program

The speaker driver is assumed to have only two states, ON or OFF. At first glance this program should output a square wave of fixed period, so the speaker should produce a steady tone. However, when run as a standard Linux program, it does not sound right.

I ran this program under Linux on a 412 MHz Celeron processor. With no other programs in the system, the speaker produced a stable tone; every tick could be heard. Pressing a key or moving the mouse made the sound intermittent. Executing a disk operation or a compute-intensive program distorted the sound severely. Finally, starting a large program such as X Windows silenced the speaker for about half a second. If this program were controlling a stepper motor instead of driving the speaker, it could not keep the motor running smoothly.

The design and implementation principles of Linux are by and large those of UNIX [12]: fair scheduling, low timing resolution, a non-preemptible kernel, interrupt disabling, and virtual memory. We consider each of these issues in detail.

The scheduler is the set of policies and mechanisms built into the operating system that determines which work the computer performs [4].

Most UNIX systems, and Linux in particular, have schedulers that pursue good average response time, high throughput, and fair allocation of CPU time among processes [16]. Each process's priority is determined dynamically from its recent CPU usage, its input/output intensity, and other factors.

Linux schedules CPU time in fixed time slices. A newly started process is given a high priority. If the process yields the CPU before its time slice expires, its priority stays the same or rises; on the other hand, if it uses up its whole slice, its priority drops. This strategy favors interactive programs such as editors, because they spend most of their time waiting for I/O, which benefits the user at the terminal. But since a program's progress then depends entirely on a complex, unpredictable system load and on the activity of other processes, this kind of scheduling is useless for real-time processes.

The POSIX real-time extensions added to Linux introduce the concept of real-time processes, allowing a process to be marked as real-time. Linux distinguishes real-time processes from ordinary processes and schedules them with different policies: first-in first-out scheduling (SCHED_FIFO) and round-robin time-slice scheduling (SCHED_RR). Under SCHED_RR, once a task uses up its time slice it moves to the tail of its priority queue, letting other tasks of the same priority run; if there are no other tasks at that priority, it simply runs for another time slice. SCHED_FIFO is a run-until-blocked policy: SCHED_FIFO tasks are scheduled by priority, and once started they run until they finish or block on some resource, never sharing the processor with equal-priority tasks the way SCHED_RR tasks do. There is also the problem of timer precision. Previously, the alarm signal and the sleep() system call offered to user processes had a resolution of only one second, a granularity too coarse for most real-time processes. Current versions provide finer time intervals, but the kernel's clock implementation still limits timing accuracy. This is discussed in more detail later.
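As an illustration, here is a minimal sketch of how a user process requests SCHED_FIFO through the standard POSIX interface (the priority chosen and the root-privilege requirement are assumptions about a typical configuration):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param;

    /* Request the highest FIFO priority the system allows. */
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);

    /* pid 0 means the calling process; usually requires root. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    /* From here on this process preempts all ordinary (SCHED_OTHER)
       processes and runs until it blocks or exits. */
    return 0;
}

Note that even a SCHED_FIFO process remains subject to the kernel-side delays discussed next.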

Most of the Linux kernel is non-preemptible [10]. In other words, once a process enters kernel mode, it runs until the system call completes or blocks. If a higher-priority real-time process becomes ready during this time, it has to wait. This design makes kernel development simpler, because kernel reentrancy need not be considered. However, a system call may take a long time, and such a delay is unacceptable for a real-time process.

A problem related to kernel non-preemption is synchronization inside the system. To protect data that may also be touched asynchronously by interrupt handlers, system designers usually disable interrupts around critical-section code. Compared with semaphores or spinlocks this is a simple and effective technique. However, disabling interrupts sacrifices throughput and the system's ability to respond quickly to external events, and it still does not solve the synchronization problem on multiprocessor systems.
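The pattern looks like the following kernel-side fragment (a sketch in the style of the Linux 2.x macros save_flags()/cli()/restore_flags(); shared_count is a hypothetical variable shared with an interrupt handler):

static int shared_count;        /* data also updated by an interrupt handler */

void increment_shared(void)
{
    unsigned long flags;

    save_flags(flags);          /* remember the current interrupt state */
    cli();                      /* disable interrupts: enter the critical section */
    shared_count++;             /* the protected update */
    restore_flags(flags);       /* restore the previous interrupt state */
}

Every microsecond spent between cli() and restore_flags() is added directly to the worst-case interrupt latency measured in section 2.2.1.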

Linux uses paged virtual memory [10]. Virtual memory keeps only the currently active parts of a program in RAM, allowing running programs to exceed the capacity of physical memory. This approach works well in a time-sharing system. For a real-time system, however, a page fault at the wrong moment can delay a process beyond what it can tolerate.
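The POSIX 1003.1b extensions offer a partial remedy: a process can pin its pages in RAM with mlockall(). A minimal sketch (error handling reduced to a message):

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    /* Lock all current and future pages of this process in memory,
       so the time-critical code below cannot incur page faults. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall");
        return 1;
    }

    /* ... time-critical work, free of paging delays ... */
    return 0;
}

This removes paging delays for the locked process, but it does nothing about the scheduler, the non-preemptible kernel, or interrupt disabling.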

Considering all these factors, it is clear that traditional Linux is not suitable for real-time processing; some fundamental changes are needed.

2.2 Linux Performance Test

To gain an intuitive understanding of the performance of Linux, I tested a Linux system. The tests cover interrupt latency and context-switch time. The results are analyzed to find ways to improve Linux's latency behavior.

2.2.1 Interrupt Latency Test

Interrupts fall into two types: synchronous and asynchronous. For applications, the important ones are asynchronous interrupts. The occurrence of an asynchronous interrupt is illustrated in Figure 2.1. The interrupt response time is the interval between the moment the interrupt occurs and the moment the interrupt handler begins executing. It includes the time until the running task stops, plus the dispatch time.

Figure 2.1 Asynchronous interrupt and interrupt response time

The interrupt response time is not a constant; it depends on the operating system and on the hardware platform. The time during which interrupts are disabled cannot be measured directly from the definition above, because the interval from the interrupt's occurrence to the moment the current task stops is part of the interrupt latency. In Linux, the kernel or a driver disables and re-enables interrupts explicitly, typically by calling __cli() and __sti(). The interrupt-latency program therefore measures the time between a matched pair of __cli()/__sti() calls: the system time is recorded when __cli() is called, and again when __sti() is called; the difference is the interrupt-disabled time. The interrupt-disabled times measured under Linux are shown in Figure 2.2:

The interrupt-disabled-time test program redefines the __cli()/__sti() macros so that each call also logs the file and location from which it was made, as sketched below. This information makes it possible to identify the critical interrupt-disabled regions in Linux. (The interrupt test program is given in Appendix A.) I ran Linux under this test for approximately three hours; several programs ran during the test, including a looping disk-copy program, and several applications were opened. The results show that when the system load is relatively heavy, the system's page scheduling consumes the most time, nearly 500 microseconds. Tables 2.1 and 2.2 give the statistics.
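The instrumentation works along these lines (a sketch: the record_* helpers are hypothetical stand-ins for the logging code in Appendix A):

/* Redefine the kernel's interrupt off/on macros so every call
   is timestamped and attributed to its call site. */
#define __cli()                                           \
    do {                                                  \
        __asm__ __volatile__("cli");   /* interrupts off */ \
        record_cli_timestamp(__FILE__, __LINE__);         \
    } while (0)

#define __sti()                                           \
    do {                                                  \
        record_sti_timestamp(__FILE__, __LINE__);         \
        __asm__ __volatile__("sti");   /* interrupts back on */ \
    } while (0)

The difference between the two timestamps of a matched pair is one interrupt-disabled interval.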

Figure 2.2 Interrupt-disabled time

Table 2.1 Interrupt-disabled time statistics

Table 2.2 Probability density function of the interrupt-disabled time

It can be seen that the longest interrupt-disabled interval on my test system reaches 496 microseconds, and typical interrupt-disabled times lie between roughly 250 and 300 microseconds. The test does not cover every case, but the results already show that the design choices of the Linux system (fair scheduling, low timing resolution, a non-preemptible kernel, interrupt disabling, and virtual memory) are the reason interrupts stay disabled for so long.

2.2.2 Context Switching Test

The context-switch time is the time needed to save one process's state and restore another's. I wrote a test program to measure it (see Appendix B); the technique is sketched below. When the program runs, it creates a number of processes determined by its input parameter. All the processes are connected in a ring of UNIX pipes, and a token is passed around the ring, forcing context switches between the processes. The program records the time for the token to travel around the ring 2,000 times. Each token pass carries two overheads, the context-switch overhead and the token-transfer overhead; the program first measures the cost of passing the token through the pipes and subtracts it from the reported result.
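A simplified sketch of the ring test (the process count, pass count, and omitted cleanup are illustrative; the real program is in Appendix B):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

#define NPROCS  8       /* ring size; the real program takes this as a parameter */
#define NPASSES 2000    /* trips of the token around the ring */

int main(void)
{
    int pipes[NPROCS][2];
    struct timeval start, end;
    char token = 't';
    int i, p;

    for (i = 0; i < NPROCS; i++)
        pipe(pipes[i]);

    /* Child p reads the token from pipe p-1 and writes it to pipe p. */
    for (p = 1; p < NPROCS; p++) {
        if (fork() == 0) {
            for (;;) {
                if (read(pipes[p - 1][0], &token, 1) != 1)
                    _exit(0);
                write(pipes[p][1], &token, 1);
            }
        }
    }

    /* The parent acts as process 0: inject the token and time the trips. */
    gettimeofday(&start, NULL);
    for (i = 0; i < NPASSES; i++) {
        write(pipes[0][1], &token, 1);
        read(pipes[NPROCS - 1][0], &token, 1);
    }
    gettimeofday(&end, NULL);

    /* Each trip is NPROCS passes, each forcing one context switch. */
    printf("per pass: %.2f us\n",
           ((end.tv_sec - start.tv_sec) * 1e6 +
            (end.tv_usec - start.tv_usec)) / (double)(NPASSES * NPROCS));
    return 0;    /* cleanup of children and descriptors omitted for brevity */
}

In the real program, the pure pipe-transfer cost is measured separately and subtracted, as described above.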

To measure a more realistic switch time, I gave each process an artificial block of data to touch, so that the switch time includes the time to save and restore user-level data state. The test results are shown in Table 2.3: the Y axis represents the switch time, the X axis the number of processes, and separate curves the size of each process.

The results show that the switch time grows as the process size grows. Below 16 KB the increase is small, because the process still fits within the cache; beyond 16 KB it grows faster, and at a process size of 64 KB the switch time reaches 300 microseconds. The reason Linux's switch time is so large is that the system saves too much state. Interrupts are disabled during the context switch, which means the kernel-level interrupt-disabled time can exceed 300 microseconds. That is unacceptable for real-time applications.

Table 2.3 Context switch time

