Linux Device Drivers: Flow of Time, Delaying Execution (reprint)


6.3 Delaying Execution

Device drivers often need to delay the execution of a particular piece of code for a period of time, usually to allow the hardware to complete some task. This section introduces a number of different techniques for achieving delays; which technique is best depends on the circumstances of the actual environment. We will introduce all of them and point out the advantages and disadvantages of each. One important thing to consider is whether the delay you need is longer than one clock tick. Longer delays can make use of the system clock; shorter delays usually must be obtained with software loops.

6.3.1 Long Delays

If you want to delay execution by a multiple of the clock tick, or if the precision requirement is not high (for example, when delaying an integer number of seconds), the easiest implementation is the following, the so-called "busy waiting":

    unsigned long j = jiffies + jit_delay * HZ;
    while (jiffies < j)
        /* nothing */;

This kind of implementation should of course be avoided. We show it here only because readers may want to run this code someday to better understand the other delay techniques.

Let's still look at how this code works. Because jiffies is declared as a volatile variable in the kernel headers, it is re-read every time C code accesses it, so the loop really does delay. Although this too is a "correct" implementation, the busy-waiting loop locks up the processor for the duration of the delay, because the scheduler never interrupts a process running in kernel space. Worse, if interrupts happen to be disabled when the loop is entered, the jiffies value will never be updated, and the condition of the while loop stays true forever. At that point you will have to press the big red button (that is, the power button).

This delay, and the several other delay methods that follow, are implemented in the jit module. All the /proc/jit* files created by the module delay one second each time they are read. To test busy waiting, you can read the /proc/jitbusy file; when the file's read method is called, it enters the busy-waiting loop and delays one second, so a command like dd if=/proc/jitbusy bs=1 delays one second for every character it reads.

As you can imagine, reading /proc/jitbusy severely affects system performance, because the computer cannot run any other process during each one-second delay.

A better delay method is the following, which allows other processes to run during the delay interval, although it cannot be used in hard real-time tasks or other situations with strict timing requirements:

    while (jiffies < j)
        schedule();

The variable j in this example, and in the following examples, is the jiffies value at which the delay should expire; it is computed the same way as in the busy-waiting case.

This loop (which can be tested by reading the /proc/jitsched file) is still not an optimal delay method. The system can schedule other tasks; the current task does nothing but release the CPU, yet it remains in the run queue. If it is the only runnable process in the system, it will actually be run again (the process calls the scheduler, the scheduler selects the same process, which calls the scheduler again, and so on). In other words, the machine's load (the average number of runnable processes) is at least 1, and the idle process (process number 0, called "swapper" for historical reasons) never runs. Although this may seem unimportant, letting the idle process run when the system is otherwise idle reduces the processor load, lowers the processor temperature, and extends the processor's life; on a laptop it also extends battery life. Moreover, the process really is executing during the delay, so all the time spent delaying is accounted to it. You can verify this by running the command time cat /proc/jitsched. There is another problem: if the system is very busy, the driver may end up waiting far longer than expected. Once a process releases the processor, there is no guarantee it will get it back any time soon. If there is an upper bound on the acceptable delay time, calling schedule this way is not a safe solution for the driver.

Despite these problems, this loop provides a "dirty" but quick way to monitor the workings of a driver. If a bug in your module locks up the whole system, adding a small delay after each printk statement for debugging ensures that every printed message reaches the system log before the processor hits the offending bug. Without such delays, the messages would only reach the memory buffer, and the system might lock up before klogd gets a chance to run.
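As a minimal sketch of this debugging trick, the message text and driver name below are invented, and mdelay (a busy-waiting delay covered in the next section) stands in for a hand-rolled loop:

    /* Debugging aid: busy-wait after each message so it reaches the
     * system log before a possible lockup freezes the machine. */
    printk(KERN_DEBUG "mydriver: about to touch the suspect register\n");
    mdelay(100);   /* wait a tenth of a second */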

The best way to implement a delay, however, is to ask the kernel to do it for us. There are two ways to set up jiffies-based timeouts, depending on whether the driver is waiting for other events as well.

If the driver uses a wait queue to wait for some event, but you also want to be sure the driver runs again after a certain period of time, you can use the timeout versions of the sleep functions, which were introduced in Chapter 5 in the section on sleeping and waking up:

    sleep_on_timeout(wait_queue_head_t *q, unsigned long timeout);
    interruptible_sleep_on_timeout(wait_queue_head_t *q, unsigned long timeout);

Both implementations let the process sleep on the given wait queue, but they return when the timeout (expressed in jiffies) expires. They thus implement a sleep that is guaranteed not to go on forever. Note that the timeout value represents the number of jiffies to wait, not an absolute time value. Delaying in this way can be seen in the implementation of /proc/jitqueue:

    wait_queue_head_t wait;
    init_waitqueue_head(&wait);
    interruptible_sleep_on_timeout(&wait, jit_delay * HZ);

In a normal driver, execution could be resumed in either of two ways: somebody calls wake_up on the wait queue, or the timeout expires. In this particular implementation, however, nobody will ever call wake_up (after all, no other code even knows about the queue), so the process always wakes up because of the timeout. That is a perfectly valid implementation, but if the driver has no other events to wait for, the delay can be obtained in a more straightforward way with schedule_timeout:

    set_current_state(TASK_INTERRUPTIBLE);
    schedule_timeout(jit_delay * HZ);

The code above (implemented in /proc/jitself) puts the process to sleep until the given time has passed. schedule_timeout, too, handles a time increment rather than an absolute jiffies value. As before, a small amount of extra time may elapse between the timeout expiring and the process actually being scheduled, but in practice this is unimportant.

6.3.2 Short Delays

Sometimes a driver needs a very short delay to synchronize with the hardware. In this case, using jiffies cannot serve the purpose.

This is where the kernel functions udelay and mdelay come in.*

* The u in udelay represents the Greek letter mu (μ), which stands for "micro".

The prototypes are as follows:

    #include <linux/delay.h>
    void udelay(unsigned long usecs);
    void mdelay(unsigned long msecs);

These functions are compiled as inline functions on most architectures. The former uses a software loop to delay execution for the specified number of microseconds; the latter is a loop around udelay, provided as a convenience for program development. The udelay function is where the BogoMIPS value is used: its loop is calibrated with the integer value loops_per_second, which is the result of the BogoMIPS calculation performed at boot time.

The udelay function should only be used to obtain fairly short time delays, because the precision of the loops_per_second value is only eight bits, so a considerable error accumulates when computing longer delays. Although the maximum allowable delay is nearly one second (longer delays would overflow), the recommended maximum value for udelay's argument is 1000 microseconds (one millisecond). The function mdelay can be used for delays longer than one millisecond.

It is particularly important to remember that udelay is a busy-waiting function (and therefore so is mdelay); no other task can run during the delay. You must therefore be very careful, especially with mdelay, and avoid using these functions unless there is no other way.
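Purely as an illustration, here is how a driver might use udelay to respect a short settling time after programming a device register; the port address, command value, and delay length are all invented for the example:

    #include <linux/delay.h>
    #include <asm/io.h>

    /* Hypothetical device that needs 50 us of settling time after a
     * command byte is written before the next access is legal. */
    static void example_send_command(unsigned int ioport, unsigned char cmd)
    {
        outb(cmd, ioport);  /* write the (made-up) command register */
        udelay(50);         /* busy-wait for the assumed settling time */
    }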

There is currently no efficient way to obtain delays longer than a few microseconds but shorter than one clock tick, but this is rarely a problem, because delays need to be long enough to be noticed by either humans or the hardware. One hundredth of a second is a suitable precision for human-related time intervals, and one millisecond is long enough for hardware activity.

mdelay does not exist in Linux 2.0; the header file sysdep.h makes up for this deficiency.

6.4 Task Queues

Many drivers need to defer a task until later without resorting to interrupts. Linux offers three ways to do this: task queues, tasklets (as of kernel 2.3.43), and kernel timers. Task queues and tasklets provide a flexible way to defer tasks, for a longer or shorter time, until a future point; they are particularly useful when writing interrupt handlers, and we will return to them in Chapter 9 in the "Tasklets and Bottom Halves" section. Kernel timers, which schedule a task to run at a specified time in the future, are discussed in the "Kernel Timers" section of this chapter.

A typical situation in which you might use task queues or tasklets is when the hardware does not generate interrupts, but you still want to offer a blocking read. In that case the device must be polled, while being careful not to burden the CPU with unnecessary operations. Waking the reading process at fixed time intervals (for example, using the current->timeout variable) is not a good approach, because each poll requires two context switches (one to switch to the reading process to run the polling code, and one to return to a process doing actual work); in general, the proper polling mechanism should be implemented outside the process. A similar situation arises when feeding input to a simple hardware device from time to time. For example, imagine a stepper motor connected directly to the parallel port, which must be advanced one step at a time. In this case, the controlling process notifies the device driver about the movement, but the actual stepping takes place in periodic time intervals after write has already returned.

The preferred way to perform this kind of deferred operation quickly is to register a task for later execution. The kernel provides support for "task queues", where tasks accumulate and are "consumed" when the queue is run. You can declare your own task queue and trigger it at will, or you can register your tasks in one of the predefined task queues, which are run (triggered) by the kernel itself.

This section first gives an overview of task queues, then introduces the predefined task queues, which let you start doing some interesting tests (and can also hang the system if something goes wrong), and finally describes how to run your own task queues. After that, we look at the new tasklet interface, which replaces task queues in many situations in the 2.4 kernel.

6.4.1 The Nature of Task Queues

A task queue is actually a linked list of tasks, where each task is represented by a function pointer and an argument. When a task runs, it receives a single void * argument and returns void; the pointer argument can be used to pass in a data structure, or it can be ignored. The queue itself is a linked list of structures (the tasks), owned by the kernel module that declares and manipulates them. The module is entirely responsible for allocating and freeing these data structures; static data structures are generally used for this purpose.

A queue element is described by the following structure, copied directly from the header file <linux/tqueue.h>:

    struct tq_struct {
        struct tq_struct *next;   /* linked list of active bh's */
        int sync;                 /* must be initialized to zero */
        void (*routine)(void *);  /* function to call */
        void *data;               /* argument to function */
    };

The "bh" in the first comment means bottom half. A bottom half is "half of an interrupt handler"; we will discuss interrupts in the "Tasklets and Bottom Halves" section of Chapter 9. For now, all we need to know is that a bottom half is a mechanism provided by a device driver to handle asynchronous tasks which are usually quite large and which are not suited to being done inside the hardware interrupt itself. This chapter does not require you to understand bottom halves, but it will occasionally refer to them where necessary.

Translator's note: in the 2.4 kernel the first member of tq_struct has changed to

    struct list_head list;   /* linked list of active bh's */

because the generic doubly linked list list_head is used heavily throughout the kernel; in many places it replaces the self-maintained linked lists inside data structures. Correspondingly, the definition of task_queue has changed to

    typedef struct list_head task_queue;

The most important members of the structure shown above are routine and data. To queue a task for later execution, these members must be set first, and both next and sync must be cleared. The sync flag in the structure is used by the kernel to prevent the same task from being queued more than once, since that would corrupt the next pointer. Once a task has been queued, the data structure is considered "owned" by the kernel and must not be modified until the task starts running.

The other data structure involved in task queues is task_queue, which is currently implemented as a pointer to the tq_struct structure. Defining this pointer (struct tq_struct *) under another name (task_queue) leaves room for extension: should the need arise, task_queue could be expanded with other content.

task_queue pointers must be initialized to NULL before use.

The following summarizes all the operations that can be performed on task queues and tq_struct structures.

DECLARE_TASK_QUEUE(name);

This macro declares a task queue with the given name and initializes it to the empty state.

int queue_task(struct tq_struct *task, task_queue *list);

As its name suggests, this function queues a task. The return value is 0 if the task was already present on the given queue, nonzero otherwise.

void run_task_queue(task_queue *list);

The run_task_queue function runs the tasks accumulated on the queue. You do not need to call it yourself unless you declare and maintain your own queue.

Before getting into the details of using task queues, let's look at how they work inside the kernel.

6.4.2 How Task Queues Are Run

As mentioned earlier, a task queue is effectively a linked list of functions. When run_task_queue runs a queue, each entry in the list is executed. When writing functions that work with task queues, you have to keep in mind when the kernel will call run_task_queue, and the actual context at that moment, which restricts what the task can do. You should also not make any assumptions about the order in which the tasks in a queue run; each of them must complete its own job independently.

So when are task queues run? If you are using one of the predefined task queues described below, the answer is "whenever the kernel gets around to it." Different queues are run at different times, but they are always run when the kernel has no other more important work to do.

Most importantly, a task queue almost certainly does not run while the process that queued the task is running. Rather, the queues are executed asynchronously. Until now, everything our sample drivers have done has run in the context of a process executing a system call. But when a task queue runs, that process may be asleep, or running on a different processor, or may even have exited.

This asynchronous execution resembles what happens with hardware interrupts (which we discuss in detail in Chapter 9). In fact, task queues are often run as the result of a "software interrupt". When code runs in interrupt mode (at interrupt time), it is subject to a number of restrictions. We introduce these restrictions now; they will come up several more times later in the book. We will also repeat ourselves several times: the rules of interrupt mode must be followed, or the system will find itself in deep trouble. A number of actions can be performed only in process context. Whenever you are outside of process context (that is, in interrupt mode), you must observe the following rules:

- No access to user space is allowed. Because there is no process context, there is no path to the user space associated with any particular process.
- The current pointer is not meaningful in interrupt mode and cannot be used.
- No sleeping or scheduling may be performed. Interrupt-mode code may not call schedule or sleep_on, nor may it call any function that could sleep. For example, calling kmalloc(..., GFP_KERNEL) is against the rules. Semaphores also may not be used, because they can sleep.

Kernel code can check whether it is running in interrupt mode by calling the function in_interrupt(), which takes no parameters and returns a nonzero value if the processor is currently running at interrupt time.
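Code that can be entered from both process context and interrupt mode can use in_interrupt() to stay within the rules listed above. The helper below is only a sketch of that idea; the function name is invented:

    #include <linux/slab.h>
    #include <asm/hardirq.h>   /* in_interrupt() in 2.4-era kernels */

    /* Pick a safe allocation strategy: GFP_ATOMIC never sleeps, so it
     * is the only legal choice when in_interrupt() returns nonzero. */
    static void *example_alloc(size_t size)
    {
        return kmalloc(size, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
    }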

One other feature of the current implementation of task queues is that a task can requeue itself in the same queue it was run from. For instance, a task in the timer queue can reinsert itself into the timer queue as it runs, so that it runs again at the next timer tick. This is achieved by calling queue_task to put the task back on the queue. It works because, before processing a task queue, the kernel replaces the head pointer with a NULL pointer (that is, it reinitializes the task queue), and before executing each queued task, it first removes that task from the queue. Thus, when a task inserts itself, it is actually linking itself onto a new queue. The result is that a new queue builds up while the old one is being consumed.

Although rerunning the same task over and over might seem pointless, it is sometimes useful. For example, a stepper motor that must be moved one step at a time until it reaches its destination can be driven by having its task constantly requeue itself on the timer queue. Another example is the jiq module, which produces its output through such rescheduling: the result is many iterations driven by the timer queue.
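As a concrete sketch of this pattern (the names and the hardware access are invented), a stepper-motor task could requeue itself on tq_timer until all the requested steps have been taken:

    #include <linux/tqueue.h>

    static int steps_left;               /* set by the driver's write method */
    static struct tq_struct motor_task;  /* routine/data filled in at init time */

    static void motor_step(void *unused)
    {
        step_motor_once();   /* hypothetical helper that pulses the parallel port */
        if (--steps_left > 0)
            queue_task(&motor_task, &tq_timer);  /* run again at the next tick */
    }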

6.4.3 Predefined Task Queues

The easiest way to defer the execution of a task is to use the task queues already maintained by the kernel. There are several such queues, but a driver should use only the three described below. The task queues are defined in the header file <linux/tqueue.h>, which driver code needs to include.

Scheduler queue

The scheduler queue is unique among the predefined task queues in that it runs in process context, which means the tasks it runs can do more things. In Linux 2.4, this queue is managed by a dedicated kernel thread, keventd, and is accessed via the function schedule_task. In older kernel versions there was no keventd, and the queue (tq_scheduler) was manipulated directly.

tq_timer

This queue is run by the timer tick handler. Because the handler (see the function do_timer) runs at interrupt time, all tasks in this queue also run at interrupt time.

tq_immediate

The immediate queue is processed on return from a system call or when the scheduler runs, whichever comes first, so the queue runs as quickly as possible. It is processed at interrupt time.

Other predefined task queues exist as well, but driver development does not usually involve them.

The execution flow of a device driver using a task queue can be seen in Figure 6-1, which demonstrates how a device driver inserts a task into the tq_immediate queue from its interrupt handler. (Figure 6-1: how a task queue is used.)

How the sample programs work

The sample programs for deferred computation are part of the jiq ("Just In Queue") module, parts of whose source are extracted in this section. This module creates /proc files that can be read with dd or other tools, much like the jit module. A process reading a jiq file is put to sleep until the buffer is full.

The buffer of a /proc file is one memory page, or whatever size is appropriate for the platform.

The sleeping is handled through a simple wait queue, declared as

    DECLARE_WAIT_QUEUE_HEAD(jiq_wait);

The buffer is filled by successive runs of a task queue. Each pass of the task queue appends one line of text to the buffer being filled; each line records the current time (the jiffies value), the current process, and the return value of in_interrupt.

The code that fills the buffer is in the jiq_print_tq function, which is invoked at each run of the task queue. The printing function is uninteresting and is not listed here; instead, let's look at the initialization of the task to be inserted into a queue:

    struct tq_struct jiq_task;   /* global: initialized to zero */

    /* these lines are in jiq_init() */
    jiq_task.routine = jiq_print_tq;
    jiq_task.data = (void *)&jiq_data;

There is no need to clear the sync member of the jiq_task structure, because static variables are initialized to zero by the compiler.

Scheduler queue

The scheduler queue is, in a sense, the easiest task queue to use. Because tasks in this queue do not run at interrupt time, they can do more things; in particular, they can sleep. Many places in the kernel use this queue to accomplish a wide variety of tasks.

As of kernel 2.4.0-test11, the actual task queue implementing the scheduler queue is hidden from the rest of the kernel. Code using this queue must call schedule_task to put a task on the queue rather than using queue_task directly:

    int schedule_task(struct tq_struct *task);

task, of course, is the task to be scheduled. The return value comes straight from queue_task: nonzero if the task was not already on the queue.

Again, as of version 2.4.0-test11, a special process, keventd, exists whose sole job is to run the tasks on the scheduler queue. keventd provides a predictable process context for the tasks it runs, unlike the previous implementation, in which tasks ran in an essentially random process context.

There are a couple of points worth keeping in mind about the keventd implementation. First, tasks in this queue can sleep, and some kernel code takes advantage of that. Good code, however, should sleep only very briefly, because no other task in the scheduler queue will run while keventd is sleeping. It also helps to remember that your task shares the scheduler queue with other tasks, which can also sleep. In normal situations, tasks in the scheduler queue run very quickly (perhaps even before schedule_task returns). But if some other task sleeps, the elapsed time before your task executes can look significant. Tasks with strict latency requirements should therefore use another queue.

The /proc/jiqsched file is a sample file that uses the scheduler queue. Its read function puts a task on the queue in the following way:

    int jiq_read_sched(char *buf, char **start, off_t offset,
                       int len, int *eof, void *data)
    {
        jiq_data.len = 0;                  /* nothing printed, yet */
        jiq_data.buf = buf;                /* print in this place */
        jiq_data.jiffies = jiffies;        /* initial time */

        /* jiq_print will queue_task() again in jiq_data.queue */
        jiq_data.queue = SCHEDULER_QUEUE;

        schedule_task(&jiq_task);          /* ready to run */
        interruptible_sleep_on(&jiq_wait); /* sleep till completion */

        *eof = 1;
        return jiq_data.len;
    }

Reading /proc/jiqsched produces output like the following:

    time    delta interrupt  pid cpu command
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd
    601687    0       0        2   1 keventd

In this output, the time field is the jiffies value at the moment the task runs, delta is the increment of jiffies since the task last ran, interrupt is the output of the in_interrupt function, pid is the ID of the running process, cpu is the number of the CPU being used (always 0 on uniprocessor systems), and command is the command being run by the current process.

In this example we see that the task always runs in the keventd process, and that it runs very quickly: a task that keeps resubmitting itself to the scheduler queue can run hundreds or even thousands of times within a single timer tick. Even on a very heavily loaded system, the latency of the scheduler queue is quite small.

Timer queue

The timer queue is used differently from the scheduler queue, in that the queue itself (tq_timer) can be manipulated directly. Also, the timer queue is run at interrupt time. Furthermore, the queue is guaranteed to run at the next clock tick, which eliminates the latency that system load might otherwise introduce.

The sample code implements /proc/jiqtimer with the timer queue. For this queue it must use the queue_task function:

    int jiq_read_timer(char *buf, char **start, off_t offset,
                       int len, int *eof, void *data)
    {
        jiq_data.len = 0;             /* nothing printed, yet */
        jiq_data.buf = buf;           /* print in this place */
        jiq_data.jiffies = jiffies;   /* initial time */
        jiq_data.queue = &tq_timer;   /* reregister yourself here */

        queue_task(&jiq_task, &tq_timer);  /* ready to run */
        interruptible_sleep_on(&jiq_wait); /* sleep till completion */

        *eof = 1;
        return jiq_data.len;
    }

The following is the output of the command head /proc/jiqtimer on my system while it was compiling a new kernel:

    time     delta interrupt  pid cpu command
    45084845   1       1     8783  0 cc1
    45084846   1       1     8783  0 cc1
    45084847   1       1     8783  0 cc1
    45084848   1       1     8783  0 cc1
    45084849   1       1     8784  0 as
    45084850   1       1     8758  1 cc1
    45084851   1       1     8789  0 cpp
    45084852   1       1     8758  1 cc1
    45084853   1       1     8758  1 cc1
    45084854   1       1     8758  1 cc1
    45084855   1       1     8758  1 cc1

Note that, this time, exactly one timer tick elapses between each execution of the task, and the task may run in the context of any process.

The immediate queue

The last predefined queue usable by module code is the immediate queue. This queue runs via the bottom-half mechanism, so one additional step is required to use it: a bottom half runs only when the kernel has been told that it needs to run, and this is done by "marking" the bottom half. In the case of tq_immediate, you must call mark_bh(IMMEDIATE_BH). Note that mark_bh must be called after the task has been queued; otherwise, the kernel may start running the queue before the task has been added.
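The calling sequence, with the mark after the insertion, therefore looks like this minimal sketch (the task name is invented):

    queue_task(&my_task, &tq_immediate);  /* first, insert the task */
    mark_bh(IMMEDIATE_BH);                /* then tell the kernel to run the queue */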

The immediate queue is the fastest queue in the system: it is run soonest, and it runs at interrupt time. The queue is executed either by the scheduler or as soon as a process returns from a system call, whichever happens first. Typical output looks like this:

    time     delta interrupt  pid cpu command
    45129449   0       1     8883  0 head
    45129453   4       1        0  0 swapper
    45129453   0       1      601  0 X
    45129453   0       1      601  0 X
    45129453   0       1      601  0 X
    45129453   0       1      601  0 X
    45129454   1       1        0  0 swapper
    45129454   0       1      601  0 X
    45129454   0       1      601  0 X
    45129454   0       1      601  0 X
    45129454   0       1      601  0 X
    45129454   0       1      601  0 X

Clearly this queue cannot be used to delay the execution of a task; it is an "immediate" queue. Instead, its purpose is to have a task run as soon as possible, but at a "safe time". This makes it very useful for interrupt handling, because it offers an entry point for executing handler code outside the actual interrupt handler; the mechanism used to receive network packets, for example, works this way.

Be careful not to requeue a task on the immediate queue (even though /proc/jiqimmed does exactly this for demonstration purposes); the practice gains nothing, and it will lock the computer solid on some version/platform combinations, because some implementations rerun the queue until it is empty. This happens, for example, when running version 2.0 on the PC.

6.4.4 Running Your Own Task Queues

Declaring a new task queue is not difficult. A driver is free to declare one or more new task queues. These queues are used much like the predefined queues we discussed earlier.

Unlike the predefined queues, however, custom task queues are not automatically run by the kernel. The programmer who maintains a custom queue must also arrange for a way to run it.

The following macro declares a custom queue; it expands to a variable declaration, so it is best placed near the beginning of your file, outside of any function:

    DECLARE_TASK_QUEUE(tq_custom);

After declaring the queue, you can call the following function to queue a task. The call matches the macro above:

    queue_task(&custom_task, &tq_custom);

Then, when you want to run the accumulated tasks, execute the following line to run the tq_custom queue:

    run_task_queue(&tq_custom);

If you want to test your custom task queue now, you need to register a function that triggers the queue in one of the predefined queues. Although this may look like a detour, it is not. A custom task queue can be useful whenever you need to accumulate jobs and execute them all at the same time, even if you use another queue to decide when "the same time" is.
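The following sketch combines the three calls above, using the timer queue to decide when "the same time" is; all the names other than the queue functions themselves are invented:

    DECLARE_TASK_QUEUE(tq_custom);

    static struct tq_struct trigger_task;  /* routine = run_custom; queued
                                              once at init time to start */

    static void run_custom(void *unused)
    {
        run_task_queue(&tq_custom);            /* consume whatever accumulated */
        queue_task(&trigger_task, &tq_timer);  /* arrange to run again next tick */
    }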

6.4.5 Tasklets

Shortly before the release of the 2.4 kernel, developers added a new mechanism for deferring kernel tasks. This mechanism, called tasklets, is now the recommended way to implement bottom-half tasks; in fact, bottom halves themselves are now implemented with tasklets.

Tasklets resemble task queues in many ways. They are both ways of deferring a task to a safe time, and both run at interrupt time. Like a task queue entry, a tasklet runs only once even if it is scheduled multiple times, but a tasklet can run in parallel with other (different) tasklets on SMP systems. On SMP systems, a tasklet is also guaranteed to run on the CPU that first schedules it, which gives better cache behavior and therefore better performance. Each tasklet is associated with a function that is called when the tasklet is to run. That function takes a single argument of type unsigned long, which makes life easier for some kernel developers; it is, however, a source of grief for those who would rather pass a pointer. Converting a long argument to a pointer type is a safe operation on all supported platforms, and it is commonly done in memory management (as discussed in Chapter 13). The tasklet's function returns void and takes no other arguments.

Tasklets are defined in <linux/interrupt.h>, and a tasklet must be declared with one of the following:

    DECLARE_TASKLET(name, function, data);

Declares a tasklet with the given name; when the tasklet is executed (as described later), the given function is called with the argument (unsigned long)data.

    DECLARE_TASKLET_DISABLED(name, function, data);

Declares a tasklet as above, but its initial state is "disabled", meaning that it can be scheduled but will not be executed until it is "enabled" at some future time.

When the jiq sample driver is compiled against 2.4 headers, it implements /proc/jiqtasklet, which works like the other jiq entries but uses a tasklet; we did not emulate tasklets for older kernels in sysdep.h. The module defines its tasklet as:

    void jiq_print_tasklet(unsigned long);
    DECLARE_TASKLET(jiq_tasklet, jiq_print_tasklet, (unsigned long)&jiq_data);

When the driver wants to schedule a tasklet to run, it calls tasklet_schedule:

    tasklet_schedule(&jiq_tasklet);

Once a tasklet is scheduled, it is guaranteed to run once at a safe time (if it has been enabled). Tasklets may reschedule themselves, much like task queues. On a multiprocessor system, a tasklet need not worry about running simultaneously with itself on multiple processors, because the kernel takes steps to ensure that any given tasklet runs in only one place. If your driver implements multiple tasklets, however, two or more of them could run at the same time. In that case, spinlocks must be used to protect critical sections of code (semaphores, which can sleep, may not be used in tasklets, since tasklets run at interrupt time).
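For example, two tasklets that update a shared counter might protect it as in this sketch (the lock name, counter, and function name are invented):

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;  /* 2.4-style initializer */
    static int shared_count;

    static void first_tasklet_fn(unsigned long unused)
    {
        unsigned long flags;

        spin_lock_irqsave(&example_lock, flags);
        shared_count++;                 /* the protected critical section */
        spin_unlock_irqrestore(&example_lock, flags);
    }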

The output of /proc/jiqtasklet looks like this:

    time     delta interrupt  pid cpu command
    45472377   0       1     8904  0 head
    45472378   1       1        0  0 swapper
    45472379   1       1        0  0 swapper
    45472380   1       1        0  0 swapper
    45472383   3       1        0  0 swapper
    45472383   0       1      601  0 X
    45472383   0       1      601  0 X
    45472383   0       1      601  0 X
    45472383   0       1      601  0 X
    45472389   6       1        0  0 swapper

Note that the tasklet always runs on the same CPU, even though this output was taken from a dual-CPU system.

The tasklet subsystem provides a few other functions for advanced use of tasklets:

    void tasklet_disable(struct tasklet_struct *t);

This function disables the given tasklet. The tasklet may still be scheduled with tasklet_schedule, but its execution is deferred until it has been enabled again.

    void tasklet_enable(struct tasklet_struct *t);

Enables a tasklet that had previously been disabled. If the tasklet has already been scheduled, it will run soon (but not directly out of tasklet_enable).

    void tasklet_kill(struct tasklet_struct *t);

This function is used with tasklets that reschedule themselves indefinitely. tasklet_kill removes the given tasklet from any queue it is on. To avoid races with a tasklet that is busy rescheduling itself, the function waits until the tasklet executes, then pulls it from the queue. This way, you can be sure the tasklet is not interrupted in the middle of its work. However, if the target tasklet is not currently running and not rescheduling itself, tasklet_kill may hang. tasklet_kill may not be called at interrupt time.
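A minimal sketch of the situation tasklet_kill is meant for follows; the tasklet name and function are invented, and module cleanup is assumed to run in process context:

    #include <linux/interrupt.h>

    static void my_tasklet_fn(unsigned long data);
    DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

    static void my_tasklet_fn(unsigned long data)
    {
        /* ... do the periodic work ... */
        tasklet_schedule(&my_tasklet);  /* endlessly reschedule itself */
    }

    /* in cleanup_module(): wait out the current run, then dequeue it */
    /*   tasklet_kill(&my_tasklet);  */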

6.5 Kernel Timers

The ultimate timing resource in the kernel is the kernel timer. Timers are used to schedule a function (the timer handler) to run at a particular time in the future. Unlike task queues and tasklets, with a timer you can specify exactly when the function will be called, whereas you cannot tell exactly when a queued task will execute. On the other hand, kernel timers resemble task queues in that a registered handler is executed only once: timers are not cyclic.

There are times when an operation must be performed outside of any process's context, such as turning off the floppy motor or finishing another lengthy shutdown operation. In such cases, delaying the return from close would not serve the application, and using a task queue would be wasteful, because a queued task would have to reregister itself repeatedly until the requisite time has passed.

A timer is much more convenient here. You register the handler function once, and the kernel calls it once when the timer expires. Such a task is usually more appropriate for the kernel itself to carry out, but drivers occasionally need it too, as with the floppy motor.

Kernel timers are organized in a doubly linked list. This means you can add as many timers as you want. Each timer carries its timeout value (in jiffies) and the function to call at timeout. The timer handler receives one argument, which is stored in the data structure together with the pointer to the handler function itself.

The data structure of a timer is as follows, taken from the header file <linux/timer.h>:

    struct timer_list {
        struct timer_list *next;          /* never touch this */
        struct timer_list *prev;          /* never touch this */
        unsigned long expires;            /* the timeout, in jiffies */
        unsigned long data;               /* argument to the handler */
        void (*function)(unsigned long);  /* handler of the timeout */
        volatile int running;             /* added in 2.4; don't touch */
    };

The timeout of a timer is a value in jiffies: the timer->function function runs when the jiffies value matches timer->expires. The timeout is thus an absolute value; it is usually computed by taking the current value of jiffies and adding the required delay.

Once a timer_list structure is initialized, add_timer inserts it into a sorted list, which is then polled about 100 times per second. Even systems (such as the Alpha) that use a higher clock interrupt frequency do not check the timer list more frequently; if the timer resolution were increased, the cost of traversing the list would increase accordingly.

The following functions are used to operate on timers:

    void init_timer(struct timer_list *timer);

This inline function initializes the timer structure. Currently it simply zeroes the prev and next pointers (and, on SMP systems, the running flag). Programmers are strongly urged to use this function to initialize a timer, and never to modify the pointers inside the structure, in order to ensure forward compatibility.

    void add_timer(struct timer_list *timer);

This function inserts the timer into the global list of active timers.

    int mod_timer(struct timer_list *timer, unsigned long expires);

Call this function to change the expiration time of a timer; after the call, the timer uses the new expires value.

    int del_timer(struct timer_list *timer);

If a timer needs to be removed from the list before it expires, call the del_timer function. When a timer expires, on the other hand, it is removed from the list automatically.

    int del_timer_sync(struct timer_list *timer);

This function works like del_timer, but it also guarantees that, when it returns, the timer function is not running on any CPU. del_timer_sync is used to avoid races when a timer function might run at unexpected times, and it should be used in most situations. The caller of del_timer_sync must ensure that the timer function will not use add_timer to add itself again.
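Pulling these calls together, the "switch the floppy motor off later" idea mentioned at the start of this section could be sketched as follows; the function names, the port argument, and the command values are all invented:

    #include <linux/timer.h>
    #include <asm/io.h>

    static struct timer_list motor_timer;

    static void motor_off(unsigned long port)
    {
        outb(0, (unsigned int)port);   /* hypothetical "motor off" write */
    }

    static void example_start_motor(unsigned int port)
    {
        outb(1, port);                           /* hypothetical "motor on" write */
        init_timer(&motor_timer);
        motor_timer.function = motor_off;
        motor_timer.data = port;
        motor_timer.expires = jiffies + HZ / 2;  /* stop half a second from now */
        add_timer(&motor_timer);
    }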

An example of timer usage can be seen in the jiq sample module. The file /proc/jitimer uses a timer to generate two lines of data, using the same printing function as the task queue examples earlier. The first line is generated from the read call (issued by the user process looking at /proc/jitimer), while the second line is printed by the timer function one second later.

The code for /proc/jitimer is as follows:

    struct timer_list jiq_timer;

    void jiq_timedout(unsigned long ptr)
    {
        jiq_print((void *)ptr);           /* print a line */
        wake_up_interruptible(&jiq_wait); /* awaken the process */
    }

    int jiq_read_run_timer(char *buf, char **start, off_t offset,
                           int len, int *eof, void *data)
    {
        jiq_data.len = 0;            /* prepare the argument for jiq_print() */
        jiq_data.buf = buf;
        jiq_data.jiffies = jiffies;
        jiq_data.queue = NULL;       /* don't requeue */

        init_timer(&jiq_timer);      /* init the timer structure */
        jiq_timer.function = jiq_timedout;
        jiq_timer.data = (unsigned long)&jiq_data;
        jiq_timer.expires = jiffies + HZ;   /* one second */

        jiq_print(&jiq_data);        /* print and go to sleep */
        add_timer(&jiq_timer);
        interruptible_sleep_on(&jiq_wait);
        del_timer_sync(&jiq_timer);  /* in case a signal woke us up */

        *eof = 1;
        return jiq_data.len;
    }

Running the command head /proc/jitimer gives the following output:

    time     delta interrupt  pid cpu command
    45584582   0       0     8920  0 head
    45584682 100       1        0  1 swapper

From this output it can be seen that the timer function that printed the last line was running in interrupt mode.

It may seem strange that the timer always expires at the right time, even when the processor is executing a system call. We mentioned earlier that a process running in kernel space is not scheduled away; the clock interrupt, however, is a special case: it has nothing to do with the current process and independently carries out its own task. You can try reading /proc/jitbusy in the background and /proc/jitimer in the foreground; although the system appears to be locked solid by the busy wait, both the timer queue and the kernel timers continue to be processed.

The timer is therefore another potential source of race conditions, even on uniprocessor systems. Any data structure accessed by the timer function should be protected against concurrent access, either by using atomic types (Chapter 10) or by using spinlocks.

Care must also be taken to avoid crashes when deleting a timer. Consider this situation: a module's timer function is running on one processor when a related event occurs on another (the file is closed, or the module is removed). The result could be the timer function waiting for a state that will never come about, crashing the system. To avoid this kind of race, your module should use del_timer_sync instead of del_timer. If the timer function can restart its own timer (a common pattern), you should also add a "stop timer" flag and set it before calling del_timer_sync. The timer function can then check the flag whenever it executes and refrain from rescheduling itself with add_timer if the flag has been set. Another race arises when modifying a timer by first deleting it and then using add_timer to add a modified version back; in this situation, mod_timer is the better way to do it.
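A sketch of that "stop timer" flag, for a polling timer that normally re-arms itself, follows; all the names are invented:

    static volatile int shutting_down;  /* the "stop timer" flag */
    static struct timer_list poll_timer;

    static void poll_fn(unsigned long data)
    {
        /* ... poll the hardware ... */
        if (!shutting_down)
            mod_timer(&poll_timer, jiffies + HZ);  /* re-arm for one second later */
    }

    static void example_cleanup(void)
    {
        shutting_down = 1;            /* set the flag first... */
        del_timer_sync(&poll_timer);  /* ...then wait out any running handler */
    }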

6.6 Backward compatibility

Task queues and the timing mechanisms have been relatively stable over the years. Nonetheless, some noteworthy improvements have been made.

The functions sleep_on_timeout, interruptible_sleep_on_timeout, and schedule_timeout were all added in version 2.2. In the 2.0 days, timeouts were handled with a variable in the task structure. As a comparison, where current code makes a call like:

    interruptible_sleep_on_timeout(my_queue, timeout);

code used to be written like this:

    current->timeout = jiffies + timeout;
    interruptible_sleep_on(my_queue);

The header file sysdep.h reimplements the 2.4 semantics of schedule_timeout, so you can use the new syntax and have the code work properly under the 2.0 and 2.2 kernels:

    extern inline void schedule_timeout(int timeout)
    {
        current->timeout = jiffies + timeout;
        current->state = TASK_INTERRUPTIBLE;
        schedule();
        current->timeout = 0;
    }

Version 2.0 also had a couple of additional functions for putting functions into task queues. queue_task_irq could be used instead of queue_task when interrupts were disabled, gaining a little in performance. queue_task_irq_off was faster still, but failed when the task was already queued or was running, so it could only be used where those situations were guaranteed not to occur. Neither of the two ever brought a significant performance benefit, and they were removed as of kernel 2.1.30. In any case, using queue_task works with all kernel versions. (Note, though, that in 2.2 and earlier kernels the return type of queue_task was void.)

Before the 2.4 kernel there was no schedule_task function and no keventd process; instead, another predefined task queue, tq_scheduler, was used. Tasks in the tq_scheduler queue were executed from inside the schedule function, so they always ran in process context. The process "providing" the context, however, was always different: it was whatever process was being scheduled onto the CPU at the time. tq_scheduler usually had fairly large latencies, especially for tasks that resubmitted themselves. sysdep.h provides the following implementation of schedule_task on 2.0 and 2.2 systems:

    extern inline int schedule_task(struct tq_struct *task)
    {
        queue_task(task, &tq_scheduler);
        return 1;
    }

As mentioned earlier, the tasklet mechanism was added during the 2.3 development series. Before then, only task queues were available for "immediately deferred" execution. The bottom-half subsystem also changed, although most of the changes are transparent to driver developers. sysdep.h does not emulate tasklets on older kernels, since they are not strictly needed for driver operation; if you want to remain backward portable, either write your own emulation or use task queues instead.

Linux 2.0 had no in_interrupt function; instead, a global variable, intr_count, recorded the number of interrupt handlers currently running. Querying intr_count is semantically the same as calling in_interrupt, so compatibility is easily maintained in sysdep.h.

The function del_timer_sync was not introduced until kernel 2.4.0-test2. sysdep.h provides a substitute so that code can be compiled with older kernel headers. The 2.0 kernel also lacked mod_timer; that gap, too, is filled by the compatibility header.

6.7 Quick Reference

This chapter introduced the following symbols:

#include <linux/param.h>

HZ

The HZ symbol specifies the number of clock ticks generated per second.

#include <linux/sched.h>

volatile unsigned long jiffies

The jiffies variable is incremented once for each clock tick, so it increases HZ times per second.

#include <asm/msr.h>

rdtsc(low, high);

rdtscl(low);

Read the timestamp counter or its lower half. The header file and the macros are specific to PC-class processors; other platforms may need assembly statements to achieve similar results.

extern struct timeval xtime;

The current time, as calculated at the most recent timer tick.

#include <linux/time.h>

void do_gettimeofday(struct timeval *tv);

void get_fast_time(struct timeval *tv);

These functions return the current time. The former has very high resolution; the latter is faster but has lower resolution.

#include <linux/delay.h>

void udelay(unsigned long usecs);

void mdelay(unsigned long msecs);

These functions introduce delays of an integer number of microseconds or milliseconds. The former should be used for delays no longer than one millisecond; the latter should be used with extreme care, because both are busy-waiting loops.

int in_interrupt();

Returns a nonzero value if the processor is currently running in interrupt mode.

#include <linux/tqueue.h>

DECLARE_TASK_QUEUE(variablename);

This macro declares a new task queue variable and initializes it.

void queue_task(struct tq_struct *task, task_queue *list);

This function registers a task for later execution.

void run_task_queue(task_queue *list);

This function runs the task queue.

task_queue tq_immediate, tq_timer;

These predefined task queues are executed as soon as possible (before the kernel schedules a new process) for tq_immediate, or after each clock tick for tq_timer.

int schedule_task(struct tq_struct *task);

Schedules a task to be run on the scheduler queue.

#include <linux/interrupt.h>

DECLARE_TASKLET(name, function, data)

DECLARE_TASKLET_DISABLED(name, function, data)

Declare a tasklet structure that, when run, will call the given function with the given argument (an unsigned long value). The second form initializes the tasklet to a disabled state, keeping it from running until it is explicitly enabled.

void tasklet_schedule(struct tasklet_struct *tasklet);

Schedules the given tasklet to run. If the tasklet is not disabled, it will be executed shortly on the CPU that called tasklet_schedule.

tasklet_enable(struct tasklet_struct *tasklet);

tasklet_disable(struct tasklet_struct *tasklet);

These functions enable and disable the given tasklet. A disabled tasklet can be scheduled, but it does not run until it has been enabled again.

void tasklet_kill(struct tasklet_struct *tasklet);

Stops a tasklet that reschedules itself endlessly. This function can block and may not be called at interrupt time.

#include <linux/timer.h>

void init_timer(struct timer_list *timer);

This function initializes a newly allocated timer structure.

void add_timer(struct timer_list *timer);

This function inserts the timer into the global list of pending timers.

int mod_timer(struct timer_list *timer, unsigned long expires);

This function changes the expiration time of an already scheduled timer structure.

int del_timer(struct timer_list *timer);

del_timer removes a timer from the list of pending timers. If the timer was actually queued, del_timer returns 1; otherwise, it returns 0.

int del_timer_sync(struct timer_list *timer);

This function is similar to del_timer, but also guarantees that the timer function is not currently running on any other CPU.


Reprinted from: https://www.9cbs.com/read-112470.html
