1. Soft Interrupt Overview
A soft interrupt takes the concept of the hardware interrupt and simulates it in software, achieving asynchronous execution at the macro level. In many respects soft interrupts resemble "signals", while soft interrupts correspond to hard interrupts: a hard interrupt is an interrupt raised by an external device to the CPU; a soft interrupt is usually an "interrupt" raised by a hard interrupt service routine to the kernel; and a signal is an interrupt raised by the kernel (or another process) to a process ("Linux Kernel Source Code Analysis", Chapter 3). A typical application of soft interrupts is the so-called "Bottom Half", whose name comes from splitting hardware interrupt handling into a "top half" and a "bottom half": the top half runs in a context with interrupts masked and completes the time-critical work, while the bottom half is less urgent but often time-consuming, so the system schedules it to run outside the interrupt service context. The Bottom Half is also what motivated the kernel to develop the current soft interrupt mechanism, so we start with the implementation of the Bottom Half.
2. Bottom Half
In the Linux kernel, the Bottom Half is usually written "BH". It was originally used to run, at a lower priority, the non-critical, time-consuming part of an interrupt service; it is now also used for asynchronous actions in low-priority contexts. The earliest Bottom Half implementation borrowed the idea of an interrupt vector table, and can still be seen in the current 2.4.x kernel:
static void (*bh_base[32])(void); /* kernel/softirq.c */
The system defines an array of 32 function pointers, accessed by index number, with a corresponding set of operations:
void init_bh(int nr, void (*routine)(void));
Assigns routine to the nr-th function pointer.
void remove_bh(int nr);
The opposite of init_bh(): clears the nr-th function pointer.
void mark_bh(int nr);
Marks the nr-th Bottom Half as pending execution.
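Put together, a driver's use of this old interface looks roughly like the sketch below. This is a minimal sketch, not code from any real driver: my_bh_handler, my_interrupt and the choice of the SERIAL_BH slot are all illustrative assumptions.

#include <linux/interrupt.h>

static void my_bh_handler(void)
{
    /* the non-critical, time-consuming part of the interrupt service */
}

static int my_driver_init(void)
{
    init_bh(SERIAL_BH, my_bh_handler);   /* hypothetical: hook the handler into bh_base */
    return 0;
}

static void my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
    /* ... urgent top-half work, with interrupts masked ... */
    mark_bh(SERIAL_BH);                  /* request that my_bh_handler run later */
}

static void my_driver_cleanup(void)
{
    remove_bh(SERIAL_BH);                /* unhook the handler at cleanup */
}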
For historical reasons, the function pointer slots of bh_base each have a predefined meaning; the v2.4.2 kernel contains this enumeration:
enum {
    TIMER_BH = 0,
    TQUEUE_BH,
    DIGI_BH,
    SERIAL_BH,
    RISCOM8_BH,
    SPECIALIX_BH,
    AURORA_BH,
    ESP_BH,
    SCSI_BH,
    IMMEDIATE_BH,
    CYCLADES_BH,
    CM206_BH,
    JS_BH,
    MACSERIAL_BH,
    ISICOM_BH
};
Drivers are expected to use their agreed-upon Bottom Half slot, such as SERIAL_BH. The slots most used nowadays are TIMER_BH, TQUEUE_BH and IMMEDIATE_BH, but their semantics differ greatly from the original, because the use of the whole Bottom Half mechanism has changed: the three remain compatible only at the interface level, while the implementation underneath has kept changing along with the kernel's soft interrupt mechanism. Now, in the 2.4.x kernel, it is implemented with the tasklet mechanism.
3. Task Queue
Before introducing the tasklet, it is necessary to look at the earlier Task Queue mechanism. The original Bottom Half mechanism clearly has several severe limitations. The most important is that the number of slots is limited to 32; as the applications of soft interrupts grew, this number was plainly insufficient, and the fact that each Bottom Half can hook only one function is also too restrictive. Therefore the 2.0.x kernel extended it with the Task Queue mechanism, which is still used here. The Task Queue is built on the kernel's generic linked-list data structure; the following is its data structure, defined in include/linux/tqueue.h:
struct tq_struct {
    struct list_head list;   /* linked-list structure */
    unsigned long sync;      /* initially 0; atomically set to 1 on enqueue to avoid double insertion */
    void (*routine)(void *); /* function called upon activation */
    void *data;              /* routine(data) */
};
typedef struct list_head task_queue;
To use it, follow these steps:
DECLARE_TASK_QUEUE(my_tqueue); /* define my_tqueue, which is actually a list_head queue whose elements are tq_struct */
Define a tq_struct variable my_task;
queue_task(&my_task, &my_tqueue); /* register my_task into my_tqueue */
run_task_queue(&my_tqueue); /* at an appropriate time, launch my_tqueue */
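Assembled into compilable form, the steps look roughly as follows. This is a sketch; my_routine and the wrapper functions are illustrative assumptions:

#include <linux/tqueue.h>

static DECLARE_TASK_QUEUE(my_tqueue);       /* step 1: the queue itself */

static void my_routine(void *data)          /* hypothetical deferred action */
{
    /* consume data here */
}

static struct tq_struct my_task = {         /* step 2: the task descriptor */
    routine: my_routine,                    /* 2.4-era GNU designated-initializer style */
    data:    NULL,
};

static void my_defer(void)                  /* step 3: called whenever work arises */
{
    queue_task(&my_task, &my_tqueue);       /* sync guards against double insertion */
}

static void my_flush(void)                  /* step 4: called at a suitable time */
{
    run_task_queue(&my_tqueue);             /* runs and empties the queue */
}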
In most cases there is no need to call DECLARE_TASK_QUEUE() to define your own task queue, because the system predefines three task queues:
tq_timer, launched by the clock interrupt service routine;
tq_immediate, launched before interrupt return and in the schedule() function;
tq_disk, used internally by the memory management module.
Most asynchronous tasks can be handled with tq_immediate.
The run_task_queue(task_queue *list) function can be used to launch all tasks hooked in list. It can be called manually, or it can be hooked into the Bottom Half vector table mentioned above: using run_task_queue() as the function pointer in bh_base[nr] effectively expands the number of function handles each Bottom Half can carry. The predefined tq_timer and tq_immediate are indeed hooked onto TQUEUE_BH and IMMEDIATE_BH in this way (note that TIMER_BH is not used like this, although tq_timer is likewise launched in do_timer()), thereby multiplying the capacity of the Bottom Half slots. In this case there is no need to call run_task_queue() manually (which would not be appropriate anyway); simply call mark_bh(IMMEDIATE_BH) and let the Bottom Half mechanism schedule it at the proper time.
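In code, deferring work through this combination takes just two calls. A minimal sketch, assuming my_task is a tq_struct prepared as in the earlier example:

static void my_defer_via_bh(void)
{
    queue_task(&my_task, &tq_immediate);  /* hook the task onto the predefined queue */
    mark_bh(IMMEDIATE_BH);                /* let the BH machinery run it at the next opportunity */
}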
4. Tasklet
As seen above, the Task Queue is based on the Bottom Half; the Bottom Half, in turn, is based on the tasklet newly introduced in v2.4.x. The tasklet was introduced mainly to better support SMP and improve the utilization of multiple CPUs: different tasklets can run on different CPUs at the same time. The comments in its source code also emphasize one property that distinguishes it: a given tasklet will only ever run on one CPU at a time.
struct tasklet_struct
{
    struct tasklet_struct *next; /* queue pointer */
    unsigned long state;         /* tasklet state, manipulated bitwise; two bits are currently defined:
                                    TASKLET_STATE_SCHED (bit 0) and TASKLET_STATE_RUN (bit 1) */
    atomic_t count;              /* reference count; nonzero means disabled */
    void (*func)(unsigned long); /* function pointer */
    unsigned long data;          /* func(data) */
};
Comparing this structure with tq_struct, we can see that the tasklet extends it only slightly, mainly with the state attribute, which is used for synchronization between CPUs. Using a tasklet is quite simple:
Define a handler void my_tasklet_func(unsigned long);
DECLARE_TASKLET(my_tasklet, my_tasklet_func, data); /* define a tasklet structure my_tasklet, associated with the my_tasklet_func(data) function; comparable to DECLARE_TASK_QUEUE() */
tasklet_schedule(&my_tasklet); /* register my_tasklet, letting the system schedule it to run at an appropriate time; equivalent to queue_task(&my_task, &tq_immediate) plus mark_bh(IMMEDIATE_BH) */
It is apparent that the tasklet is both simpler to use than the Task Queue and better suited to SMP; therefore, in the newer 2.4.x kernels, the tasklet is the recommended mechanism for executing asynchronous tasks. Besides the calls above, the tasklet mechanism provides another set of interfaces:
DECLARE_TASKLET_DISABLED(name, function, data); /* like DECLARE_TASKLET(), but even if scheduled it will not run immediately; it must wait until enabled */
tasklet_enable(struct tasklet_struct *); /* enable the tasklet */
tasklet_disable(struct tasklet_struct *); /* disable the tasklet; as long as it has not yet run, its execution is postponed until it is enabled */
tasklet_init(struct tasklet_struct *, void (*func)(unsigned long), unsigned long); /* similar to DECLARE_TASKLET() */
tasklet_kill(struct tasklet_struct *); /* clear the schedulable bit of the given tasklet, i.e. forbid scheduling it, but do no cleanup of the tasklet itself */
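For instance, a tasklet that must not run before its device is fully initialized can be declared disabled and enabled afterwards. A small sketch; the foo_* names are illustrative assumptions:

static void foo_action(unsigned long data);
static DECLARE_TASKLET_DISABLED(foo_tasklet, foo_action, 0);  /* count starts nonzero: disabled */

static void foo_irq_event(void)
{
    tasklet_schedule(&foo_tasklet);  /* may be scheduled now, but will not run yet */
}

static void foo_setup_done(void)
{
    tasklet_enable(&foo_tasklet);    /* a pending schedule can now actually execute */
}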
As mentioned earlier, in the 2.4.x kernel the Bottom Half is implemented on top of the tasklet mechanism; what sets it apart from the tasklets we use directly is that all Bottom Half actions remain serialized with respect to one another. In 2.4.x, the system defines two vectors of tasklet queues, one element per CPU (NR_CPUS is the maximum number of CPUs the system supports, 32 in the current 2.4.2 SMP configuration), each heading a chain of tasklets:
struct tasklet_head tasklet_vec[NR_CPUS] __cacheline_aligned;
struct tasklet_head tasklet_hi_vec[NR_CPUS] __cacheline_aligned;
In addition, for the 32 Bottom Halves, the system defines 32 tasklet structures:
struct tasklet_struct bh_task_vec[32];
When the soft interrupt subsystem is initialized, the action of each tasklet in this array is set to bh_action(nr); bh_action(nr) then calls the function pointer in bh_base[nr], hooking in the Bottom Half semantics. mark_bh(nr) is implemented as a call to tasklet_hi_schedule(bh_task_vec + nr); inside that function, bh_task_vec[nr] is hooked onto the tasklet_hi_vec[cpu] chain (where cpu is the current CPU number, i.e. the Bottom Half request executes on whichever CPU raised it), and then the HI_SOFTIRQ soft interrupt signal is raised, so the action starts running in the interrupt response of HI_SOFTIRQ.
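Slightly condensed from the 2.4.x sources (kernel/softirq.c and include/linux/interrupt.h), the two pieces just described look roughly like this:

/* at soft interrupt subsystem initialization */
void __init softirq_init(void)
{
    int i;

    for (i = 0; i < 32; i++)
        tasklet_init(bh_task_vec + i, bh_action, i);  /* every BH tasklet runs bh_action(nr) */

    open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
    open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL);
}

/* marking a Bottom Half just schedules its tasklet on the high-priority chain */
static inline void mark_bh(int nr)
{
    tasklet_hi_schedule(bh_task_vec + nr);
}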
tasklet_schedule(&my_tasklet), by contrast, hooks my_tasklet onto the tasklet_vec[cpu] chain and raises TASKLET_SOFTIRQ, so it executes in the interrupt response of TASKLET_SOFTIRQ. HI_SOFTIRQ and TASKLET_SOFTIRQ are terms from the softirq subsystem, which the next section introduces.
5. Softirq
As can be seen from the discussion so far, the Task Queue is based on the Bottom Half, the Bottom Half is based on the tasklet, and the tasklet in turn is based on the softirq.
It can be said that the softirq carries on the original Bottom Half idea, but on top of this "Bottom Half" mechanism a larger and more complex soft interrupt subsystem has been built.
struct softirq_action
{
    void (*action)(struct softirq_action *);
    void *data;
};
static struct softirq_action softirq_vec[32] __cacheline_aligned;
This softirq_vec[] differs from bh_base[] only in that the action() functions take a parameter; in execution, the softirq is subject to fewer restrictions than the Bottom Half. As with the Bottom Half, the system predefines several softirq_vec[] entries, represented by the following enumeration:
enum
{
    HI_SOFTIRQ = 0,
    NET_TX_SOFTIRQ,
    NET_RX_SOFTIRQ,
    TASKLET_SOFTIRQ
};
HI_SOFTIRQ is used to implement the Bottom Half, TASKLET_SOFTIRQ serves the general-purpose tasklets, and NET_TX_SOFTIRQ and NET_RX_SOFTIRQ serve the network subsystem's packet transmission and reception. When the soft interrupt subsystem is initialized (softirq_init()), it calls open_softirq() to initialize HI_SOFTIRQ and TASKLET_SOFTIRQ:
void open_softirq(int nr, void (*action)(struct softirq_action *), void *data);
open_softirq() populates softirq_vec[nr], setting action and data to the passed-in parameters. TASKLET_SOFTIRQ is filled with tasklet_action(NULL) and HI_SOFTIRQ with tasklet_hi_action(NULL); it is inside do_softirq() that these two functions are invoked, launching the tasklets chained on tasklet_vec[cpu] and tasklet_hi_vec[cpu] respectively.
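open_softirq() itself is small; condensed from kernel/softirq.c, it amounts to the following (the real function additionally sets the per-CPU softirq mask bit under a spinlock):

void open_softirq(int nr, void (*action)(struct softirq_action *), void *data)
{
    softirq_vec[nr].data = data;      /* remembered, and later handed back via the structure */
    softirq_vec[nr].action = action;  /* called by do_softirq() when the softirq is active */
}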
static inline void __cpu_raise_softirq(int cpu, int nr);
This function is used to activate a soft interrupt: it actually sets to 1 the active bit of soft interrupt number nr on CPU number cpu. This active bit is tested in do_softirq(). tasklet_schedule() and tasklet_hi_schedule() both call this function.
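Condensed from include/linux/interrupt.h, raising a softirq is just one bit operation on the per-CPU active word:

static inline void __cpu_raise_softirq(int cpu, int nr)
{
    softirq_active(cpu) |= (1 << nr);  /* mark softirq nr pending on this CPU */
}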
do_softirq() runs at four points, namely: on return from a system call (arch/i386/kernel/entry.S::ret_from_sys_call), on return from an exception (arch/i386/kernel/entry.S::ret_from_exception), in the scheduler (kernel/sched.c::schedule()), and after a hardware interrupt has been handled (kernel/irq.c::do_IRQ()). It traverses all of softirq_vec and launches the active action()s in sequence. Note that a soft interrupt service routine may neither run inside a hard interrupt service routine nor nest inside another soft interrupt service routine, but multiple soft interrupt service routines are allowed to run concurrently on multiple CPUs.
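Heavily simplified, the core of do_softirq() (kernel/softirq.c) works as sketched below; the real function additionally loops back when new softirqs become active while the current batch is being handled:

asmlinkage void do_softirq(void)
{
    int cpu = smp_processor_id();
    __u32 active;
    struct softirq_action *h;

    if (in_interrupt())
        return;                           /* enforce: no nesting in interrupt context */

    local_bh_disable();
    local_irq_disable();
    active = softirq_active(cpu) & softirq_mask(cpu);
    if (active) {
        softirq_active(cpu) &= ~active;   /* snapshot and clear the pending bits */
        local_irq_enable();

        h = softirq_vec;
        do {
            if (active & 1)
                h->action(h);             /* run each pending action in turn */
            h++;
            active >>= 1;
        } while (active);

        local_irq_disable();
        /* the real code re-checks softirq_active(cpu) here and restarts if needed */
    }
    local_irq_enable();
    local_bh_enable();
}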
6. Usage Examples
As the underlying mechanism, the softirq itself is rarely used directly by kernel programmers, so the examples here cover only the other soft interrupt mechanisms.
1. Bottom Half
The original Bottom Half usage can still be seen in drivers/char/serial.c, comprising three steps:
init_bh(SERIAL_BH, do_serial_bh);  // in the serial device's initialization function rs_init(); do_serial_bh() is the handler
mark_bh(SERIAL_BH);                // in rs_sched_event(), which is called by the interrupt handler
remove_bh(SERIAL_BH);              // called in the serial device's cleanup function rs_fini()
Although logically still three steps, all the do_serial_bh() handler actually does is launch a task queue, run_task_queue(&tq_serial), and what rs_sched_event() does is queue_task(..., &tq_serial); in other words, the serial port's Bottom Half is already used in combination with the Task Queue. The more general-purpose Bottom Halves such as IMMEDIATE_BH are used with task queues even more; and in general the Task Queue seldom works independently either, but rather in combination with the Bottom Half, as can be seen clearly in the Task Queue usage example of the next subsection.

2. Task Queue
In general, programmers rarely define their own task queues; instead they use the system-predefined queues such as tq_immediate in combination with the Bottom Half, with tq_immediate used most frequently. Look at the following code segment, excerpted from drivers/block/floppy.c:
static struct tq_struct floppy_tq;
// define a tq_struct variable floppy_tq; no other initialization is needed

static void schedule_bh(void (*handler)(void *))
{
    floppy_tq.routine = (void *)(void *)handler;
    // set floppy_tq's callback to handler; the other fields of floppy_tq need no attention
    queue_task(&floppy_tq, &tq_immediate);
    // add floppy_tq to tq_immediate
    mark_bh(IMMEDIATE_BH);
    // activate IMMEDIATE_BH; as described above, this actually raises a soft interrupt
    // that executes all the functions hooked in tq_immediate
}
Of course, we can also define and use our own task queue rather than tq_immediate; the tq_serial mentioned in drivers/char/serial.c is one the serial driver defines itself:
static DECLARE_TASK_QUEUE(tq_serial);
In that case you need to call run_task_queue(&tq_serial) yourself to launch the queued functions, so this approach is used less often.
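In serial.c, that launch site is the driver's own Bottom Half handler; do_serial_bh() boils down to:

static void do_serial_bh(void)
{
    run_task_queue(&tq_serial);  /* run everything the ISR queued via queue_task() */
}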
3. Tasklet
This is a soft interrupt mechanism more powerful than both the Task Queue and the Bottom Half, and it is relatively simple to use. See the code segment below:
 1: void foo_tasklet_action(unsigned long t);
 2: unsigned long stop_tasklet;
 3: DECLARE_TASKLET(foo_tasklet, foo_tasklet_action, 0);
 4: void foo_tasklet_action(unsigned long t)
 5: {
 6:     /* do something */
 7:
 8:     /* reschedule */
 9:     if (!stop_tasklet)
10:         tasklet_schedule(&foo_tasklet);
11: }
12: void foo_init(void)
13: {
14:     stop_tasklet = 0;
15:     tasklet_schedule(&foo_tasklet);
16: }
17: void foo_clean(void)
18: {
19:     stop_tasklet = 1;
20:     tasklet_kill(&foo_tasklet);
21: }
This relatively complete code segment uses a self-rescheduling tasklet to carry out recurring work. Line 3 defines foo_tasklet, associates it with the action function foo_tasklet_action(), and specifies 0 as the argument passed to foo_tasklet_action(). Although 0 is used as the parameter here, other values can be specified as well; note, however, that at definition time this parameter must be a constant or constant expression. If a variable is needed, define a global variable and pass its address as the parameter to foo_tasklet_action(), for example:
int flags;
DECLARE_TASKLET(foo_tasklet, foo_tasklet_action, (unsigned long)&flags);
void foo_tasklet_action(unsigned long t)
{
    int flags = *(int *)t;
    ...
}
In this way, information can be passed into the tasklet by changing the value of flags. Writing flags directly into DECLARE_TASKLET() would make gcc report an "initializer element is not constant" error.
Lines 9 and 10 are the rescheduling technique. We know that once a tasklet finishes executing, it is deleted from the execution queue; if it is to run again, tasklet_schedule() must be called anew. The call can be made when some event occurs, or, as here, inside the tasklet's own action. This rescheduling technique causes the tasklet to run forever, so there must be a way to stop it on exit; the stop_tasklet variable and tasklet_kill() do exactly that.
References:
"Linux Kernel Source Code Analysis", Mao Decao and Hu Ximing, Zhejiang University Press, September 2001
"Linux Kernel 2.4 Source Code Analysis Daquan", Li Shanping et al., China Machine Press, January 2002
"Linux Device Drivers", Alessandro Rubini & Jonathan Corbet, O'Reilly, August 2001