Memory management (continued) and modularization with concurrency (reading notes)

xiaoxiao 2021-03-06

Blocking and non-blocking memory functions: ideally, a well-designed memory allocation function should let the caller block forever, block for a limited time, or not block at all.

A blocking memory allocation function can be implemented with a counting semaphore and a mutex lock (MUTEX LOCK). An allocation request must first successfully acquire the counting semaphore, and then acquire the mutex.
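The semaphore-plus-mutex scheme can be sketched as a fixed-size block pool. This is a minimal illustration, not the notes' actual implementation: POSIX primitives stand in for RTOS calls, and all names (`pool_alloc`, `pool_free`, the pool sizes) are invented for the example.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define POOL_BLOCKS 8
#define BLOCK_SIZE  32

/* Fixed-size block pool guarded by a counting semaphore (counts free
 * blocks) and a mutex (protects the free list itself). */
static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static void *free_list[POOL_BLOCKS];
static int free_top;                     /* number of free blocks on the stack */
static sem_t free_count;                 /* counting semaphore: # free blocks  */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

void pool_init(void) {
    for (int i = 0; i < POOL_BLOCKS; i++)
        free_list[i] = pool[i];
    free_top = POOL_BLOCKS;
    sem_init(&free_count, 0, POOL_BLOCKS);
}

/* Blocking allocate: first acquire the counting semaphore, then the
 * mutex, exactly in the order the text describes.  sem_trywait() would
 * give the non-blocking variant, sem_timedwait() the time-limited one. */
void *pool_alloc(void) {
    sem_wait(&free_count);               /* blocks while no block is free */
    pthread_mutex_lock(&pool_lock);
    void *blk = free_list[--free_top];
    pthread_mutex_unlock(&pool_lock);
    return blk;
}

void pool_free(void *blk) {
    pthread_mutex_lock(&pool_lock);
    free_list[free_top++] = blk;
    pthread_mutex_unlock(&pool_lock);
    sem_post(&free_count);               /* wake one waiting allocator */
}
```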

When blocking memory allocation is enabled, the counting semaphore and the mutex together eliminate the priority-inversion problem: the operations between allocation and deallocation form an uninterruptible primitive. Guarding access with the counting semaphore plus the mutex also prevents the situation in which a lower-priority task holds memory resources while a higher-priority task cannot obtain them, that is, the priority inversion caused by forced blocking.

Hardware memory management unit

This involves virtual memory management, a technique built on mass storage. It allows programs larger than physical memory to run, at the cost of time; for details, see my other notes. The memory management unit MMU (Memory Management Unit) provides several features: first, the MMU translates the virtual address to a physical address on every memory access; second, the MMU provides memory protection.

If an MMU is enabled in an embedded system, physical memory is typically divided into pages, and the hardware enforces memory access according to each page's attributes. Attempting to write to a read-only memory region, for example, triggers a memory-access exception.

After saying so much about memory management: the MyOS (8051) runtime environment actually has only 128 B or 256 B of memory space. A pity! But this does hint at future versions of MyOS for other processors!

Modularizing with concurrency

Many activities must be completed when designing a real-time application. One set of activities is identifying the fixed requirements; in the end, the design team must consider how to decompose the application into concurrent tasks.

The following describes how to decompose an application and what to watch out for.

The outside-in approach to decomposing applications

In most cases, the designers hold a set of requirements before work on a real-time embedded system begins. If the requirements are not clear, one of the first actions is to make sure the ambiguous requirements are pinned down; usage scenarios must also be fleshed out. Detailed requirements should be drawn from a document such as the software requirements specification SRS (Software Requirement Specification).

The outside-in approach starts from the inputs and outputs of the system, identifying them and representing them in a simple high-level context diagram:

* The circle in the middle represents the software application
* The rectangles around it represent input and output devices
* Labeled arrows represent the flow of input and output communication

Guiding principles and suggestions for identifying concurrency

The outside-in approach to decomposing an application is one example of a practical method; there are many other design methods. The approach identifies certain clear requirements for handling inputs and actions, and further refinement of the context diagram yields additional tasks.

Concurrent unit:

It is very important to decompose an application into manageable units of concurrency. A unit of concurrency can be a task or a process: anything schedulable that can compete for CPU processing time as a thread of execution. (Although an ISR cannot be scheduled like other routines, it is still treated as a unit of concurrency in the design.)

The decomposition process mainly optimizes for parallelism, maximizing the performance and responsiveness of the real-time application.

Pseudo and true concurrent execution:

A single processor achieves mostly pseudo-concurrent execution: a CPU has only one program counter (also called the instruction pointer), so only one instruction can run at any moment. In the multiprocessor case, a basic RTOS is typically distributed; the system as a whole then executes truly concurrently, but each individual CPU still executes pseudo-concurrently.

Guiding principles for designing concurrent tasks:

Principle 1: Identify device dependencies

My understanding is to divide external devices into two kinds, active and passive, and then assign tasks according to the characteristics of each kind of device. The mechanism for interrupt-generating devices is exactly as I envisaged: an ISR is designed to handle the interrupts generated by multiple devices, and it notifies a task-level service routine through some communication mechanism. Naturally, this task-level service routine is given the highest task priority.
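The ISR-to-task handoff can be sketched as follows. This is a host-side simulation under stated assumptions: POSIX semaphores stand in for RTOS services, the mutex in `isr_common` exists only so the sketch runs on a desktop (a real ISR would use an interrupt-safe post instead), and every name here is illustrative rather than from any particular RTOS.

```c
#include <pthread.h>
#include <semaphore.h>

#define QLEN 8
static int dev_queue[QLEN];          /* device ids awaiting service */
static int q_head, q_tail;
static sem_t pending;                /* counts queued interrupt notifications */
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

void isr_init(void) {
    q_head = q_tail = 0;
    sem_init(&pending, 0, 0);
}

/* One ISR serving several interrupt sources: record which device fired
 * and signal the task-level service routine.  Kept minimal, as an ISR
 * should be. */
void isr_common(int device_id) {
    pthread_mutex_lock(&q_lock);     /* simulation only; not for real ISRs */
    dev_queue[q_tail] = device_id;
    q_tail = (q_tail + 1) % QLEN;
    pthread_mutex_unlock(&q_lock);
    sem_post(&pending);              /* the communication mechanism */
}

/* Body of the high-priority task-level service routine: block until an
 * interrupt has been recorded, then return the device id to dispatch. */
int service_next(void) {
    sem_wait(&pending);
    pthread_mutex_lock(&q_lock);
    int id = dev_queue[q_head];
    q_head = (q_head + 1) % QLEN;
    pthread_mutex_unlock(&q_lock);
    return id;
}
```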

The recommendations for active devices are as follows:

1: Assign a separate task to each active asynchronous I/O device.

2: Combine into one task the I/O devices that do not often generate interrupts and that have long deadlines.

3: Assign separate tasks to devices with different input and output rates, because high-rate I/O devices generally have shorter deadlines than low-rate ones.

4: Assign higher priorities to tasks associated with interrupt-generating devices.

5: Assign a resource-control task to control access to shared I/O devices.

6: Assign an event-dispatching task for I/O devices whose input must be handed over to multiple tasks.

The recommendations for passive devices are as follows:

1: When communication with these devices has a very distant deadline, assign a single task to interface with the passive devices.

2: Assign multiple polling tasks to send periodic requests to the passive devices.

3: Trigger polling requests via timer events.

4: Assign relatively high priorities to polling tasks with relatively short periods, but use this with caution, because too much polling sharply increases the load.
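Recommendations 2 and 3 above can be sketched as a timer-paced polling loop. This is a simplified host-side sketch: `poll_device` is an invented stand-in for the real passive-device read, and `nanosleep` stands in for blocking on an RTOS timer event.

```c
#include <time.h>

/* Stand-in for a passive device: reports new data on every second poll. */
static int poll_calls;
static int poll_device(void) { return (poll_calls++ % 2) == 0; }

/* Run `count` polling cycles with the given period in milliseconds and
 * return how many polls found new data.  In a real RTOS the loop body
 * would be a task blocking on a timer event rather than sleeping. */
int poll_loop(int period_ms, int count) {
    struct timespec ts = { period_ms / 1000, (period_ms % 1000) * 1000000L };
    int hits = 0;
    for (int i = 0; i < count; i++) {
        if (poll_device())           /* periodic request to the device */
            hits++;
        nanosleep(&ts, NULL);        /* wait one polling period */
    }
    return hits;
}
```

Note how the period is the tuning knob from recommendation 4: shortening it improves responsiveness but multiplies the CPU load.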

Principle 2: Identify event dependencies

Principle 3: Identify time dependencies

A: Identify critical and urgent activities

B: Identify different periodic execution rates

C: Identify temporal cohesion

Principle 4: Identify compute-bound activities. Compute-bound activities are often given lower priorities, because they have distant deadlines and long run times; they get time to run when no more critical task needs the processor.

Principle 5: Identify functional cohesion

Functional cohesion means collecting closely cooperating groups of activities or sequences of code into a single task. Likewise, if two tasks are tightly coupled (passing a lot of data between them), consider merging them into one task.

Principle 6: Identify tasks for special purpose

Principle 7: Identify sequential cohesion

Sequential cohesion groups operations that must execute in a fixed order into a single task, to emphasize the ordering requirement.

Schedulability analysis and rate monotonic analysis

Schedulability analysis determines whether all tasks can be scheduled by the scheduling algorithm to run and meet their deadlines while still achieving optimized processor utilization. Note: the analysis only examines whether timing requirements are met, not functional requirements.

One test method is the basic RMA schedulability test; the formula is as follows:

C1/T1 + C2/T2 + ... + Cn/Tn <= U(n) = n(2^(1/n) - 1)

Ci is the worst-case execution time of periodic task i

Ti is the period of task i

n is the number of tasks

When the inequality holds, all tasks can meet their deadlines.

The extended RMA schedulability test adds blocking; the formula is as follows:

C1/T1 + ... + Ci/Ti + Bi/Ti <= U(i) = i(2^(1/i) - 1), for each i with 1 <= i <= n

Bi is the longest blocking time experienced by task i

If the inequality holds, the i-th task can meet its deadline and is schedulable.

Please credit the original source when reposting: https://www.9cbs.com/read-74817.html
