Concurrency


Translation: Taowen (taowen.bitapf.org). Original: "Indy in Depth".

In a multi-threaded environment, resources must be protected so that they are not corrupted by concurrent access from threads.

Concurrency and threads are entangled with each other, and it can be difficult to decide which to learn first. This article covers concurrency first; it provides the background knowledge needed for studying threads later.

Terminology

Concurrency

Concurrency is the state in which many tasks start and run at the same time. When concurrency is implemented well, it can be thought of as "harmony". When it is implemented badly, it becomes "chaos".

In most cases, a task refers to a thread. However, a task can also be a process or a fiber.

The boundary between the two is usually clear, and using the appropriate techniques is the key.

Contention

What exactly is contention? Contention occurs when more than one task tries to access the same resource at the same time.

If you grew up in a big family, this metaphor will explain it. Imagine a family with six children, and mother puts a single small pizza on the table for dinner. You can guess what happens next. That is what contention means.

Whenever multiple concurrent tasks need read/write access to data, access to that data must be controlled to protect its integrity. If access is not controlled, two or more tasks may "collide": while one of them is trying to read a variable, another may be writing to it at the same time. If one task is writing while another is reading, the reading task may read partially written data and end up with corrupted values. Typically such an operation does not raise an exception immediately, but introduces errors into the program later on.

Contention problems often do not show up in low-traffic implementations, so nothing seems wrong during development. Appropriate techniques and stress tests should therefore be applied during the development phase. Otherwise it becomes something like playing Russian roulette: a problem that appears only occasionally during development becomes frequent once the program is deployed.

Resource protection

Resource protection is the solution to the problems caused by contention. The purpose of resource protection is to allow only one task at a time to access a given resource.

Solving contention

Whenever multiple threads need read/write access to data, all access to that data must be controlled to protect its integrity. This can be intimidating for programmers who are not familiar with thread programming. However, most servers do not require much global data. Such programs typically only read data that was initialized during startup. As long as there are no write operations, threads can read global data without any side effects.

The following are the most common ways to solve contention.

Read-only

The easiest approach is read-only access. Any simple type (integer, string, memory block) requires no protection when it is accessed in a read-only way. This can also be extended to many complex types such as TList: as long as their methods do not access any global or member variables in a read/write way, such types are safe when used in read-only mode.

In addition, a resource can be written before any possible read operation, that is, initialized before the tasks that read it are started.
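For example, here is a minimal sketch of this pattern with made-up names (GConfig, server.conf): the global list is filled once at startup, before any worker thread exists, and is only read afterwards, so no locking is required.

var
  GConfig: TStringList;  // hypothetical global; read-only once the threads start

procedure InitConfig;
begin
  GConfig := TStringList.Create;
  GConfig.LoadFromFile('server.conf');  // assumed configuration file
end;

procedure StartServer;
begin
  InitConfig;  // all writes happen here, before any reader task is started
  // ... create and start the worker threads; they only ever read GConfig ...
end;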

Atomic operations

Another argument is that if an operation is atomic, the resource does not need to be protected. An atomic operation is one that is so small that it cannot be divided by the processor. Because of its size, it is not affected by contention: it executes as a single unit and no task switch can occur during its execution. In general, atomic operations are source statements that compile down to a single assembly instruction.

Typical operations such as reading or writing an integer or Boolean variable are considered atomic because they compile into a single MOV instruction. However, I recommend that you never rely on atomic operations. In some cases, even writing an integer or Boolean variable can involve more than one instruction; you have to look at where the data is being read from and written to. It also depends on the internals of the compiler, which may change without notice. Relying on what looks atomic at the source-code level can therefore cause problems, and the behavior may be very different on a multiprocessor machine or another operating system. I have seen supposedly rock-solid atomic operations break. A very prominent recent development proves my point: .NET. Your code is first compiled to IL and only later compiled to machine code; can you still be confident that your code ends up as atomic operations everywhere it runs?

The final choice is up to you; there are plenty of voices both for and against relying on atomic operations. In most cases, relying on atomic operations saves only a few milliseconds and a few bytes of code. I strongly recommend not relying on atomic operations, because their benefits are small compared with their liabilities. Treat all operations as non-atomic.

Operating system support

Many operating systems provide support for very basic thread-safe operations.

Windows provides a set of functions known as the interlocked functions. Their use is very limited: they cover only simple integer operations such as increment, decrement, add, exchange, and compare-exchange.
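As an illustration only (not taken from the article), here is a minimal sketch using the interlocked functions declared in Delphi's Windows unit; the counter GActiveConnections is a made-up example.

var
  GActiveConnections: Integer = 0;  // hypothetical shared counter

procedure ClientConnected;
begin
  InterlockedIncrement(GActiveConnections);  // increments the integer atomically
end;

procedure ClientDisconnected;
begin
  InterlockedDecrement(GActiveConnections);  // decrements the integer atomically
end;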

The number of available functions depends on the version of Windows, and on older versions of Windows they can lead to deadlocks. In most applications, the benefit they provide is very small.

Because of this limited and version-dependent support and the small performance advantage, it is recommended that you use the Indy thread-safe equivalents instead.

Windows also includes support for special IPC (inter-process communication) objects, for which Delphi provides wrapper classes. These objects are useful for threading and extremely useful for IPC.

Explicit protection

Explicit protection means that each task knows that a resource is protected and takes explicit protective steps before accessing it. Typically, code of this kind lives in a function that is executed by multiple tasks, or is encapsulated in a routine that is called from many different places.

Explicit protection usually relies on resource protection objects. Simply put, a resource protection object serializes access to a resource so that only one access happens at a time. Resource protection objects do not actually restrict access to the resource; if they did, they would have to know the details of each and every resource type. Instead, a resource protection object is like a traffic light: the code must obey it and give it input. Different kinds of traffic lights are implemented with different mechanisms, take different inputs, and impose different amounts of overhead. This makes it possible to choose the resource protection object that best fits a given type of resource and a given situation.

Resource protection objects come in different forms, which are introduced one by one below.

Critical section

A critical section can be used to control access to global resources. Critical sections are lightweight and are implemented in the VCL as TCriticalSection. Simply put, a critical section allows one thread in a multi-threaded program to temporarily block all other threads that try to use the same critical section. A critical section is like a traffic light that lets a vehicle through only when the road ahead is clear. Critical sections can be used to ensure that only one thread executes a given piece of code at a time. The code protected by a critical section should therefore be as small as possible, because improper use can seriously hurt performance. For the same reason, each piece of code should use its own TCriticalSection rather than reusing a single TCriticalSection shared by the whole program. To enter a critical section, use the Enter method; to exit it, use the Leave method. TCriticalSection also has Acquire and Release methods that do the same things as Enter and Leave.

Suppose a server needs to record information about user logins and display that information in the main thread. One possible option is to use Synchronize. However, when many clients log in, this approach hurts the performance of the connection threads. Depending on the needs of the server, a better option may be to record the information and let the main thread read it with a timer. The following code is an example of this technique using a critical section.

var
  GLogCS: TCriticalSection;
  GUserLog: TStringList;

procedure TformMain.IdTCPServer1Connect(AThread: TIdPeerThread);
var
  s: string;
begin
  // Username
  s := AThread.Connection.ReadLn;
  GLogCS.Enter;
  try
    GUserLog.Add('User logged in: ' + s);
  finally
    GLogCS.Leave;
  end;
end;

procedure TformMain.Timer1Timer(Sender: TObject);
begin
  GLogCS.Enter;
  try
    ListBox1.Items.AddStrings(GUserLog);
    GUserLog.Clear;
  finally
    GLogCS.Leave;
  end;
end;

initialization
  GLogCS := TCriticalSection.Create;
  GUserLog := TStringList.Create;

finalization
  FreeAndNil(GUserLog);
  FreeAndNil(GLogCS);
end.

In the Connect event, the username is read into a temporary variable before the critical section is entered. This avoids a possible slowdown caused by blocking on the critical section: the network communication is performed before the critical section is entered. For the best performance, the less code inside the critical section, the better.

The Timer1Timer event is triggered by a timer on the main form. The timer interval can be shortened to make updates more frequent, but this may reduce the speed at which connections are accepted. If the logging were extended to other parts of the server, not just user connections, the chance of creating a bottleneck would increase. A longer interval means the user interface is updated less often. However, many servers have no user interface at all, and even when they do, it is secondary, with a much lower priority than serving clients, so this is a perfectly acceptable trade-off.

TCriticalSection is located in the SyncObjs unit. The SyncObjs unit is not included in the Standard edition of Delphi 4. If you are using the Standard edition of Delphi 4, there is a SyncObjs.pas on Indy's website; it does not implement everything in Borland's SyncObjs.pas, but it does implement the TCriticalSection class.

TMultiReadExclusiveWriteSynchronizer (TMREWS)

In the previous example, TCriticalSection was used to protect access to global data. In that case, the global data was only ever updated. If, however, the global data is sometimes accessed read-only, using TMultiReadExclusiveWriteSynchronizer may produce more efficient code. TMultiReadExclusiveWriteSynchronizer is a lengthy and hard-to-read name, so it will simply be referred to as TMREWS.

The advantage of TMREWS is that it allows multiple threads to read at the same time, whereas a critical section allows only one thread in, even for reading. The disadvantage is the overhead of TMREWS.

Instead of Enter/Acquire and Leave/Release, TMREWS has the methods BeginRead, EndRead, BeginWrite, and EndWrite.
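As a rough sketch of how these methods are used (GListLock and GOnlineUsers are hypothetical names, not from the article): read access uses BeginRead/EndRead so that many readers can run at once, while writes use BeginWrite/EndWrite.

var
  GListLock: TMultiReadExclusiveWriteSynchronizer;
  GOnlineUsers: TStringList;

function IsUserOnline(const AName: string): Boolean;
begin
  GListLock.BeginRead;  // several threads may hold a read lock at the same time
  try
    Result := GOnlineUsers.IndexOf(AName) >= 0;
  finally
    GListLock.EndRead;
  end;
end;

procedure AddOnlineUser(const AName: string);
begin
  GListLock.BeginWrite;  // the write lock is exclusive
  try
    GOnlineUsers.Add(AName);
  finally
    GListLock.EndWrite;
  end;
end;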

Special notes about TMREWS

Prior to Delphi 6, TMultiReadExclusiveWriteSynchronizer could deadlock when promoting a read lock to a write lock. You must therefore never use the read-lock-to-write-lock promotion feature, even though the documentation says it can be done.

If you need this feature, there is a compromise: release the read lock and then acquire the write lock. However, once you have acquired the write lock, you must first re-check the condition that forced you to need a write lock. If it still holds, do what you need to do; otherwise, release the write lock immediately.
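Here is a minimal sketch of this compromise, with hypothetical names (GLock, GCache): the read lock is released, the write lock is acquired, and the condition is checked again before acting.

procedure EnsureCached(const AKey: string);
begin
  GLock.BeginRead;
  try
    if GCache.IndexOf(AKey) >= 0 then
      Exit;  // already present; nothing to do
  finally
    GLock.EndRead;
  end;
  // Not found under the read lock, so switch to a write lock.
  GLock.BeginWrite;
  try
    // Another thread may have added the key between EndRead and BeginWrite,
    // so the condition that forced the write lock must be checked again.
    if GCache.IndexOf(AKey) < 0 then
      GCache.Add(AKey);
  finally
    GLock.EndWrite;
  end;
end;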

When using Delphi 6, there is a special consideration for TMultiReadExclusiveWriteSynchronizer. All Delphi 6 versions of it, including Update Pack 1 and Update Pack 2, can cause a serious deadlock problem, and there is no known workaround. Borland is aware of the problem, has released an unofficial fix, and may issue an official patch.

TMREWS in Kylix

The TMultiReadExclusiveWriteSynchronizer in Kylix 1 and Kylix 2 is implemented with a critical section and has no advantage over using a critical section directly. It is included, however, so that the same code can be used on both Linux and Windows. In a future version of Kylix, TMultiReadExclusiveWriteSynchronizer may be upgraded to behave as it does on Windows.

Choosing between critical sections and TMREWS

Because TMREWS has been riddled with problems, my advice is very simple: just avoid it. If you decide to use it, make sure that it really is the better choice and that you have a version that no longer exhibits the deadlock behavior.

In most cases, appropriate use of TCriticalSection produces almost the same effect, and in some cases it is faster. Learn to optimize your use of TCriticalSection where necessary, because improper use of TCriticalSection can have a serious negative impact on performance. The key to any resource protection problem is to use multiple resource controllers and to keep the locked regions as small as possible. When you can do this, you should always use a critical section, because it is lightweight and faster than TMREWS. In general, unless you can clearly justify TMREWS, always use a critical section.

TMREWS is the better choice when all of the following conditions are satisfied:

1. The access includes both reading and writing.
2. Reads predominate.
3. The locks must be held for extended periods and cannot be broken into smaller blocks.
4. The TMREWS class has been properly patched and is known to work correctly.

Performance comparison

As mentioned earlier, critical sections are more lightweight and faster. Critical sections are implemented by the operating system, using very fast and streamlined assembly code.

TMREWS is more complex and therefore carries more overhead. It must manage a list of requesters in order to manage its dual-state locking mechanism correctly.

To demonstrate these differences, a sample project called ConcurrencySpeed.dpr was created. It performs three simple benchmarks:
1. TCriticalSection - Enter and Leave
2. TMREWS - BeginRead and EndRead
3. TMREWS - BeginWrite and EndWrite

The test runs each of them a set number of times in a counted loop; for the test the default is 100,000 iterations. On my machine, the results were as follows (in milliseconds):
TCriticalSection: 20
TMREWS (read lock): 150
TMREWS (write lock): 401

Naturally, these measurements are machine-dependent. What matters, however, is the difference between them, not the exact numbers. It is clear that a TMREWS read lock is 7.5 times slower than a critical section, and a write lock is 20 times slower.

It should also be noted that these figures tell only part of the story: a critical section's cost stays roughly constant, while the performance of TMREWS declines further under real concurrency. The tests here simply run in a loop; there are no other requesters for TMREWS to manage and no existing locks to contend with. In real situations, TMREWS may be even slower than the numbers shown here.
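For reference, here is a rough sketch of this kind of timing loop (this is not the actual ConcurrencySpeed.dpr source; GetTickCount and the iteration count are simply reasonable choices):

procedure TimeCriticalSection;
const
  CIterations = 100000;
var
  i: Integer;
  LStart: Cardinal;
  LCS: TCriticalSection;
begin
  LCS := TCriticalSection.Create;
  try
    LStart := GetTickCount;
    for i := 1 to CIterations do
    begin
      LCS.Enter;  // lock and immediately unlock to measure pure overhead
      LCS.Leave;
    end;
    ShowMessage('TCriticalSection: ' + IntToStr(GetTickCount - LStart) + ' ms');
  finally
    LCS.Free;
  end;
end;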

Mutex

The features of a mutex are almost identical to those of a critical section. The difference is that a mutex is an enhanced critical section with more features and, of course, more overhead.

A mutex adds capabilities such as naming, assignable security attributes, and access from multiple processes.

Mutexes can be used between threads, but this is rarely done. Mutexes are designed for use between processes and are usually most useful there.
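Here is a minimal sketch going straight to the Windows API (the name 'MyApp.SharedLog' and the routine are made up): a named mutex lets two separate processes take turns writing to a shared file.

procedure WriteSharedLogEntry;
var
  hMutex: THandle;
begin
  hMutex := CreateMutex(nil, False, 'MyApp.SharedLog');  // same name in every process
  try
    WaitForSingleObject(hMutex, INFINITE);  // blocks until no other process owns it
    try
      // ... append the entry to the shared log file ...
    finally
      ReleaseMutex(hMutex);
    end;
  finally
    CloseHandle(hMutex);
  end;
end;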

Semaphore

A semaphore is similar to a mutex, but instead of allowing only a single entrant, it allows multiple entrants. The number of entrants can be specified when the semaphore is created.

I like to think of a mutex as a security guard watching over a bank cash machine (ATM). One person at a time can use it; the guard protects the machine and does not allow the whole queue to use it at once.

If the bank installs four ATMs, a semaphore is the better fit. In this case, the guard allows up to four people at a time to use the ATMs, but never more than four.
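Here is a minimal sketch of the four-ATM idea using the Windows API (GAtmSemaphore and UseAtm are made-up names): the semaphore is created with a count of four, so at most four tasks are inside the protected section at once.

var
  GAtmSemaphore: THandle;

procedure UseAtm;
begin
  WaitForSingleObject(GAtmSemaphore, INFINITE);  // take one of the four slots
  try
    // ... at most four threads run this section concurrently ...
  finally
    ReleaseSemaphore(GAtmSemaphore, 1, nil);  // give the slot back
  end;
end;

initialization
  GAtmSemaphore := CreateSemaphore(nil, 4, 4, nil);  // four available, maximum four
finalization
  CloseHandle(GAtmSemaphore);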

Event

An event is a signal used between threads or processes to notify that something has happened. Events can be used to notify other tasks when something has been done or when intervention is needed.
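Here is a minimal sketch using TEvent from the SyncObjs unit (GDataReady and ProcessData are hypothetical): one task signals, the other waits with a timeout.

var
  GDataReady: TEvent;

procedure ProducerFinished;
begin
  GDataReady.SetEvent;  // signal any task waiting on the event
end;

procedure ConsumerWork;
begin
  if GDataReady.WaitFor(5000) = wrSignaled then  // wait up to five seconds
    ProcessData  // hypothetical routine; the data is now safe to use
  else
    raise Exception.Create('Timed out waiting for data');
end;

initialization
  GDataReady := TEvent.Create(nil, True, False, '');  // manual-reset, initially unsignaled
finalization
  GDataReady.Free;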

Thread-safe classes

Thread-safe classes are classes specifically designed to protect a particular type of resource. Each thread-safe class deals with one type of resource and has good knowledge of what that resource is and how it is used.

Thread-safe classes can be simple, or as complex as a thread-safe database. Thread-safe classes use resource protection objects to implement their functionality.
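As an illustration of the idea (this is not an Indy class), here is a minimal sketch of a simple thread-safe class: a counter that hides its critical section from the caller.

type
  TThreadSafeCounter = class
  private
    FLock: TCriticalSection;
    FValue: Integer;
  public
    constructor Create;
    destructor Destroy; override;
    function Increment: Integer;  // the caller never touches the lock directly
  end;

constructor TThreadSafeCounter.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TThreadSafeCounter.Destroy;
begin
  FLock.Free;
  inherited Destroy;
end;

function TThreadSafeCounter.Increment: Integer;
begin
  FLock.Enter;
  try
    Inc(FValue);
    Result := FValue;
  finally
    FLock.Leave;
  end;
end;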

Compartmentalization

Compartmentalization is the process of partitioning data and assigning it to individual tasks. In servers, compartmentalization often occurs naturally, because each client can be handled by a dedicated thread.

When compartmentalization does not occur naturally, you should consider whether it can be introduced. Compartmentalization can often be achieved by giving a task a copy of the global data and then returning the results to the global area when the task is done. With compartmentalization, data locking only needs to happen at initialization and when a task finishes or performs a batch update.
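Here is a minimal sketch of the copy-work-merge pattern described above (TWorkerThread, ProcessClientRequests, GLock and GTotals are hypothetical names): the thread works on its own local copy and locks only once, at the end, to merge the results.

procedure TWorkerThread.Execute;
var
  LLocal: TStringList;  // the thread's private compartment
begin
  LLocal := TStringList.Create;
  try
    ProcessClientRequests(LLocal);  // hypothetical per-client work; no locking needed here
    // Lock only once, at the end, to merge the results into the global data.
    GLock.Enter;
    try
      GTotals.AddStrings(LLocal);
    finally
      GLock.Leave;
    end;
  finally
    LLocal.Free;
  end;
end;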

