[Guide: The basic theory of the locks used by the major database systems is the same, but the specific implementations differ. SQL Server tends to let the system manage locks: when a user issues a SQL request, the system analyses it, automatically applies the appropriate locks while weighing locking conditions against system performance, and optimizes and adjusts the locking dynamically while the request runs. For ordinary users this automatic lock management is usually sufficient, but applications with special requirements for data security, database integrity, and consistency need to understand SQL Server's locking mechanism and how to control locking themselves.]

Locks are a very important concept in databases; they are used mainly to guarantee integrity and consistency in a multi-user environment. Because many users can work on the data in the same database at the same time, inconsistencies can arise: without locks, transactions that use the same data simultaneously can run into problems. These problems are lost updates, dirty reads, non-repeatable reads, and phantom reads.

1. Lost update. When two or more transactions select the same row and then update it based on the value originally selected, a lost update occurs: because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions, and data is lost. For example, two editors each make an electronic copy of the same document, change their copy independently, and then save the changed copy over the original. The editor who saves last overwrites the changes made by the first editor. The problem is avoided if the second editor cannot make changes until the first editor has finished.

2. Dirty read. A dirty read occurs when one transaction is modifying data and, before the modification has been committed to the database, another transaction reads and uses that data. Because the data has not been committed, what the second transaction reads is dirty data, and operations based on it may be wrong. For example, while one editor is revising an electronic document, another editor copies it (the copy includes all of the changes made so far) and distributes it to the intended readers. The first editor then decides that the changes made so far are wrong, deletes them, and saves the document. The distributed copy now contains edits that no longer exist and should be treated as if they had never existed. The problem is avoided if nobody can read the document until the first editor has finalized the changes. (A T-SQL sketch of this scenario follows the list.)

3. Non-repeatable read. A non-repeatable read occurs when a transaction reads the same data several times while another transaction modifies that data between the reads, so the first transaction obtains different values from the two reads. Because the data read twice within one transaction differs, the read is said to be non-repeatable. For example, an editor reads the same document twice, but the author rewrites it between the two reads; when the editor reads the document the second time, it has changed, and the original read cannot be repeated. The problem is avoided if the editor can read the document only after the author has finished writing it.
4. Phantom read. A phantom read is a phenomenon that occurs when transactions do not execute independently of each other. For example, the first transaction modifies data in a table, affecting all of the rows in it, while at the same time a second transaction inserts new rows into the same table. The first transaction later finds rows in the table that it did not modify, as if they were an illusion. For example, an editor incorporates the changes an author has submitted, but when the production department merges those changes into the master copy of the document, it finds that the author has added new, unedited material to it. The problem is avoided if nobody can add new material until the editors and the production department have finished working on the original document.
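The dirty-read scenario above can be reproduced directly in T-SQL. The following is only a minimal sketch, meant to be run from two separate connections; the Documents table, its columns, and the values are hypothetical:

    -- Session 1: change a row inside a transaction but do not commit yet
    BEGIN TRANSACTION;
    UPDATE Documents SET Title = 'Draft v2' WHERE DocID = 1;

    -- Session 2: read the uncommitted change (a dirty read)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Title FROM Documents WHERE DocID = 1;   -- sees 'Draft v2'

    -- Session 1: discard the change; session 2 has now used data
    -- that was never committed
    ROLLBACK TRANSACTION;

Under the default READ COMMITTED isolation level, the SELECT in the second session would instead wait for the first session to commit or roll back, which is precisely how locking prevents the dirty read.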
The way to handle concurrent access by multiple users is therefore locking. A lock is the primary means of implementing concurrency control: it prevents other transactions from accessing a specified resource. Once a user has locked an object in the database, other users can no longer access that object. How strongly a lock restricts concurrent access is reflected in the lock's granularity.

To understand which resources can be locked, one must first understand how the system manages space. In SQL Server 2000, the smallest unit of space management is the page, and a page is 8 KB. All data, log records, and indexes are stored on pages. Using pages also imposes a restriction: a data row in a table must fit on one page and cannot span pages. The unit of space management above the page is the extent; an extent is 8 contiguous pages and is the smallest unit allocated to tables and indexes. A database consists of one or more tables and indexes, that is, of a number of extents. A lock placed on a table restricts access to the whole table; a lock on an extent restricts access to the whole extent; a lock on a data page restricts access to the whole page; and a lock placed on a row restricts concurrent access to that row only.

SQL Server 2000 provides locks of multiple granularities, allowing a transaction to lock different types of resources. To minimize the cost of locking, SQL Server automatically locks resources at a level appropriate to the task. Locking at a fine granularity (for example, rows) increases concurrency but carries a higher overhead, because many more locks must be maintained when many rows are locked. Locking at a coarse granularity (for example, a table) is expensive in terms of concurrency, because locking an entire table prevents other transactions from accessing any part of it, but the overhead is low because fewer locks need to be maintained. SQL Server can lock rows, pages, extents, tables, databases, and other resources.
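Which resources are currently locked, and at what granularity, can be inspected with the system stored procedure sp_lock; the Type column of its output reports, among others, RID (row), KEY, PAG (page), EXT (extent), TAB (table), and DB (database). A minimal sketch:

    -- List every lock currently held or requested in the instance
    EXEC sp_lock;

    -- Restrict the output to the current session
    DECLARE @spid int;
    SET @spid = @@SPID;
    EXEC sp_lock @spid;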
The row is the smallest unit of space that can be locked. Row-level locks occupy the least data while a transaction runs, allowing other transactions to keep working on other rows of the same table, or even of the same page, which greatly reduces the time other transactions spend waiting and increases the concurrency of the system.

A page lock means that each time a transaction locks a page, the data on that page cannot be manipulated by other transactions. Before SQL Server 7.0, page-level locking was what the product used. A page lock ties up more data than a row-level lock: even if a transaction manipulates only one row on a page, the other rows on that page cannot be used by other transactions, so some data on the page is occupied but unused. That waste, however, never exceeds the rows of a single page.

Table-level locks are also very important. A table-level lock means that while a transaction manipulates a table, it locks the entire table, and other transactions cannot access any data in it. Table-level locks are usually used when a transaction processes a relatively large volume of data. Their characteristic is that they use relatively few system resources but tie up a relatively large amount of data: compared with row locks and page locks, a table-level lock consumes less system resources, such as memory, but occupies the most data. With a table-level lock there may be a great deal of wasted data, because the whole table is locked and other transactions cannot operate on any other data in it.

An extent lock is a special type of lock that is used only in particular situations. An extent lock means that a transaction occupies an extent and that extent cannot be occupied by other transactions at the same time. For example, the system uses this type of lock when it allocates physical space while a database or table is being created, and space is allocated in units of extents.
While the system is allocating space, the extent lock prevents other transactions from using the same extent at the same time. Once the allocation is finished, this type of lock is no longer needed; in particular, extent locks play no part in ordinary data-manipulation operations.

A database-level lock locks the entire database and prevents any user or transaction from accessing it. It is a very special lock that is used only during database recovery operations. Because it controls the operation of the whole database, it is the highest level of lock. Whenever a database is being restored, it has to be put into single-user mode so that the system can prevent other users from performing any operation on it.

The row-level lock is the ideal lock, because with row-level locks no data is occupied without being used. However, if a transaction frequently operates on many rows of a table, it acquires locks on many of the table's rows, the number of locks in the database system rises sharply, the system's load increases, and performance suffers. SQL Server therefore also supports lock escalation. Lock escalation means raising the granularity of locks, replacing many fine-grained locks with a small number of coarser-grained locks in order to reduce the system load. When a transaction holds enough locks to reach the lock-escalation threshold, the system automatically escalates its row-level and page-level locks to a table-level lock. It is worth emphasizing that in SQL Server both the escalation threshold and the escalation itself are determined automatically by the system and require no user configuration.

When locking in a SQL Server database, besides locking different kinds of resources, different modes of locking can be used. SQL Server's lock modes include the following.

1. Shared locks. SQL Server applies shared (S) locks to all read-only data operations. Shared locks are not exclusive: they allow multiple concurrent transactions to read the resource they lock. By default, SQL Server releases a shared lock as soon as the data has been read. For example, while the query "SELECT * FROM authors" is executed, the first page is locked, the lock is released once the page has been read, and then the second page is locked; a page that has already been read and unlocked may therefore be modified while the read operation is still in progress. However, the transaction isolation level set on the connection, and lock hints in the SELECT statement, can change this default behaviour. For example, "SELECT * FROM authors HOLDLOCK" keeps the shared locks for the whole query and does not release them until it has completed (see the sketch after this list).
2. Update locks. Update (U) locks are used in the initialization phase of a modification operation to lock resources that may be modified, which avoids the deadlocks that would arise from using shared locks. With shared locks, modifying data is a two-step operation: first acquire a shared lock and read the data, then upgrade the shared lock to an exclusive lock and perform the modification. If two or more transactions have applied shared locks to the same data and each then tries to upgrade to an exclusive lock at the moment it modifies the data, neither will release its shared lock; each waits for the other to release first, and a deadlock occurs. If an update lock is applied to the data before it is modified, the later upgrade to an exclusive lock when the data is actually changed cannot produce this deadlock (see the sketch after this list).

3. Exclusive locks. Exclusive (X) locks are reserved for modifying data. Other transactions can neither read nor modify the resources they lock.
4. Schema locks. Schema modification (Sch-M) locks are used while a data definition language (DDL) operation, such as adding a column or dropping a table, is performed on a table. Schema stability (Sch-S) locks are used while queries are compiled. Sch-S locks do not block any transactional locks, including exclusive locks; therefore, while a query is being compiled, other transactions, including those holding exclusive locks on the table, can continue to run. However, DDL operations cannot be performed on the table during that time.

5. Intent locks. An intent lock indicates that SQL Server intends to acquire a shared or exclusive lock on a resource lower in the hierarchy. For example, a shared intent lock at the table level indicates that the transaction intends to place shared locks on pages or rows within that table. Intent locks are divided into intent shared locks, intent exclusive locks, and shared with intent exclusive locks. An intent shared lock indicates that the transaction intends to place shared locks on some of the lower-level resources it covers. An intent exclusive lock indicates that the transaction intends to place exclusive locks on some of the lower-level resources in order to modify data. A shared with intent exclusive lock indicates that the transaction intends to read the entire top-level resource with a shared lock and to place exclusive locks on some of its lower-level resources.

6. Bulk update locks. A bulk update lock is used when data is bulk copied into a table and either the TABLOCK hint is specified or the "table lock on bulk load" option has been set for the table with sp_tableoption. Bulk update locks allow processes to bulk copy data into the same table concurrently while preventing processes that are not bulk copying data from accessing the table (see the sketch after this list).
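As a concrete illustration of items 1, 2, and 6 above, the sketch below shows how these locks can be requested from T-SQL. It is a minimal sketch only: the authors table is the sample table already used above, while the Accounts and Sales tables, their columns, and the data file path are hypothetical.

    -- 1. Shared locks: held only while each page is read, unless HOLDLOCK
    --    keeps them until the end of the transaction.
    BEGIN TRANSACTION;
    SELECT * FROM authors WITH (HOLDLOCK);
    -- ...work that relies on the rows read above staying unchanged...
    COMMIT TRANSACTION;

    -- 2. Update locks: read with UPDLOCK instead of a plain shared lock, so
    --    two sessions cannot both read the row and then both try to convert
    --    to an exclusive lock (the conversion deadlock described above).
    BEGIN TRANSACTION;
    DECLARE @balance money;
    SELECT @balance = Balance
    FROM   Accounts WITH (UPDLOCK)
    WHERE  AccountID = 1;
    UPDATE Accounts SET Balance = @balance - 100 WHERE AccountID = 1;
    COMMIT TRANSACTION;

    -- 6. Bulk update locks: request one for a single bulk load with TABLOCK,
    --    or make it the default for the table with sp_tableoption.
    BULK INSERT Sales FROM 'C:\data\sales.dat' WITH (TABLOCK);
    EXEC sp_tableoption 'Sales', 'table lock on bulk load', 'true';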
The SQL Server system is designed to manage locks automatically: it analyses the SQL statements a user submits and applies the appropriate locks for each request, and, as mentioned earlier, the lock-escalation threshold is configured automatically by the system and needs no user setting. In practice, however, an application sometimes needs to lock a table explicitly so that it works correctly and the data stays consistent. For example, a transaction in an application may need to compute statistics over several tables; to guarantee that the statistics are consistent and refer to the same point in time, no other application or transaction may write to those tables from the moment the first table is read until the statistics are complete. In that case the application wants to lock the tables from the moment the first one is read, or from the start of the transaction, which calls for manual locking (also known as explicit locking). Table-level lock hints can be specified in SELECT, INSERT, UPDATE, and DELETE statements to steer Microsoft SQL Server 2000 toward the desired type of lock. Use table-level lock hints to change the default locking behaviour only when finer control over the lock types applied to an object is needed. The table-level lock hints that can be specified are the following (a usage sketch follows the list):

1. HOLDLOCK: Hold the shared lock until the entire transaction has ended, instead of releasing it as soon as the statement no longer needs it.

2. NOLOCK: Do not apply shared locks and do not honour exclusive locks. When this option is in effect, uncommitted data ("dirty data") may be read; it applies only to SELECT statements.

3. PAGLOCK: Take page locks where a single table lock would normally be taken.

4. READCOMMITTED: Perform the scan with the same lock semantics as a transaction running at the READ COMMITTED isolation level. By default, SQL Server 2000 operates at this isolation level.

5. READPAST: Skip locked rows. With this option the transaction skips rows that are locked by other transactions instead of blocking until those locks are released; READPAST applies only to SELECT statements in transactions running at the READ COMMITTED isolation level.

6. READUNCOMMITTED: Equivalent to NOLOCK.

7. REPEATABLEREAD: Perform the scan with the same lock semantics as a transaction running at the REPEATABLE READ isolation level.
8. ROWLOCK: Use row-level locks instead of the coarser-grained page-level and table-level locks.

9. SERIALIZABLE: Perform the scan with the same lock semantics as a transaction running at the SERIALIZABLE isolation level. Equivalent to HOLDLOCK.

10. TABLOCK: Use a table lock instead of the finer-grained row-level or page-level locks. SQL Server holds this lock until the end of the statement; if HOLDLOCK is also specified, the lock is held until the end of the transaction.

11. TABLOCKX: Use an exclusive lock on the table. This lock prevents other transactions from reading or updating the table's data until the statement or the whole transaction ends.

12. UPDLOCK: Apply update locks instead of shared locks when reading the table, and keep them until the end of the statement or the transaction. The benefit of UPDLOCK is that it lets a user read data without blocking other readers and guarantees that the data has not been changed by anyone else when the user later updates it.
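A few of these hints in use, as a minimal sketch; the Orders table and its columns are hypothetical:

    -- Read without taking shared locks; may return uncommitted ("dirty") rows
    SELECT * FROM Orders WITH (NOLOCK) WHERE CustomerID = 42;

    -- Skip rows that other transactions currently hold locked
    SELECT * FROM Orders WITH (READPAST) WHERE Status = 'PENDING';

    -- Take page locks rather than a single table lock
    UPDATE Orders WITH (PAGLOCK) SET Status = 'CLOSED' WHERE Status = 'SHIPPED';

    -- Lock the whole table exclusively until the transaction ends
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM Orders WITH (TABLOCKX, HOLDLOCK);
    -- ...statistics over tables that must not change in the meantime...
    COMMIT TRANSACTION;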
The deadlock problem. In a database system, a deadlock occurs when multiple users (processes) each lock a resource and then request a lock on a resource the other party has already locked, producing a ring of lock requests in which every user (process) waits for the others to release the resources they have locked. This is the most typical form of deadlock. For example, transaction A consists of two operations, locking the table Part and requesting access to the table Supplier, while transaction B consists of locking the table Supplier and requesting access to the table Part; the result is a deadlock between transaction A and transaction B. The second kind of deadlock arises when several long-running transactions execute in parallel in a database and the query processor handles a very complex statement, such as a join query; because the order of operations cannot be controlled, a deadlock may occur.

In SQL Server, the system can automatically detect and handle deadlocks. On each search it identifies all the sessions waiting on a lock request. If a session flagged in one search is still waiting in the next, SQL Server begins a recursive deadlock search. When the search detects a ring of lock requests, SQL Server ends the deadlock by automatically choosing a thread that can break it (the deadlock victim). SQL Server rolls back the victim's transaction, notifies the thread's application (by returning error message 1205), cancels the thread's current request, and then lets the transactions of the remaining threads continue. SQL Server usually chooses as the victim the thread whose transaction would cost the least to roll back. In addition, a user can use the SET statement to set the session's DEADLOCK_PRIORITY to LOW. The DEADLOCK_PRIORITY option controls how a session is weighted when a deadlock occurs: if it is set to LOW, the session becomes the preferred victim.

Once the concept of a deadlock is understood, the following methods can be used in applications to avoid deadlocks: (1) arrange the order in which tables are accessed sensibly and consistently; (2) avoid user interaction inside transactions, let one transaction handle one task, and keep transactions short and within a single batch; (3) use the data-access time-domain separation method, in which the client/server application applies various controls over the time periods during which objects in the database, or the database itself, may be accessed. This is implemented mainly by arranging background transactions sensibly and managing them in a unified way.
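The first of these measures, together with DEADLOCK_PRIORITY, can be expressed directly in T-SQL. The sketch below is illustrative only; the Part and Supplier tables come from the example above, but their columns and values are hypothetical:

    -- A session that can tolerate being rolled back (for example, a
    -- reporting job) volunteers itself as the preferred deadlock victim
    SET DEADLOCK_PRIORITY LOW;

    -- Access the tables in the same order in every transaction
    -- (always Part first, then Supplier), so the A/B request ring cannot form
    BEGIN TRANSACTION;
    UPDATE Part     SET OnHand = OnHand - 1   WHERE PartID = 10;
    UPDATE Supplier SET LastOrder = GETDATE() WHERE SupplierID = 3;
    COMMIT TRANSACTION;

If a transaction is nevertheless chosen as the victim, the application receives error 1205 and can simply retry it.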