Sybase is a world-famous database vendor, and its relational database product, Sybase SQL Server, has a large user base among China's large and medium-sized enterprises and institutions. Over many years of use, the author has accumulated some experience in Sybase database administration and maintenance, which is shared here.
We know that Sybase SQL Server uses transactions to track all changes to a database. The transaction is SQL Server's unit of work: a transaction contains one or more T-SQL statements that are executed as a whole. Each database has its own transaction log, the system table syslogs. The transaction log automatically records every transaction issued by every user. The log is critical to the security and integrity of the database's data, so anyone developing against or maintaining a database should understand how it works.
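For example, a minimal transaction that moves a value between two rows might look like this (the table, columns, and values are illustrative, not from the original article):

begin transaction
    update accounts set balance = balance - 100 where acct_no = 1001
    update accounts set balance = balance + 100 where acct_no = 1002
commit transaction
-- if either update fails, "rollback transaction" undoes both changes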
I. How Sybase SQL Server records and reads log information
Sybase SQL Server uses a write-ahead logging mechanism. Whenever a user executes a statement that will modify the database, SQL Server automatically writes the change to the log first. Only after all changes generated by the statement have been logged are they applied to the copies of the data pages held in the buffer cache. A data page stays in the cache until another page needs the memory, at which point it is written to disk. If a statement in a transaction does not complete, SQL Server rolls back all changes generated by that transaction. This guarantees the consistency and integrity of the entire database system.
II. The log device
The log, like the database's data, must be stored on a database device. Log and data can be stored on the same device or on separate devices. In general, a database's data and log should be placed on different database devices. This has the following benefits: first, the transaction log can be backed up separately; second, the log cannot fill up the database's data space; third, the space used by the log is easy to monitor.
There is no precise method for determining the size of the log device. As a rule of thumb, for a newly created database the log should be about 30% of the database size. The required size also depends on how frequently the database is modified: if modifications are frequent, the log grows very quickly, so the space needed ultimately depends on how users work with the database. Other factors affect log size as well, so we should estimate it from actual operation and back up and clear the log at regular intervals.
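As a sketch, assuming two database devices named data_dev and log_dev have already been created with disk init (the device names, database name, and sizes below are illustrative), a database with data and log on separate devices could be created like this:

create database sales_db
    on data_dev = 100      -- 100 MB for data
    log on log_dev = 30    -- log on its own device, roughly 30% of the data size
go
sp_helpdb sales_db         -- shows device usage, including the log space
go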
III. Clearing the log
As the database is used, its log keeps growing and must be cleared before it fills the available space. There are two ways to clear the log:
1. Automatic clearing
Turn on the database option trunc log on chkpt, which makes the database system automatically clear the log at regular intervals. The advantage of this method is that no manual intervention is needed; SQL Server does it automatically and the log does not fill up. The disadvantage is that the log is only cleared, never backed up.
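A minimal sketch of turning this option on (the database name is illustrative; the option is set from the master database and takes effect after a checkpoint):

use master
go
sp_dboption sales_db, "trunc log on chkpt", true
go
use sales_db
go
checkpoint
go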
2. Manual clearing
The log is cleared by executing the dump transaction command. Either of the following two commands clears the log:
dump transaction database_name with truncate_only
dump transaction database_name with no_log
The "dump transaction with truncate_only" command is normally used to remove the inactive portion of the transaction log; it performs the necessary concurrency checks. Sybase provides "dump transaction with no_log" to handle certain emergency situations; using it carries considerable risk, and SQL Server raises a warning message. To preserve database consistency, it should be used only as a last resort.
The two commands above only clear the log; they do not back it up. To back up the log, execute the "dump transaction database_name to dumpdevice" command.
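A minimal sketch of backing up the log to a dump device (the device name, database name, and file path are illustrative):

sp_addumpdevice "disk", "tran_dump_dev", "/backups/sales_db_tran.dmp"
go
dump transaction sales_db to tran_dump_dev
go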
IV. Managing huge transactions
Some operations modify data in bulk, such as updating a large amount of data (update), deleting all rows (delete), or inserting large amounts of data (insert). These make the log grow rapidly and risk filling it up. Below, the author describes how to split a big transaction to avoid filling the log.
For example, when "update tab_a set col_a = 0" is executed against a very large table tab_a, the update may fill the log before it finishes, causing error 1105 (log full). In addition, the exclusive table lock generated by this big transaction prevents other users from modifying the table while the update runs, which can lead to deadlocks. To avoid these situations, we can split the large transaction into several smaller ones and execute "dump transaction" between them.
The update in the example above can be split into two or more smaller transactions, for example:
update tab_a set col_a = 0 where col_b > x
go
dump transaction database_name with truncate_only
go
update tab_a set col_a = 0 where col_b <= x
go
dump transaction database_name with truncate_only
go
In this way, one big transaction is split into two smaller ones.
Using the same approach, a transaction can be split into as many pieces as needed. If the transaction log needs to be backed up to media, do not use the "with truncate_only" option; and once "dump transaction with truncate_only" has been executed, a "dump database" should be performed before log backups are resumed. Large deletes and large inserts can be split in the same way, as shown in the sketch below.
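As an illustration (not from the original article), a large delete can be broken into batches with set rowcount, dumping the log between batches; tab_a, col_b, x, and the batch size are placeholders like those in the update example above:

set rowcount 10000                       -- each delete removes at most 10,000 rows
while exists (select 1 from tab_a where col_b < x)
begin
    delete tab_a where col_b < x
    dump transaction database_name with truncate_only
end
go
set rowcount 0                           -- restore normal row-count behavior
go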