Foreword
Performance tuning aims to minimize network traffic, disk I/O, and CPU time so that the throughput of all users is maximized while each query still gets an acceptable response time. Achieving this goal requires a thorough analysis of the application's requirements, a deep understanding of the logical and physical structure of the data, and a weighing of the performance trade-offs between competing demands on the database.
Application system design
When designing the application system, focus on the following points:
One. Use indexes sensibly
Indexes are an important data structure in a database; their fundamental purpose is to improve query efficiency. Indexes should be used in just the right measure, according to the following principles:
● Create indexes on columns that are frequently joined but are not specified as foreign keys; columns that are seldom joined can be left to the optimizer to index automatically.
● Create indexes on columns that are frequently sorted or grouped (i.e., used in GROUP BY or ORDER BY operations).
● Create indexes on columns that are often used in conditional expressions and have many distinct values; do not index columns with few distinct values. For example, the "sex" column of an employee table has only the two distinct values "male" and "female", so there is no need to index it; such an index would not improve query efficiency and would seriously slow down updates.
● If several columns are often sorted together, a composite index can be created on them.
Example:
Table RECORD has 620,000 rows. The following SQL statements were tested under different index configurations:
1. A nonclustered index on Date
SELECT count(*) FROM record WHERE date > '19991201' AND date < '19991214' AND amount > 2000 (25 seconds)
SELECT date, SUM(amount) FROM record GROUP BY date (55 seconds)
SELECT count(*) FROM record WHERE date > '19990901' AND place IN ('BJ', 'SH') (27 seconds)
Analysis:
The Date column contains many duplicate values. With a nonclustered index, the data is stored in random physical order on the data pages, so a range search must perform a table scan to find all rows within the range.
2. A clustered index on Date
SELECT count(*) FROM record WHERE date > '19991201' AND date < '19991214' AND amount > 2000 (14 seconds)
SELECT date, SUM(amount) FROM record GROUP BY date (28 seconds)
SELECT count(*) FROM record WHERE date > '19990901' AND place IN ('BJ', 'SH') (14 seconds)
Analysis:
With a clustered index, the data is stored on the data pages in physical order and duplicate values are grouped together. A range search can locate the start and end of the range and scan only the data pages within it, avoiding a large-scale scan and improving query speed.
3. A composite index on Place, Date, Amount
SELECT count(*) FROM record WHERE date > '19991201' AND date < '19991214' AND amount > 2000 (26 seconds)
SELECT date, SUM(amount) FROM record GROUP BY date (27 seconds)
SELECT count(*) FROM record WHERE date > '19990901' AND place IN ('BJ', 'SH') (< 1 second)
Analysis:
This is a poorly chosen composite index because Place is its leading column. The first and second SQL statements do not reference Place, so they cannot use the index; the third statement does use Place, and all the columns it references are contained in the composite index, forming index coverage, so it runs very fast.
4. A composite index on Date, Place, Amount
SELECT count(*) FROM record WHERE date > '19991201' AND date < '19991214' AND amount > 2000 (< 1 second)
SELECT date, SUM(amount) FROM record GROUP BY date (11 seconds)
SELECT count(*) FROM record WHERE date > '19990901' AND place IN ('BJ', 'SH') (< 1 second)
Analysis:
This is a well-designed composite index. It uses Date as the leading column, so every SQL statement can use the index, and the first and third statements also achieve index coverage; performance is therefore optimal.
5. Summary:
The index created by default is a nonclustered index, but it is not always the best choice; sound index design must be based on analysis and prediction of the various queries. In general:
1. Columns with many duplicate values that are frequently used in range queries (BETWEEN, >, <, >=, <=) or in ORDER BY or GROUP BY can be candidates for a clustered index;
2. When multiple columns are frequently accessed together and each column contains duplicate values, consider a composite index;
3. A composite index should, wherever possible, form index coverage for the key queries, and its leading column must be the most frequently used column (see the sketch below).
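As a rough sketch of how the alternatives in this summary translate into DDL (names are illustrative, reusing the RECORD table from the example; these are options to weigh, not a prescription):
-- clustered index on a column with many duplicates and frequent range queries (point 1)
CREATE CLUSTERED INDEX idx_record_date ON record (date)
-- composite index led by the most frequently referenced column, giving index coverage (points 2 and 3)
CREATE INDEX idx_record_date_place_amount ON record (date, place, amount)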
Two. Avoid or simplify sorting
Repeated sorting of large tables should be simplified or avoided. When output can be produced in the proper order by using an index, the optimizer skips the sorting step. The following factors prevent this:
● The index does not include one or more of the columns to be sorted;
● The order of the columns in the GROUP BY or ORDER BY clause differs from the order of the index columns;
● Sort columns come from different tables.
To avoid unnecessary sorting, build indexes correctly and consolidate database tables sensibly (even though this may sometimes affect table normalization, it is worth it for the sake of efficiency). If sorting is unavoidable, try to simplify it, for example by narrowing the range of columns being sorted.
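A minimal sketch, assuming a hypothetical orders table: when the index key order matches the ORDER BY, the optimizer can return rows in index order and skip the sort step.
CREATE INDEX idx_orders_cust_date ON orders (customer_num, order_date)
-- the ORDER BY columns appear in the same order as the index keys, so no separate sort is needed
SELECT customer_num, order_date
FROM orders
WHERE customer_num = 104
ORDER BY customer_num, order_date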
Three. Eliminate sequential access to large amounts of table data
In nested queries, sequential access to a table can have a fatal impact on query efficiency. For example, with a sequential access strategy, a query nested three levels deep that scans 1,000 rows at each level reads one billion rows in total. The main way to avoid this is to index the join columns. For example, for two tables, Student (student_id, name, age, ...) and Enrollment (student_id, course_id, grade), joining them requires an index on the "student_id" join field. Unions can also be used to avoid sequential access. Even when indexes exist on all the columns being checked, some forms of WHERE clause force the optimizer to use sequential access. The following query forces a sequential scan of the ORDERS table:
SELECT * FROM orders WHERE (customer_num = 104 AND order_num > 1001) OR order_num = 1008
Although indexes exist on customer_num and order_num, the optimizer still uses a sequential access path to scan the whole table for the statement above, because it retrieves a set of disjoint rows. The statement should therefore be rewritten as:
SELECT * FROM orders WHERE customer_num = 104 AND order_num > 1001
UNION
SELECT * FROM orders WHERE order_num = 1008
This will use the index path to process the query.
Four. Avoid correlated subqueries
If a column appears both in the main query and in the WHERE clause of a subquery, it is likely that the subquery must be re-executed each time the column's value changes in the main query. The deeper the nesting, the lower the efficiency, so subqueries should be avoided where possible. If a subquery is unavoidable, filter out as many rows as possible inside it.
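A hedged sketch using hypothetical cust and orders tables: the correlated form re-evaluates the subquery for every outer row, while the equivalent join lets the optimizer pick a better plan.
-- correlated subquery: re-executed for each row of cust
SELECT c.name
FROM cust c
WHERE EXISTS (SELECT 1 FROM orders o
              WHERE o.customer_num = c.customer_num AND o.order_amount > 2000)
-- equivalent join, evaluated in a single pass
SELECT DISTINCT c.name
FROM cust c
JOIN orders o ON o.customer_num = c.customer_num
WHERE o.order_amount > 2000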
Five. Avoid difficult regular expressions
The MATCHES and LIKE keywords support wildcard matching, technically called regular expressions, but this kind of matching is particularly time-consuming. For example: SELECT * FROM customer WHERE zipcode LIKE "98____"
Even if an index exists on the zipcode field, sequential scanning is still used in this case. If the statement is changed to SELECT * FROM customer WHERE zipcode > "98000", the index is used when the query is executed, which obviously improves the speed greatly.
In addition, avoid non-leading substrings. For example, the statement SELECT * FROM customer WHERE zipcode[2,3] > "80" uses a non-leading substring in the WHERE clause, so it cannot use an index.
Six. Use temporary tables to accelerate queries
Sorting a subset of a table and creating a temporary table can sometimes speed up a query. It helps avoid multiple sorting operations and simplifies the optimizer's work in other ways as well.
For example:
SELECT cust.name, rcvbles.balance, ... other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id AND rcvbles.balance > 0
AND cust.postcode > "98000"
ORDER BY cust.name
If this query is to be executed more than once, all the customers with unpaid balances can be found once, placed in a temporary file, and sorted by customer name:
SELECT cust.name, rcvbles.balance, ... other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id AND rcvbles.balance > 0
ORDER BY cust.name INTO TEMP cust_with_balance
Then query the temporary table as follows:
SELECT * FROM cust_with_balance WHERE postcode > "98000"
The temporary table has fewer rows than the main table and is physically stored in the desired order, which reduces disk I/O, so the query workload can be greatly reduced.
Note: the temporary table does not reflect modifications made to the main table after it is created. When data in the main table is modified frequently, be careful not to lose data.
Temporary tables: temporary tables in tempdb cause a large number of I/O operations and disk accesses, and they consume a lot of resources.
Inline views: use an inline view instead of a temporary table where possible. An inline view is simply a query that can be joined in the FROM clause. If you only need to join the data to another query, try an inline view to save resources.
Prefer local temporary tables over global temporary tables, and drop a temporary table promptly once you are done with it to avoid unnecessary memory overhead.
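A hedged sketch of the two alternatives just described, reusing the cust/rcvbles example in SQL Server syntax (column names are illustrative):
-- local temporary table (#): visible only to this session, dropped as soon as it is no longer needed
SELECT c.name, r.balance, c.postcode
INTO #cust_with_balance
FROM cust c JOIN rcvbles r ON r.customer_id = c.customer_id
WHERE r.balance > 0
SELECT * FROM #cust_with_balance WHERE postcode > '98000'
DROP TABLE #cust_with_balance
-- inline (derived) view: the same join folded into the FROM clause, no temporary object at all
SELECT t.name, t.balance
FROM (SELECT c.name, r.balance, c.postcode
      FROM cust c JOIN rcvbles r ON r.customer_id = c.customer_id
      WHERE r.balance > 0) AS t
WHERE t.postcode > '98000'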
Seven. Use sorting to replace non-sequential access
Non-sequential disk access is the slowest operation, showing up as back-and-forth movement of the disk access arm. SQL hides this, making it easy to write queries that access a large number of non-sequential pages. Sometimes, using the database's sorting capability instead of non-sequential access can improve the query.
Eight. Use sufficient join conditions
LEFT JOINs consume a lot of resources because they include data matched against NULLs. In some cases this is unavoidable, but the cost can be very high. A LEFT JOIN consumes more resources than an INNER JOIN, so if a query can be rewritten so that it does not use any LEFT JOIN, the payoff can be considerable.
One technique for speeding up queries that use a LEFT JOIN involves creating a TABLE data type, inserting all the rows from the first table (the one on the left side of the LEFT JOIN), and then updating the TABLE variable with values from the second table. It is a two-step process, but it can save a lot of time compared with a standard LEFT JOIN. A good rule is to try several different techniques and record the time each one takes until you find the one that performs best for your application.
DECLARE @tblMonths TABLE (sMonth VARCHAR(7))
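Building on that declaration pattern, a hedged sketch of the two-step technique (the table variable, columns, and source tables are all illustrative):
DECLARE @result TABLE (customer_id INT, name VARCHAR(50), balance MONEY NULL)
-- step 1: load every row from the "left" table of the would-be LEFT JOIN
INSERT INTO @result (customer_id, name)
SELECT customer_id, name FROM cust
-- step 2: update the table variable from the "right" table; unmatched rows simply keep NULL
UPDATE r SET balance = b.balance
FROM @result r JOIN rcvbles b ON b.customer_id = r.customer_id
SELECT * FROM @result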
Example: table Card has 7,896 rows with a nonclustered index on card_no, and table Account has 191,122 rows with a nonclustered index on account_no. Compare how the following two SQL statements execute under different join conditions:
SELECT SUM(a.amount) FROM account a, card b WHERE a.card_no = b.card_no (20 seconds)
Change the SQL to:
SELECT SUM(a.amount) FROM account a, card b WHERE a.card_no = b.card_no AND a.account_no = b.account_no (< 1 second)
Analysis:
Under the first join condition, the best query plan uses Account as the outer table and Card as the inner table, exploiting the index on Card. The number of I/Os can be estimated as: 22,541 pages of the outer table Account + (191,122 rows of the outer table Account * 3 pages to be searched in the inner table Card for each outer row) = 595,907 I/Os.
Under the second join condition, the best query plan uses Card as the outer table and Account as the inner table, exploiting the index on Account. The number of I/Os can be estimated as: 1,944 pages of the outer table Card + (7,896 rows of the outer table Card * 4 pages to be searched in the inner table Account for each outer row) = 33,528 I/Os.
It can be seen that the truly optimal plan is executed only when the join conditions are complete.
Summary:
1. Before a multi-table operation, the optimizer lists several possible join plans based on the join conditions and selects the one with the lowest system overhead. The join conditions should take into account which tables have indexes and how many rows each table has; the choice of inner and outer tables can be determined by the formula: number of matching rows in the outer table * number of searches per row in the inner table, and the plan with the smallest product is best.
2. To view the execution plan, use SET SHOWPLAN ON to turn the showplan option on and see the join order and which indexes are used; for more detailed information, execute DBCC (3604, 310, 302) under the sa role.
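A minimal sketch of viewing a plan, assuming SQL Server 2000, where the option is exposed as SET SHOWPLAN_TEXT:
SET SHOWPLAN_TEXT ON
GO
-- the plan (join order and index usage) is returned instead of the query results
SELECT SUM(a.amount) FROM account a, card b
WHERE a.card_no = b.card_no AND a.account_no = b.account_no
GO
SET SHOWPLAN_TEXT OFF
GO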
Nine. Stored procedures
Normally, the SQL scripts we send to SQL Server 2000 must be compiled by the server each time before they are executed. A stored procedure does not need to be recompiled every time, so it can run faster (a sketch follows the principles below).
Principles for creating stored procedures:
Use stored procedures for frequently executed SQL statements.
In stored procedures, try to use the return parameters that SQL provides rather than custom return parameters.
Reduce unnecessary parameters to avoid data redundancy.
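A minimal sketch of a stored procedure along these lines (the procedure name and parameter are illustrative, reusing the record table from the earlier example):
CREATE PROCEDURE usp_daily_amount @date CHAR(8) AS
BEGIN
    SET NOCOUNT ON
    -- compiled once; later executions reuse the cached plan
    SELECT SUM(amount) FROM record WHERE date = @date
END
GO
EXEC usp_daily_amount '19991201'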
Ten. Cursors
Use a system cursor only when traversing a recordset row by row is truly the only option, and be sure to close and deallocate the cursor object promptly after use to release the resources it holds. Better still, avoid cursors altogether: they consume a lot of system resources, and under heavy concurrency they can easily exhaust the system.
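A hedged sketch showing the cleanup this calls for (the card table and the processing step are placeholders):
DECLARE @card_no VARCHAR(20)
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR SELECT card_no FROM card
OPEN cur
FETCH NEXT FROM cur INTO @card_no
WHILE @@FETCH_STATUS = 0
BEGIN
    -- row-by-row processing would go here
    FETCH NEXT FROM cur INTO @card_no
END
CLOSE cur        -- release the rows and locks held by the cursor
DEALLOCATE cur   -- release the cursor's own resources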
Eleven. Transaction processing
In many cases a stored procedure must operate on several tables at once, and we need to prevent the data inconsistency that an unexpected failure during the operation could cause. The operations on the multiple tables should therefore be placed in a transaction. Note, however, that a RETURN statement must not be used to exit from inside the transaction: this causes a transaction error and cannot guarantee data consistency. Also, once multiple operations are placed in a transaction, the system's processing speed drops, so frequently executed work that can be divided should be split across several stored procedures; this greatly improves the system's response time, provided data consistency is not violated.
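A minimal sketch of wrapping multi-table work in one transaction inside a stored procedure, exiting only after a rollback rather than returning from inside the open transaction (all names are illustrative):
CREATE PROCEDURE usp_move_amount @from_no VARCHAR(20), @to_no VARCHAR(20), @amount MONEY AS
BEGIN
    SET NOCOUNT ON
    BEGIN TRANSACTION
    UPDATE account SET amount = amount - @amount WHERE account_no = @from_no
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION   -- undo first, then leave
        RETURN
    END
    UPDATE account SET amount = amount + @amount WHERE account_no = @to_no
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END
    COMMIT TRANSACTION
END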
Hardware policy
The performance of the operating system directly affects the performance of the database. If problems such as CPU overload, excessive memory swapping, or disk I/O bottlenecks exist, tuning inside the database alone will not improve system performance. We can monitor the various devices with the Windows NT System Monitor and find the performance bottlenecks. Pay particular attention to the following:
One. CPU
A common CPU performance problem is a lack of processing power. The processing power of the system is determined by the number, type, and speed of its CPUs. If the system does not have enough CPU power, it cannot process transactions quickly enough to meet demand. Use the System Monitor to determine CPU utilization; if it runs at 75% or higher for long periods, you may have hit a CPU bottleneck and should consider upgrading the CPU. Before upgrading, however, monitor the other parts of the system: if the real cause is an inefficient SQL statement, optimizing the statement may bring CPU utilization down. Only when it is clear that more processing power is needed should you add a CPU or replace the existing one with a faster CPU.
Two. RAM
The amount of memory available to SQL Server is one of the most critical factors in its performance, and the relationship between memory and the I/O subsystem is also very important. For example, on a system with frequent I/O, the more memory SQL Server has for caching data, the fewer physical I/Os it must perform, because data is read from the data cache instead of from disk. Conversely, a lack of memory causes obvious disk read/write bottlenecks, because an insufficient system cache leads to more physical disk I/O. Use the System Monitor to check SQL Server's Buffer Cache Hit Ratio counter; if the hit rate is often below 90%, more memory should be added.
Three. I/O subsystem
Bottlenecks in the I/O subsystem are the most common problem a database system encounters, and a very poor I/O subsystem causes performance problems second in severity only to poorly written SQL statements. The problem arises because each disk drive can perform only a limited number of I/O operations; generally an ordinary disk drive can handle only about 85 I/O operations per second. If a drive is overloaded, I/O operations against it are queued, and SQL's I/O latency becomes very long. This can cause locks to be held longer, or leave threads idle while they wait for resources, and the result is that the whole system's performance suffers. Problems related to the I/O subsystem may be the easiest to solve: in most cases, adding disk drives resolves them.
System parameter setting
One. Memory management
● Server memory options
Use the two server memory options, min server memory and max server memory, to reconfigure the amount of memory (in megabytes) in the buffer pool used by an instance of SQL Server.
By default, SQL Server changes its memory requirements dynamically based on available system resources. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory that can be specified for max server memory is 4 MB.
When SQL Server uses memory dynamically, it periodically queries the system for the amount of free physical memory. SQL Server grows or shrinks the buffer cache according to server activity to keep free physical memory between 4 MB and 10 MB, which prevents Windows 2000 from paging. If less memory is free, SQL Server releases memory to Windows 2000, which usually puts it on the free list; if more memory is free, SQL Server commits it to the buffer cache. SQL Server grows the buffer cache only when its workload needs more memory; a server at rest does not grow its buffer cache.
Letting SQL Server use memory dynamically is the recommended configuration; however, the memory options can be set manually, overriding SQL Server's ability to use memory dynamically. Before setting the amount of memory SQL Server uses, determine the appropriate setting by subtracting from the total physical memory the memory required by Windows 2000 and by any other instances of SQL Server (and by other uses of the computer, if it is not dedicated to SQL Server). The remainder is the maximum amount of memory that can be assigned to SQL Server.
● Setting the memory options manually
There are two main ways to set the SQL Server memory options manually:
The first method sets min server memory and max server memory to the same value. This value corresponds to a fixed amount of memory assigned to SQL Server.
The second method sets min server memory and max server memory to a range. This is useful when the system or database administrator wants to configure the SQL Server instance to accommodate the memory requirements of other applications running on the same computer.
min server memory guarantees a minimum amount of memory for the SQL Server instance. The amount specified by min server memory is not allocated immediately when SQL Server starts; however, once memory usage has grown past that value, SQL Server will not release memory from the allocated buffer pool unless the min server memory value is reduced.
max server memory prevents SQL Server from using more than the specified amount of memory, leaving the remaining memory available so that other applications can start quickly. The memory specified by max server memory is not allocated immediately when SQL Server starts; memory use grows with SQL Server's needs until it reaches the value specified by max server memory, and SQL Server cannot exceed that amount unless the max server memory value is raised.
There can be a short delay between another application starting and SQL Server releasing memory; using max server memory avoids this delay and can improve that application's performance. Set min server memory when new applications sharing the computer with SQL Server show problems at startup; it is still preferable, though, to let SQL Server use all available memory.
If the memory options are set manually, be sure to set appropriate values for servers used in replication. If the server is a remote distributor or a combined publisher/distributor, at least 16 MB of memory must be assigned to it.
Ideally, allocate as much memory to SQL Server as possible without causing the system to page to disk. The right value varies greatly from system to system; for example, on a 32 MB system, allocating 16 MB may be appropriate, while on a 64 MB system, 48 MB may be appropriate.
The specified amount of memory must satisfy SQL Server's static memory needs (kernel overhead, open objects, locks, and so on) as well as the data cache (also called the buffer cache).
If necessary, use the statistics in System Monitor (Performance Monitor in Windows NT 4.0) to help adjust the memory value. Change these values only when memory is added or removed, or when how the system is used changes.
● Virtual Memory Manager
Windows 2000 provides a 4 GB virtual address space, of which the lower 2 GB is private to each process and available to applications, while the upper 2 GB is reserved for the system. Windows NT Server Enterprise Edition provides each Microsoft Win32® application with a 4 GB virtual address space in which the lower 3 GB is per-process and available to the application, and the upper 1 GB is reserved for the system. The 4 GB address space is mapped onto the available physical memory by the Windows NT Virtual Memory Manager (VMM); depending on the hardware platform, the available physical memory can be as much as 4 GB.
Win32 applications such as SQL Server see only virtual (or "logical") addresses, not physical addresses. How much physical memory an application uses at a given time is determined by the available physical memory and by the VMM; the application cannot control physical memory directly.
Virtual address systems such as Windows 2000 allow physical memory to be over-committed, so that the ratio of virtual to physical memory exceeds 1:1. As a result, larger programs can run on computers with a variety of physical memory configurations. However, using considerably more virtual memory than the combined average working sets of the applications may result in poor performance.
SQL Server can lock memory as its working set. Because the memory is locked, out-of-memory errors may occur when other applications run; if such errors appear, too much memory has probably been allocated to SQL Server. The set working set size option (set through sp_configure or SQL Server Enterprise Manager) controls whether memory is locked as the working set; by default, set working set size is disabled.
Manually configuring SQL Server with more virtual memory than there is physical memory results in poor performance. Moreover, the memory requirements of the Windows 2000 operating system must be taken into account (roughly 12 MB, varying somewhat with application overhead). As SQL Server's configuration parameters are raised, system overhead may also grow, because Windows 2000 needs more resident memory to support additional threads, page tables, and so on. Allowing SQL Server to use memory dynamically helps avoid memory-related performance problems.
min server memory and max server memory are advanced options. To change them with the sp_configure system stored procedure, first set show advanced options to 1. The new values take effect immediately (there is no need to stop and restart the server).
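A hedged sketch of the manual configuration described above, with purely illustrative values (a 128 MB minimum and a 256 MB maximum):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory', 128
EXEC sp_configure 'max server memory', 256
RECONFIGURE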
Two. Thread activity
In Windows 2000, an active thread can migrate between processors, and each migration reloads the processor cache. Under heavy system load, making specific threads run on a specific processor can improve performance by reducing the number of processor cache reloads. This association between a processor and a thread is called processor affinity.
Use the affinity mask option to improve performance on symmetric multiprocessor (SMP) systems (with more than four processors) operating under heavy load. Threads can be associated with particular processors, specifying which processors Microsoft SQL Server will use, and SQL Server activity can be kept off processors that the Windows 2000 operating system has assigned to specific workloads.
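A hedged sketch: the affinity mask is a bit mask with one bit per CPU, so binding SQL Server to the first two processors (an illustrative choice) looks like this:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- 3 = binary 00000011: run SQL Server threads on CPU 0 and CPU 1 only
EXEC sp_configure 'affinity mask', 3
RECONFIGURE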
Three. Cursors
Use the cursor threshold option to specify the number of rows in a cursor set above which the cursor keysets are generated asynchronously. If cursor threshold is set to -1, all keysets are generated synchronously, which benefits small cursor sets. If cursor threshold is set to 0, all cursor keysets are generated asynchronously. With any other value, the query optimizer compares the expected number of rows in the cursor set with the threshold and generates the keyset asynchronously if the former exceeds the latter. Do not set cursor threshold too low, because small result sets are better built synchronously. When SQL Server generates a keyset for a result set, the query optimizer estimates the number of rows that will be returned; if the estimate is greater than the threshold, the cursor is generated asynchronously and rows can be fetched while the cursor continues to fill, otherwise the cursor is generated synchronously and the query waits until all rows have been returned.
How accurately the query optimizer estimates the number of rows in a keyset depends on how current the statistics are for each table used by the cursor.
Four. Locks
Use the locks option to set the maximum number of available locks, thereby limiting the amount of memory SQL Server uses for locking. The default setting is 0, which lets SQL Server allocate and deallocate locks dynamically based on system requirements.
When locks is set to 0, the lock manager allocates 2% of the memory available to SQL Server as an initial pool of lock structures. As the lock pool is exhausted, additional locks are allocated, but the dynamic lock pool cannot take more than 40% of the memory allocated to SQL Server.
Generally, if more memory is required for locks than is currently available, and more server memory is available (the max server memory ceiling has not been reached), SQL Server allocates memory dynamically to satisfy the demand. However, if that allocation would cause paging at the operating-system level (for example, because another application running on the same computer is using memory), no additional lock space is allocated.
It is recommended to let SQL Server manage locks dynamically. However, locks can be set to override SQL Server's ability to allocate lock resources dynamically. If SQL Server reports that no more locks are available, increase the value of this option. Because each lock consumes memory (96 bytes per lock), increasing this value raises the server's overall memory requirement.
locks is an advanced option. To change it with the sp_configure system stored procedure, first set show advanced options to 1. The new value takes effect after the server is stopped and restarted.
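A minimal sketch of overriding dynamic lock allocation (the value is illustrative; as noted above, it takes effect only after a restart):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- fix the lock pool at 20,000 locks instead of letting SQL Server size it dynamically
EXEC sp_configure 'locks', 20000
RECONFIGURE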
Five. Indexes
Use the fill factor option to specify how full Microsoft® SQL Server™ should make each page when it creates a new index using existing data. Because SQL Server must spend time splitting pages as they fill up, the fill factor percentage affects system performance.
The fill factor percentage is used only when the index is created; the pages are not maintained at any particular level of fullness afterwards.
The default value of fill factor is 0, and valid values range from 0 to 100. A fill factor of 0 does not mean that pages are 0% full. It resembles a fill factor of 100 in that SQL Server creates clustered indexes with full data pages and nonclustered indexes with full leaf pages, but unlike 100, it leaves some free space in the upper levels of the index tree. There is rarely a reason to change the default value, because it can be overridden per index with the CREATE INDEX command. Smaller fill factor values cause SQL Server to create new indexes with pages that are not full; for example, a fill factor of 10 can be a reasonable choice when creating an index on a table that currently holds only a small fraction of the data it will eventually contain. Smaller fill factor values also make each index occupy more storage space, but they allow later inserts without page splits.
If fill factor is set to 100, SQL Server creates both clustered and nonclustered indexes with every page 100% full. Setting fill factor to 100 is suitable only for read-only tables, to which data is never added.
fill factor is an advanced option. To change it with the sp_configure system stored procedure, first set show advanced options to 1. A new fill factor value takes effect after the server is stopped and restarted.
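A hedged sketch of both ways of setting it, with illustrative values; the per-index form with CREATE INDEX is usually preferable, as noted above:
-- server-wide default (advanced option; effective after a restart)
EXEC sp_configure 'fill factor', 90
RECONFIGURE
-- per-index override: leave 10% free space on each leaf page for future inserts
CREATE INDEX idx_record_place ON record (place) WITH FILLFACTOR = 90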
Six. Data queries
When there is not enough memory to run a query, memory-intensive queries (such as those involving sorting and hashing) wait for it. The query times out after a period of time calculated by SQL Server (25 times the estimated query cost) or set by the non-negative value of the query wait option.
The query wait option sets how long, in seconds (0 to 2147483647), a query waits for the resources it needs. If the default value of -1 is used, or -1 is specified, the timeout is calculated as 25 times the estimated query cost.
The query governor cost limit option specifies an upper limit on how long a query may run. Query cost refers to the estimated elapsed time, in seconds, required to execute the query on a specific hardware configuration.
If a non-zero, non-negative value is specified, the query governor disallows execution of any query whose estimated cost exceeds that value. Specifying 0 (the default) turns the query governor off, and all queries are allowed to run.
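A minimal sketch, with 300 as an illustrative cost limit:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- disallow any query whose estimated cost exceeds 300; 0 would turn the governor off
EXEC sp_configure 'query governor cost limit', 300
RECONFIGURE
-- the same limit can also be set for a single connection
SET QUERY_GOVERNOR_COST_LIMIT 300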
Seven. Maximize throughput
The SQL Server Setup program automatically configures Windows 2000 to maximize throughput for network applications, which allows the server to accept more connections. Although maximizing throughput for network applications is the recommended setting, it can be changed.
If the full-text search feature is installed, the Windows 2000 setting must remain maximize throughput for network applications and cannot be changed.
Eight. Configure server task scheduling
If you intend to connect to SQL Server from a local client (a client running on the same computer), you can improve processing time by setting the server to run foreground and background applications at the same priority. SQL Server runs as a background application, so this gives it the same priority as the applications running in the foreground.
Optimize backup and restore performance
SQL Server provides several ways to increase the speed of backup and restore operations (a sketch follows this list):
● Use multiple backup devices so that the backup is written to all devices in parallel. Likewise, a backup can be restored from multiple devices in parallel. Backup device speed is one potential bottleneck in backup throughput; using multiple devices increases throughput in proportion to the number of devices used.
● Combine full database backups, differential database backups, and transaction log backups to minimize the time needed to recover. Differential database backups reduce the amount of transaction log that must be applied to recover the database, and creating one is usually faster than creating a full database backup.
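A hedged sketch of both techniques (the database name and file paths are illustrative):
-- full backup striped across three devices, written in parallel
BACKUP DATABASE Sales
TO DISK = 'D:\backup\sales_1.bak',
   DISK = 'E:\backup\sales_2.bak',
   DISK = 'F:\backup\sales_3.bak'
-- later, a differential backup captures only the pages changed since the full backup
BACKUP DATABASE Sales
TO DISK = 'D:\backup\sales_diff.bak'
WITH DIFFERENTIAL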
Summary
In fact, SQL performance optimization is a complex process. The points above cover only the application level; a deeper study would also involve resource allocation at the database layer, flow control at the network layer, and the overall design of the operating-system layer. It is therefore difficult to find a universal optimization scheme; tuning can only be done against the specific behavior observed while the system is developed and maintained.
I hope that after reading these tips you will find them more or less helpful, and that the experience summarized above helps you consciously avoid some detours when using SQL Server.