Some considerations about SQL Server

xiaoxiao2021-03-06  44

If you are responsible for a SQL Server-based project, or you have only recently started working with SQL Server, you will probably have to face database performance problems sooner or later. This article offers some useful guidance (most of which applies to other DBMSs as well).

I am not going to introduce SQL Server tricks here, nor can I offer a cure for every disease. What I can do is summarize some experience about good design. These lessons come from years of work, during which I have watched the same design mistakes being repeated again and again.

Do you know your tool?

Do not underestimate this point; it is the most critical one in this article. You may have noticed it yourself: there are many SQL Server programmers who have never mastered all the T-SQL commands and the useful tools SQL Server provides.

"What? I should waste a month learning SQL commands I will never use?" you may say. No, you do not need to do that. But you should spend a weekend browsing through all the T-SQL commands. Your task here is only to understand what exists, so that later, when you design a query, you will remember: "Right, there is a command that does exactly what I need," and you can then look up its exact syntax on MSDN.

Don't use a cursor

Let me repeat: do not use cursors. If you want to destroy the performance of the entire system, they are your most effective choice. Most beginners use cursors without realizing their impact on performance: they occupy memory, they lock tables, and on top of that they are as slow as snails. Worst of all, they can defeat every performance optimization your DBA is capable of making. Did you know that every FETCH is equivalent to executing a SELECT? That means if your cursor walks over 100,000 records, it executes 100,000 SELECTs! Completing the same work with a set-based SELECT, UPDATE, or DELETE is far more efficient.

Beginners generally regard cursors as a more familiar, more comfortable way of programming, but unfortunately this leads to terrible performance. Clearly, the whole point of SQL is to express what you want to achieve, not how to achieve it.

I once rewrote a cursor-based stored procedure in set-based T-SQL. The table had only 100,000 records; the original stored procedure took 40 minutes to run, while the new one took 10 seconds. Here, I think, you can clearly see what an incompetent programmer is doing.
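The kind of rewrite described above can be sketched as follows (the table and column names are invented for illustration):

```sql
-- Cursor version: one FETCH (i.e., one row retrieval) per record.
DECLARE @Id INT, @Price MONEY;
DECLARE cur CURSOR FOR SELECT Id, Price FROM Products;
OPEN cur;
FETCH NEXT FROM cur INTO @Id, @Price;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Products SET Price = @Price * 1.1 WHERE Id = @Id;
    FETCH NEXT FROM cur INTO @Id, @Price;
END
CLOSE cur;
DEALLOCATE cur;

-- Set-based version: one statement, one pass over the table.
UPDATE Products SET Price = Price * 1.1;
```

The set-based statement lets the engine choose the access path and touch each row once, instead of paying the per-row FETCH overhead.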

Sometimes it is more effective to write a small client program that fetches the data, processes it, and writes the updates back to the database. Remember: T-SQL is not built for row-by-row loops.

Let me remind you once more: there is no benefit in using cursors. Apart from DBA maintenance work, I have never seen any job that a cursor accomplished effectively.

Normalize your tables

Why not normalize the database? There are roughly two excuses: performance considerations, and pure laziness. As for the second, you will pay for it sooner or later. As for performance, you do not need to optimize something that is not slow. I often see programmers "denormalize" a database on the grounds that "the original design is too slow", yet the result is often that they make the system even slower. DBMSs are designed to handle normalized databases, so remember: design the database according to the rules of normalization.

Don't use SELECT *

This is not easy to do, I know only too well, because I often do it myself. However, if you name the columns you need in the SELECT, you gain the following benefits:

1. Reduced memory and network bandwidth consumption

2. A safer design

3. The optimizer gets the chance to read all required columns from an index
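As a small sketch of the third point (the table and index names are hypothetical): if an index covers every column the query names, the engine can answer it from the index alone, which `SELECT *` makes impossible.

```sql
-- Instead of:
SELECT * FROM Customers WHERE CustomerName LIKE 'A%';

-- name only the columns you need:
SELECT CustomerId, CustomerName
FROM Customers
WHERE CustomerName LIKE 'A%';

-- If an index such as this one exists, the second query can be
-- satisfied entirely from the index, without touching the table:
CREATE INDEX IX_Customers_Name ON Customers (CustomerName, CustomerId);
```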

Know what you are going to do with your tables

Creating a robust set of indexes for your database is a merit, but doing it well is an art. Every index you add to a table makes SELECTs faster but makes INSERTs and DELETEs markedly slower, because maintaining the index is a lot of extra work. Clearly, the key question is: what kinds of operations are you going to perform against this table? The question is not easy to answer, especially where DELETE and UPDATE are involved, because those statements often contain a SELECT-like lookup in their WHERE clause.
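A minimal sketch of the trade-off (the table and columns are invented; the ordering follows the selectivity rule discussed in the next section):

```sql
-- A composite index, most selective column first: CustomerName has
-- many distinct values, Province fewer. Gender is deliberately left
-- out, since splitting a table into two halves buys almost nothing.
CREATE INDEX IX_Customers_Name_Province
    ON Customers (CustomerName, Province);

-- Queries filtering on CustomerName now get faster, but every
-- INSERT, UPDATE, and DELETE on Customers must also maintain
-- this index, so writes get correspondingly slower.
```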

Don't create an index on the "Gender" column

First, understand how an index speeds up access to a table. You can think of an index as a way of partitioning the table according to some criterion. If you create an index on a column like "Gender", you merely divide the table into two parts: male and female. If you are dealing with a table of 1,000,000 records, what good does that do? Remember: maintaining an index is time-consuming. When you design indexes, follow this rule: order the columns by how many distinct values they may contain, from many to few, for example: Name, Province, Gender.

Use transactions

Please use transactions, especially when a query is time-consuming. If something goes wrong in the system, this will save your life. Experienced programmers all know it: you often run into unpredictable situations that cause a stored procedure to crash.
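A minimal sketch (the table is hypothetical; TRY...CATCH requires SQL Server 2005 or later — on older versions you would check @@ERROR after each statement instead):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Either both updates succeed, or neither does.
    UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Anything unexpected: undo the partial work.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;
```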

Beware of deadlocks

Access your tables in a consistent order. If you lock table A first and then table B, then lock them in that same order in every stored procedure. If one stored procedure (inadvertently) locks table B first and then table A, this may cause a deadlock. Deadlocks are hard to discover if the lock order has not been carefully designed in advance.
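A sketch of the rule (procedure and table names are invented): as long as every procedure that touches both tables locks them in the same order, the circular wait behind a deadlock cannot arise.

```sql
-- Every procedure that writes both tables does so in the
-- agreed order: TableA first, then TableB.
CREATE PROCEDURE TransferStock
AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE TableA SET Qty = Qty - 1 WHERE Id = 1;  -- locks A first
    UPDATE TableB SET Qty = Qty + 1 WHERE Id = 1;  -- then B
    COMMIT TRANSACTION;
END
-- A second procedure that updated TableB first and TableA second
-- could deadlock against this one under concurrent load.
```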

Don't open big data sets

On the 9CBS technology forum :), a frequently asked question is: how can I quickly add 100,000 records to a ComboBox? This is wrong, and you cannot do it. Quite simply, if your users have to browse 100,000 records to find the one they need, they will certainly curse you. What you need here is a better UI: display no more than 100 or 200 records to your users.
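One hedged sketch of the idea (table, columns, and the @SearchText parameter are invented): let the server narrow the set down before anything reaches the UI.

```sql
-- Return at most 200 matches for what the user has typed so far,
-- instead of shipping the whole table to the client.
SELECT TOP 200 CustomerId, CustomerName
FROM Customers
WHERE CustomerName LIKE @SearchText + '%'
ORDER BY CustomerName;
```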

Do not use server-side cursors

Compared with server-side cursors, client-side cursors reduce the system overhead on the server and the network, and they also shorten lock times.

Use parameter query

Sometimes I see a question like this on the 9CBS technology forum: "My query SELECT * FROM a WHERE a.id = 'A'B' fails because of the single quote in the value — what should I do?" And the popular answer is: replace the single quote with two single quotes. This is wrong. It is no real rule, because you will run into the same problem with other characters, not to mention the serious security bugs it invites; besides, it prevents SQL Server's plan cache from doing its job. Use parameterized queries and, at a stroke, none of these problems exist.
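A minimal sketch using sp_executesql (in application code you would bind the parameter through your data-access API instead; the doubled quote below is only needed because this demo embeds the value in a T-SQL literal):

```sql
-- The value A'B travels as data, not as part of the SQL text, so
-- quoting is no longer the caller's problem, injection is ruled out,
-- and the plan cache can reuse one plan for every value.
EXEC sp_executesql
    N'SELECT * FROM a WHERE a.id = @id',
    N'@id NVARCHAR(50)',
    @id = N'A''B';
```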

Use large data volume when program encoding

The test databases programmers use during development generally hold little data, while end users' data volumes are often very large. We usually pay no attention to this, and the reason is simple: hard disks are cheap nowadays. But why should performance problems only be noticed when they are already beyond repair?

Do not use INSERT to import a large number of data

Don't do this unless it is necessary. Use DTS or BCP instead, so that you get both flexibility and speed.
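Staying in T-SQL, BULK INSERT offers the same kind of fast path as BCP (the table and the file path below are invented for illustration):

```sql
-- Loads a flat file in one bulk operation, far faster than issuing
-- 100,000 individual INSERT statements over a connection.
BULK INSERT dbo.Customers
FROM 'C:\data\customers.txt'
WITH (
    FIELDTERMINATOR = ',',   -- column separator in the file
    ROWTERMINATOR   = '\n'   -- row separator in the file
);
```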

Pay attention to timeout problems

When querying the database, the default timeout is generally fairly small, such as 15 or 30 seconds, while some queries run longer than that, especially as the database's data volume keeps growing.

Don't ignore the issue of simultaneously modifying the same record

Sometimes two users modify the same record at the same time, so that the later writer overwrites the earlier writer's changes and some updates are lost. Handling this situation is not difficult: create a timestamp field and check it before writing; if it still matches, apply the modification; if there is a conflict, prompt the user.
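One possible sketch of that check, using a rowversion/timestamp column (table and column names are invented):

```sql
-- The rowversion column changes automatically on every update, so
-- the WHERE clause only matches if nobody else has modified the row
-- since we read it.
UPDATE Orders
SET    Status = @NewStatus
WHERE  OrderId = @OrderId
  AND  RowVer  = @RowVerReadEarlier;

IF @@ROWCOUNT = 0
    RAISERROR('Record was changed by another user.', 16, 1);
```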

When inserting a record into the detail table, don't run SELECT MAX(ID) on the master table

This is a common error, and it fails when two users insert at nearly the same time. You can use SCOPE_IDENTITY, IDENT_CURRENT, or @@IDENTITY instead. If possible, avoid @@IDENTITY, because in the presence of triggers it can cause problems (see the discussion here).
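A sketch of the safe pattern (the tables are hypothetical):

```sql
INSERT INTO Orders (CustomerId) VALUES (@CustomerId);

-- SCOPE_IDENTITY() returns the identity value generated in the
-- current scope, so concurrent inserts by other users (or inserts
-- performed by triggers) cannot hand us someone else's id, the way
-- SELECT MAX(ID) or @@IDENTITY can.
DECLARE @NewOrderId INT;
SET @NewOrderId = SCOPE_IDENTITY();

INSERT INTO OrderDetails (OrderId, ProductId)
VALUES (@NewOrderId, @ProductId);
```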

Avoid making columns NULLABLE

If possible, avoid making columns NULLABLE. The system reserves extra storage for each NULLABLE column in every row, which brings more overhead to queries. Besides, NULLABLE columns make the code more complex, because every access to them must be checked for NULL.

I am not saying NULLs are the root of all trouble, although some people think so. If "empty data" is allowed by your business rules, then NULLABLE columns can sometimes play a useful role. But if you use NULLABLE in a case like the following, you are simply asking for trouble:

CustomerName1
CustomerAddress1
CustomerEmail1
CustomerName2
CustomerAddress2
CustomerEmail2
CustomerName3
CustomerAddress3
CustomerEmail3

If this happens, you need to normalize your table.
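One way the normalization could look (table and column definitions are invented for illustration): move the repeating group into its own table instead of stretching one row with numbered, mostly-NULL columns.

```sql
-- Instead of CustomerName1..3, CustomerAddress1..3, ... in one wide
-- row, each contact becomes its own row in a child table.
CREATE TABLE CustomerContacts (
    ContactId       INT IDENTITY PRIMARY KEY,
    CustomerId      INT NOT NULL REFERENCES Customers (CustomerId),
    ContactName     VARCHAR(100) NOT NULL,
    ContactAddress  VARCHAR(200) NOT NULL,
    ContactEmail    VARCHAR(100) NOT NULL
);
```

A customer with one contact stores one row; a customer with ten stores ten, and no column ever needs to be NULLABLE just to represent "no third contact".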

Try not to use the TEXT data type

Unless you are using TEXT to handle really large data, don't use it. It is awkward to query, it is slow, and used carelessly it wastes a lot of space. Generally, a VARCHAR column handles your data better.

Try not to use a temporary table

Avoid temporary tables unless you have to use them. Subqueries can usually take the place of a temporary table. Temporary tables bring system overhead, and if you are programming with COM they will give you a lot of trouble, because COM uses database connection pooling while temporary tables exist from creation until they are dropped. SQL Server provides alternatives, such as the table data type.
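A sketch of the table-data-type alternative (the tables and columns are invented):

```sql
-- A table variable is scoped to the current batch or procedure and
-- is cleaned up automatically when that scope ends, so it avoids the
-- lifetime surprises that session-scoped #temp tables can cause
-- under connection pooling.
DECLARE @Recent TABLE (
    OrderId   INT,
    OrderDate DATETIME
);

INSERT INTO @Recent (OrderId, OrderDate)
SELECT OrderId, OrderDate
FROM   Orders
WHERE  OrderDate >= DATEADD(DAY, -7, GETDATE());

SELECT OrderId, OrderDate FROM @Recent;
```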

Learn to analyze queries

The SQL Server Query Analyzer is your good partner; with it you can understand how queries and indexes affect performance.

Use reference integrity

Define primary keys, uniqueness constraints, and foreign keys. This can save you a lot of time. (End)
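A minimal sketch showing all three kinds of constraint in one place (the tables are hypothetical):

```sql
CREATE TABLE Customers (
    CustomerId INT IDENTITY PRIMARY KEY,          -- primary key
    Email      VARCHAR(100) NOT NULL UNIQUE       -- uniqueness constraint
);

CREATE TABLE Orders (
    OrderId    INT IDENTITY PRIMARY KEY,
    CustomerId INT NOT NULL
        REFERENCES Customers (CustomerId)         -- foreign key
);
```

With these in place, the engine itself rejects duplicate emails and orphaned orders, instead of your application code having to catch them after the fact.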

When reposting, please cite the original source: https://www.9cbs.com/read-112620.html
