Sensible use of a clustered index can greatly improve query speed. When I took over the company's repair-tracking system last year, I found that its query and statistics feature could only report on about one week of data; anything longer and the connection timed out. After I raised the SqlCommand timeout to 10 minutes the queries did complete, but a month's worth of data took one to two minutes to return, which was hard for users to endure. After talking with the users, I learned that they always select the repair date as a query condition.

Examining the database, I found the query joins about 10 tables into one result set. The main table holds roughly 700,000 rows, and the indexes on it were FID (the primary key, non-auto-incrementing) and the repair date (a DATETIME column). I first dropped the repair-date index, then added a new column, FIndexDate, to the main table. It is of type VARCHAR and holds the repair date accurate to the day. To reduce risk I did not modify the application; instead I created FOR UPDATE and FOR INSERT triggers on the table that copy the year-month-day portion of the repair date into this field. Finally I made FIndexDate the clustered index.

Query speed then improved a great deal. A query returning about 10,000 records, which previously took more than a minute, now completes within 5 seconds, mostly in 2 to 3 seconds; a query returning 36,627 records takes 8 seconds.

There is one drawback: if you page the results with ASP.NET's DataGrid built-in paging, it becomes very slow, and you need to write the paging yourself. Since our users simply run the query and export the results to Excel for analysis, this was not a problem for us. Another issue is that the index may need to be rebuilt periodically once the data has grown a lot. In my tests, whether the clustered index was created in ascending or descending order made no difference to query speed.
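The steps above can be sketched in T-SQL roughly as follows. Note this is only a sketch under assumptions: the table name (tRepair), the date column name (FRepairDate), and the index and trigger names are all invented for illustration, since the post does not give the real schema; only FID and FIndexDate appear in the original.

```sql
-- Assumed main table: tRepair(FID uniqueidentifier PRIMARY KEY, FRepairDate datetime, ...)

-- 1. Drop the old nonclustered index on the repair date (index name assumed).
DROP INDEX tRepair.IX_tRepair_FRepairDate;
GO

-- 2. Add a VARCHAR column holding the repair date truncated to the day,
--    then backfill existing rows. CONVERT style 120 yields 'yyyy-mm-dd'.
ALTER TABLE tRepair ADD FIndexDate varchar(10) NULL;
GO
UPDATE tRepair SET FIndexDate = CONVERT(varchar(10), FRepairDate, 120);
GO

-- 3. Triggers keep FIndexDate in sync without touching the application.
CREATE TRIGGER trg_tRepair_Insert ON tRepair FOR INSERT
AS
  UPDATE r
  SET    r.FIndexDate = CONVERT(varchar(10), i.FRepairDate, 120)
  FROM   tRepair r JOIN inserted i ON r.FID = i.FID;
GO

CREATE TRIGGER trg_tRepair_Update ON tRepair FOR UPDATE
AS
  IF UPDATE(FRepairDate)
    UPDATE r
    SET    r.FIndexDate = CONVERT(varchar(10), i.FRepairDate, 120)
    FROM   tRepair r JOIN inserted i ON r.FID = i.FID;
GO

-- 4. Make the new column the clustered index. If the primary key is
--    currently the clustered index, it must first be dropped and
--    re-created as nonclustered.
CREATE CLUSTERED INDEX IX_tRepair_FIndexDate ON tRepair (FIndexDate);
GO
```

With this in place, a date-range query can seek directly on the clustered index, for example `WHERE FIndexDate BETWEEN '2006-04-01' AND '2006-04-30'`, so rows for the same day sit physically together on disk.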