Paging 1 million records - only tens of milliseconds

xiaoxiao 2021-04-04

System resource usage

Memory: ideal. SQL Server never used more than 65 MB of memory, generally staying around 35 MB; ASP.NET peaked below 40 MB, usually around 25 MB.

CPU: around 8%. Since traffic was light and not concentrated, this number does not prove much. After paging repeatedly in quick succession, I did see the CPU fluctuate, reaching about 50%.

Still, for 1 million records, a paging speed of tens of milliseconds on an AMD Athlon XP 2000+ CPU is acceptable, even ideal.

After all, a server's CPU is much faster than mine, and few tables ever actually reach 1 million records.

I am quite satisfied with the result; the only thing missing is that I would like to see how it holds up under massive concurrent access.

I hope everyone will support this and give it some points. Thanks! Haha.

One caveat: beyond the first N pages, paging slows down, taking about 500 milliseconds.

Now let's discuss the paging tips.

I did not use cursors, temporary tables, NOT IN, or IN. That is not to say those methods are inefficient; I simply have not tested them yet. I only used TOP, and I query two tables.

Feel free to suggest other approaches, and I will test them to see how they perform on 1 million records. (Please do not post stored procedures that build SQL strings; they are too painful to read.)

The premise of this discussion is massive data, say at least 100,000 records. With very little data, any paging method works, and even a bad one is not much worse.

1. Set up a reasonable index first

Setting up a reasonable index seems to be ignored, or at least is rarely talked about. Note that the primary key is itself an index, and the fastest one; if you use the primary key as the sort field, you are already taking advantage of an index. Without a reasonable index, queries become very slow and may even time out. You can run an experiment on this: take a table with an ID field, an AddedDate field and so on, and fill it with 10,000 records. In Query Analyzer, SELECT TOP 10 * FROM Table returns immediately. SELECT TOP 10 * FROM Table ORDER BY ID (ID being the primary key) is just as instant. But SELECT TOP 10 * FROM Table ORDER BY AddedDate (AddedDate having no index) turns out to be very slow. Now add a non-clustered index on AddedDate and run that last query again: it becomes very fast. Such is the magic of indexes! This is the most basic setup for paging through millions of records. Specifically, for my forum's paging I made the two fields BoardID and ReplyDate a joint index, because topics are filtered by board and sorted by ReplyDate. (A sketch of these statements follows below.)

2. Return only the records you need

For massive data, reading all of it into memory is unthinkable (with few records it depends on utilization, but it is usually wasteful anyway). So if a page displays 20 records, read only those 20 - you save both memory and time. Note that although ADO.NET offers SqlDataAdapter.Fill(dataSet1, startRecord, maxRecords, srcTable), it still fetches every record matched by the query from SQL Server and only then picks out the requested slice. To SQL Server it is the same full query, which remains very slow on massive data. The forum's home page simply runs SELECT TOP 20 * FROM Table WHERE BoardID = 5 ORDER BY ReplyDate DESC: only 20 records are read, and with the index's help it is very fast.
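Here is a minimal sketch of that experiment and of the joint index, assuming a test table named TestTable and a forum table named Topic; these names and column types are illustrative assumptions, not the author's actual schema.

    -- The index experiment described above (hypothetical names).
    select top 10 * from TestTable                      -- instant: no sorting needed
    select top 10 * from TestTable order by ID          -- instant: ID is the primary key
    select top 10 * from TestTable order by AddedDate   -- very slow: AddedDate has no index

    create nonclustered index IX_TestTable_AddedDate on TestTable (AddedDate)

    select top 10 * from TestTable order by AddedDate   -- fast again, thanks to the index

    -- The joint index the author describes for the forum table:
    create index IX_Topic_BoardID_ReplyDate on Topic (BoardID, ReplyDate)

    -- The home-page query: reads only 20 rows, served directly by the joint index.
    select top 20 * from Topic where BoardID = 5 order by ReplyDate desc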

3. Minimize field lengths as much as possible

You can create many fields, but the total row length cannot exceed 8,060 bytes; that is, if you create a char(8060) field, you cannot create any other field. In my first test (on Sunday) I put all of a topic's information in one table, including an nvarchar(3600) field for the topic body, and found that copying records became very slow: at 90,000 it was already crawling, and I barely managed to copy up to 350,000. I added an index and tested: paging speed was so-so - the first N pages fast, later pages very slow, and with a filter added, slow everywhere. A look at the data file left me shocked: it occupied 1.4 GB of disk space. No wonder both copying and querying were slow to death. So I modified the table structure, kicked the nvarchar(3600) body field out into a table of its own, and re-copied the records. This time it went very fast, and the record count soon grew from 16 to 1,048,577. Yesterday's test was carried out under these conditions. (The sketch at the end of this article shows such a split.)

4. The paging tip

We have finally reached the paging tip itself - no more waiting, heh. The idea: first locate a marker, then take the TOP N records whose marker is greater than (or less than) that value. What? Not clear? It does not matter, here is an example. Suppose we page in descending ID order, each page shows 10 records, and there are 100 records with IDs 1 to 100 (numbered like this so it is easy to explain). Page 1 shows records 100 to 91, page 2 shows 90 to 81, page 3 shows 80 to 71, and so on. To display page 3, first find the ID value of the 21st row (that is, 80), then use TOP 10 to take the records whose ID is less than or equal to 80. The query statements:

    declare @PageSize int  -- records returned per page
    declare @CurPage int   -- page number (1: first page; 2: second page; ...; -1: last page)
    declare @Count int
    declare @id int

    set @PageSize = 10
    set @CurPage = 1

    if @CurPage = -1
    begin
        -- last page
        set rowcount @PageSize
        select @id = ID from Table order by ID
    end

    -- positioning
    if @CurPage > 0
    begin
        set @Count = @PageSize * (@CurPage - 1) + 1
        set rowcount @Count
        select @id = ID from Table order by ID desc
    end

    -- return the records
    set rowcount @PageSize
    select * from Table where ID <= @id order by ID desc
    set rowcount 0

The "positioning" step, SELECT @id = ID FROM Table ORDER BY ID DESC, uses very little memory, because only a single ID is recorded; then SELECT * FROM Table WHERE ID <= @id ORDER BY ID DESC returns the final records. SET ROWCOUNT @PageSize is equivalent to TOP @PageSize. Advantages: no matter which page you turn to, memory usage is unchanged, and it does not grow when many people access it at once (the many-people part I have not tested :)). Disadvantages: a single table and a single sort field only.
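As a hedged sketch only: for calling from ASP.NET, the snippet above could be wrapped into a parameterized stored procedure. The procedure name GetPage and the table name Topic are assumptions for illustration, not the author's code.

    -- Hypothetical wrapper around the snippet above.
    create procedure GetPage
        @PageSize int,  -- records per page
        @CurPage  int   -- 1-based page number; -1 means the last page
    as
    begin
        declare @id int
        declare @Count int

        if @CurPage = -1
        begin
            -- last page: @id ends up holding the @PageSize-th smallest ID
            set rowcount @PageSize
            select @id = ID from Topic order by ID
        end

        if @CurPage > 0
        begin
            -- positioning: @id ends up holding the ID of the page's first row
            set @Count = @PageSize * (@CurPage - 1) + 1
            set rowcount @Count
            select @id = ID from Topic order by ID desc
        end

        -- return one page of records
        set rowcount @PageSize
        select * from Topic where ID <= @id order by ID desc
        set rowcount 0
    end

EXEC GetPage 10, 3 would then return the third page of 10 records.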

http://community.9cbs.net/expert/topicview3.asp?id=4182510 - I posted this there and got a lot of replies; thank you all for the support. A few things need explaining here. The post does not write out the algorithm first; instead it spends a lot of words on what to watch out for with massive data - establishing a reasonable index, returning only the required records, minimizing field lengths, and so on - points people either notice or ignore. The algorithm comes last, and perhaps my wording was too poor, or the example created a misunderstanding, so here it is again:

    -- positioning (@PageSize * (@CurPage - 1) + 1)
    declare @id int
    select top 41 @id = ID from Table order by ID desc

    -- display the data
    select top 20 * from Table where ID <= @id order by ID desc

This pages in descending ID order (ID being an int field), showing 20 records per page; for page 3, @PageSize * (@CurPage - 1) + 1 = 20 * (3 - 1) + 1 = 41. Precisely because the IDs are not continuous, the first statement is needed to do the positioning; if they were continuous, why would you need the first statement at all?

A small-data example to clear up the "continuity" misunderstanding: there are 10 records whose IDs are 1000, 500, 320, 205, 115, 110, 95, 68, 4, 1 - deliberately not continuous. One page shows two records, and we now want page 3, whose IDs should be 115 and 110. First statement: SELECT TOP 5 @id = ID FROM Table ORDER BY ID DESC (5 = 2 * (3 - 1) + 1). PRINT @id now gives 115. Second statement: SELECT TOP 2 * FROM Table WHERE ID <= 115 ORDER BY ID DESC. The result set is 115, 110 - exactly the records we need. Note: the IDs need not be continuous, and there is no restriction to ID; you can substitute the ReplyDate field, changing DECLARE @id INT accordingly (see the sketch after the points below). The ID here is the primary key, the field that uniquely identifies a record; it is itself an index, and the most efficient one. Some points that came up:

A. How can the value of a field that uniquely identifies a record be changed casually? Wouldn't that make a mess?

B. The primary key is the fastest index - maybe you had not realized this (I did not at first, even after using SQL for quite a while). If your algorithm sorts by it, it will be very fast, much faster than sorting by any other (unindexed) field.

C. If you sort by ReplyDate instead, then with massive data you must build an index on it, or the query will time out.

D. Once the index is built, won't every insert, update, and delete bring catastrophic torture to the database? I thought so at first too, but to page I had to add it - and the facts below removed my worries.
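Here is what that substitution might look like - a minimal sketch only, assuming a Topic table with a BoardID int column, a ReplyDate datetime column, and the joint index from point 1; none of these names are the author's literal code.

    -- Positioning and paging by ReplyDate instead of ID (hypothetical names).
    declare @ReplyDate datetime

    -- positioning: page 3 at 20 records per page -> 20 * (3 - 1) + 1 = 41
    select top 41 @ReplyDate = ReplyDate from Topic
    where BoardID = 5 order by ReplyDate desc

    -- display the data
    select top 20 * from Topic
    where BoardID = 5 and ReplyDate <= @ReplyDate
    order by ReplyDate desc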

How did the 1 million records come about? As you may have noticed, many post titles repeat - they were copied. I first added 16 records, then built the index; note that the index existed before the INSERT INTOs! What followed was a run of INSERT INTO Table (...) SELECT ... FROM Table statements, each doubling the table. The affected row counts were 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288 - and the 1 million records were done. Even the last round took only a minute or two (I forget the exact time; in any case it was fast). The forum also provides a posting feature; new records are simply added with a last-reply time in 2006, so your post will not show on the first page, but you can see that the execution is very fast. So inserting with the index in place is not a problem, and the number of rows the index maintenance touches is nowhere near as large as you might think.

Now for modification. After reading SP1234's reply I added an edit feature, just for testing: you can modify the title, the last reply time, and the board ID. Why these fields? The title is an ordinary field, while the last reply time and the board ID are index fields. Modifying any of them takes very little time - there are [edit] and [delete] links to the right of the last reply time; give them a try. Again, the number of rows affected by a modification is small. Finally, deletion, which I had not mentioned: the forum provides it too, try it. And deleting does not rebuild the index from scratch either, does it?

Now the scope of application. First of all, this is just a method, not a universal stored procedure; it should be adapted to the situation at hand. Best environment: a single table with a single sort field, used together with an index. Note that the sort field does not have to be continuous; fields of int or datetime type work best, while string fields are untried and may perform worse. The table does not even need a primary key, but for massive data a reasonable index is a must. There is a more fatal restriction you may not have spotted: repetition in the sort field. It had best contain no duplicates - more precisely, there must be no duplicate records straddling a page boundary; a duplicate within a page merely squeezes that page's record count. Sorting by a time field makes duplicate values unlikely anyway.

On extensibility, Bingbingcha's reply was very much to the point:

"This kind of technique has been discussed in the SQL forum. The speed is very fast, but it is not satisfying - the practicality is too poor. Most of the paging a company needs nowadays involves multi-table queries; single-table paging does not meet the need. But this stored procedure can be extended: combining it with the thread starter's method via a temporary table is a good choice."

For multi-table joined queries there are two methods. The first is Bingbingcha's - combine the temporary-table approach with this method - and it is the only feasible one with massive data. With small data volumes, though, it is a bit cumbersome and not easy to generalize.
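To make point 3's table split and the doubling trick concrete, here is a minimal sketch. All names (Topic, TopicContent, the columns) are assumptions for illustration; the author's actual forum schema is not given in the article.

    -- Hypothetical split schema: the wide body field lives in its own table.
    create table Topic (
        ID        int identity(1,1) primary key,
        BoardID   int           not null,
        Title     nvarchar(200) not null,
        ReplyDate datetime      not null
    )
    create table TopicContent (
        TopicID int primary key,          -- references Topic(ID)
        Body    nvarchar(3600) not null   -- the field that bloated the single table
    )

    -- The joint index, created BEFORE the mass inserts, as the author stresses.
    create index IX_Topic_BoardID_ReplyDate on Topic (BoardID, ReplyDate)

    -- Seed 16 rows by hand, then double the table repeatedly:
    insert into Topic (BoardID, Title, ReplyDate)
    select BoardID, Title, ReplyDate from Topic
    -- Running the INSERT above 16 times gives affected row counts of
    -- 16, 32, 64, ..., 262144, 524288, ending just past 1 million rows.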

Please credit the original source when reposting: https://www.9cbs.com/read-131775.html
