We recently wrapped up a project, and while organizing the project documentation I went back over the notes we made on performance optimization. I am publishing my own views here in the hope of discussing them with you; everyone is welcome to join in, and if there are mistakes, please point them out. (This article does not cover the use or study of deeper mechanisms such as caching; it is only a collection of surface-level practices and suggestions.)
Optimizations related to data access

1. Choosing between SqlDataReader and DataSet
SqlDataReader. Advantage: reading data is very fast. If the returned data does not need heavy processing, SqlDataReader is recommended; it performs much better than DataSet. Disadvantage: the database connection cannot be released until all of the data has been read. (SqlDataReader reads quickly because the class provides a forward-only, read-only stream of rows from a SQL Server database, transferred in SQL Server's native network data format straight from the database. A DataReader must be closed explicitly and promptly so that the underlying connection is released in time.)
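Here is a minimal sketch of that reader pattern, assuming a hypothetical Customers table with two columns (the connection string, table, and column names are placeholders, not from the original article):

```csharp
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void PrintCustomers(string connectionString)
    {
        // The reader holds the connection open until it is closed,
        // so both are wrapped in using blocks for timely release.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only, read-only: rows are consumed as they stream in.
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                }
            }
        } // connection released here
    }
}
```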
DataSet reads the data and then caches it in memory. Disadvantage: higher memory usage. Advantage: only one connection to the database is needed, so if you have to do a lot of processing on the returned data, a DataSet can cut down on database connection operations. Reading a large amount of data without heavy post-processing favors SqlDataReader; heavy processing of the returned data favors DataSet. In the end, the choice between SqlDataReader and DataSet depends on what the program needs to do.
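For contrast, a minimal DataSet sketch under the same assumptions (a hypothetical Orders table; all names are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

class DataSetDemo
{
    static DataSet LoadOrders(string connectionString)
    {
        DataSet ds = new DataSet();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT OrderID, OrderDate FROM Orders", conn))
        {
            // Fill opens and closes the connection by itself; after this call
            // the rows live in memory and can be filtered, sorted, and bound
            // repeatedly without touching the database again.
            adapter.Fill(ds, "Orders");
        }
        return ds;
    }
}
```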
2. ExecuteNonQuery and ExecuteScalar
When an update does not need to return a result set, ExecuteNonQuery is recommended. Because no result set is sent back, network data transmission is saved; only the number of affected rows is returned. If all you need to do is update data, ExecuteNonQuery carries relatively little overhead.
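A minimal update sketch, assuming a hypothetical Users table with UserID and IsActive columns (placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

class UpdateDemo
{
    static int DeactivateUser(string connectionString, int userId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE Users SET IsActive = 0 WHERE UserID = @id", conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = userId;
            conn.Open();
            // No result set travels over the network;
            // only the count of affected rows comes back.
            return cmd.ExecuteNonQuery();
        }
    }
}
```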
ExecuteScalar returns only the first column of the first row of the result set. Use the ExecuteScalar method to retrieve a single value (for example, an ID number) from the database. This takes less code than producing the same single value with the ExecuteReader method.
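A minimal single-value sketch, again with placeholder table and column names:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ScalarDemo
{
    static int GetOrderCount(string connectionString, int customerId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerID = @id", conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = customerId;
            conn.Open();
            // Only the first column of the first row is returned,
            // so no reader loop is needed for a single value.
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}
```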
* For plain data updates, use ExecuteNonQuery; for single-value queries, use ExecuteScalar.
Choosing a data-binding approach

3. Binding with DataBinder
The general binding expression is <%# DataBinder.Eval(Container.DataItem, "FieldName") %>. Binding with DataBinder.Eval means you do not have to care what the data source is (DataReader or DataSet), nor about the data type: Eval converts the data object to a string for you. A lot of work happens underneath this binding, and it relies on reflection. That is exactly why it is convenient, but it also costs binding performance.

Look again at <%# DataBinder.Eval(Container.DataItem, "FieldName") %>. When a DataSet is bound, DataItem is actually a DataRowView (if a DataReader is bound, it is an IDataRecord). Casting directly to DataRowView therefore brings a large performance improvement: <%# CType(Container.DataItem, DataRowView).Row("FieldName") %>.

* For data binding, use <%# CType(Container.DataItem, DataRowView).Row("FieldName") %>. With large amounts of data this can be several times faster.

Pay attention to two things: 1. You need to add <%@ Import Namespace="System.Data" %> to the page. 2. Watch the casing of the field name (pay special attention): if it does not match the query exactly, in some cases this ends up even slower than <%# DataBinder.Eval(Container.DataItem, "FieldName") %>. To squeeze out more speed you can index the column by position, <%# CType(Container.DataItem, DataRowView).Row(0) %>, though readability suffers.

The above is the VB.NET form. In C# it is: <%# ((DataRowView)Container.DataItem)["FieldName"] %>. A minimal page sketch of this faster binding is included at the end of this post.

The easiest way to watch each step of page execution is to turn on the page's Trace attribute and inspect the details, as shown in the figure. Next up: stored procedures, query statements, page optimization, control selection, and optimization of server controls.
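As promised above, here is a minimal .aspx sketch of the DataRowView cast, assuming a Repeater bound to a DataTable that has a CompanyName column (the control ID and column name are placeholders, not from the original article; surrounding page boilerplate is omitted):

```aspx
<%@ Page Language="C#" Trace="true" %>
<%@ Import Namespace="System.Data" %>

<asp:Repeater ID="rptCustomers" runat="server">
    <ItemTemplate>
        <%-- Cast DataItem straight to DataRowView instead of DataBinder.Eval;
             the field name must match the query's casing exactly. --%>
        <%# ((DataRowView)Container.DataItem)["CompanyName"] %><br />
    </ItemTemplate>
</asp:Repeater>
```

When the code-behind assigns a DataTable (or a DataSet table) as the DataSource and calls DataBind(), Container.DataItem is a DataRowView, which is what the cast relies on; with a DataReader as the source you would cast to IDataRecord instead.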