First, return multiple result sets
Examine your database access code to see whether any request makes multiple round trips to the database. Reducing those round trips lowers the number of requests per second your database server must handle. By returning multiple result sets in a single database request, you cut the time spent communicating with the database, make your system more scalable, and reduce the work the database server does to answer requests.
If you are using dynamic SQL statements to return multiple result sets, I suggest you replace the dynamic SQL with stored procedures. Whether business logic should be written into stored procedures is somewhat disputed, but I think that when logic in a stored procedure can limit the size of the returned result set, reduce network traffic, and avoid filtering data in the logic layer, it is a good thing.
Return strongly typed business objects by using the ExecuteReader method of the SqlCommand object, then call the NextResult method to move the reader to the next result set. Example 1 in the original article demonstrates returning multiple strongly typed objects in ArrayLists. Returning only the data you actually need from the database greatly reduces the memory consumed by your server.
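A minimal sketch of that pattern follows (the stored procedure name "GetOrdersAndCustomers" and the column names are illustrative assumptions, not the article's original Example 1):

using System.Collections;
using System.Data;
using System.Data.SqlClient;

public static class MultipleResultSets
{
    // Fills two lists from a single database round trip.
    public static void Load(string connectionString, ArrayList orders, ArrayList customers)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("GetOrdersAndCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure; // a stored procedure, not dynamic SQL

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())                // first result set: orders
                    orders.Add(reader["OrderID"]);

                reader.NextResult();                 // advance to the second result set

                while (reader.Read())                // second result set: customers
                    customers.Add(reader["CustomerID"]);
            }
        }
    }
}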
Second, page the data
ASP.NET's DataGrid has a very useful feature: paging. When paging is enabled, the DataGrid displays only one page of data at a time and renders a navigation bar that lets the user choose which page to browse.
But it has a small shortcoming: you must bind all of the data to the DataGrid. That is, your data layer must return all of the data, and the DataGrid then filters out what the current page needs according to the current page number. If a collection of 10,000 records is paged through the DataGrid and only 25 rows are displayed per page, 9,975 records are thrown away on each request. Returning such a large data set on every request has an enormous impact on application performance.
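On the control side, the DataGrid's custom-paging mode avoids binding everything; a minimal sketch, assuming one page of rows and the total record count are supplied by the data layer (as in the stored procedure approach below):

using System.Data;
using System.Web.UI.WebControls;

public class OrdersPage
{
    protected DataGrid OrdersGrid = new DataGrid();

    // Binds exactly one page of rows instead of the whole table.
    public void BindPage(DataTable pageOfRows, int totalRecords, int pageIndex)
    {
        OrdersGrid.AllowPaging = true;
        OrdersGrid.AllowCustomPaging = true;        // we hand the grid only the current page
        OrdersGrid.PageSize = 25;
        OrdersGrid.VirtualItemCount = totalRecords; // lets the pager compute the real page count
        OrdersGrid.CurrentPageIndex = pageIndex;
        OrdersGrid.DataSource = pageOfRows;
        OrdersGrid.DataBind();
    }
}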
A better solution is to write a paging stored procedure; Example 2 in the original article is such a procedure for the Northwind database's Orders table. You only need to pass in the current page number and the number of entries to display per page, and the stored procedure returns the corresponding results.
On the server side, I wrote a paging control specifically to handle the paging of the data. Here I used the first method, returning two result sets from one stored procedure: the total number of data records, and the requested page of results.
The total number of records returned depends on the query being executed; a WHERE clause, for example, can limit the size of the result set. Because the total number of pages must be calculated from the size of the result set, the record count has to be returned. For example, if there are 100,000 records in total and a WHERE clause filters that down to 1,000, the paging logic in the stored procedure needs that count to know how much data there is to display.
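A minimal sketch of the calling side, assuming a stored procedure (here called "GetOrdersPaged", standing in for the original Example 2) that returns the total count as its first result set and the page of rows as its second:

using System.Data;
using System.Data.SqlClient;

public static class PagedData
{
    public static DataTable GetPage(string connectionString, int pageIndex, int pageSize,
                                    out int totalRecords)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("GetOrdersPaged", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
            cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                totalRecords = reader.GetInt32(0);  // first result set: total record count

                reader.NextResult();                // second result set: one page of rows
                DataTable page = new DataTable();
                page.Load(reader);
                return page;
            }
        }
    }
}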
Third, connection pooling
Establishing a TCP connection between your application and the database is expensive (and very time-consuming), so Microsoft's developers let you reuse database connections through the connection pool. Rather than setting up a new TCP connection for every request, a new connection is created only when the pool holds no usable one. When a connection is closed, it is placed back in the pool while remaining connected to the database, which reduces the number of TCP connections made to the database.
Of course, you have to watch out for connections you forget to close; close every connection immediately after you are done with it. And I want to emphasize this: no matter what anyone tells you, the GC (garbage collector) in the .NET Framework will not call the connection object's Close or Dispose method for you after you have finished using it. Don't expect the CLR to close the connection when you imagine it will; it will eventually destroy the object and close the connection, but you cannot determine when.
To optimize use of the connection pool, there are two rules. First, open the connection, process the data, and close the connection. If you would otherwise have to open and close the connection several times in one request, it is better to open it once and pass it to each method. Second, use the same connection string (or the same user ID, when you use integrated authentication). If you don't use the same connection string, for example a per-login connection string, the connection pool cannot be used effectively; and if you use integrated authentication with many different users, you likewise can't take full advantage of pooling. The .NET CLR provides data performance counters that are very useful when you need to track performance characteristics, including those of the connection pool. Whenever your application connects to a resource on another machine, such as a database, you should pay close attention to the time spent connecting, the time spent sending and receiving data, and the number of round trips. Optimizing every one of these processing points in your application is the starting point for improving its performance.
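A minimal sketch of the "open, use, close" pattern with one shared connection string (the string itself is an illustrative placeholder):

using System.Data.SqlClient;

public static class OrderCount
{
    // One constant connection string, so every call draws from the same pool.
    private const string ConnectionString =
        "server=(local);database=Northwind;Integrated Security=SSPI";

    public static int Get()
    {
        // 'using' guarantees Close/Dispose runs even if an exception is thrown,
        // returning the connection to the pool immediately.
        using (SqlConnection conn = new SqlConnection(ConnectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}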
The application layer contains the logic that connects to the data layer, moves data into the appropriate class instances, and performs the business processing. For example, in Community Server, this is where you assemble a Forums or Threads collection and apply business logic such as authorization; more importantly, this is where the caching logic is done.
Fourth, the ASP.NET Cache API
Before writing application code, the first thing you should do is design the application to make the most of ASP.NET's caching features.
If your component runs inside an ASP.NET application, you just need to add a reference to System.Web.dll to your project. Then use the HttpRuntime.Cache property to access the Cache (it can also be reached via Page.Cache or HttpContext.Cache).
There are several rules for caching data. First, data that may be used frequently can be cached. Second, data whose access frequency is very high, or whose access frequency is not high but whose lifetime is very long, is best cached. The third is an often-ignored problem: sometimes we cache too much data. Typically, on an x86 machine, if a process caches more than about 800 MB you will hit an out-of-memory error, so the cache is limited. In other words, you should estimate the size of the cached data set and keep it within bounds, or it may overflow. In ASP.NET, caching too much data, especially large DataSet objects, will produce an out-of-memory error.
There are several important caching mechanisms you have to understand here. The first is that the cache implements a least-recently-used (LRU) algorithm: when memory runs low, it automatically evicts the least-used cache entries. The second is forced eviction by "condition dependency": the condition can be time, a key, or a file, with time being the most common. ASP.NET 2.0 adds a stronger condition, the database dependency: when the data in the database changes, the cached item is forcibly evicted. For more on database dependencies, see Dino Esposito's 2004 Cutting Edge columns in MSDN Magazine. (The original article includes a diagram of the ASP.NET cache architecture at this point.)
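A minimal sketch of the Cache API, with a time condition (absolute expiration) and a file condition (CacheDependency); the key name, file name, and timeout are illustrative assumptions:

using System;
using System.Web;
using System.Web.Caching;

public static class SiteSettings
{
    public static string GetWelcomeText()
    {
        string text = (string)HttpRuntime.Cache["WelcomeText"];
        if (text == null)
        {
            text = LoadWelcomeTextFromDatabase(); // expensive call, done only on a cache miss

            HttpRuntime.Cache.Insert(
                "WelcomeText",
                text,
                new CacheDependency(HttpRuntime.AppDomainAppPath + "site.config"), // file condition
                DateTime.Now.AddMinutes(10),      // time condition: absolute expiration
                Cache.NoSlidingExpiration);
        }
        return text;
    }

    private static string LoadWelcomeTextFromDatabase()
    {
        return "Welcome!"; // placeholder for a real database call
    }
}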
Fifth, per-request caching
Earlier in the article I mentioned that even small performance improvements, made in places that are hit frequently, can add up to large overall gains; per-request caching is one of my favorite techniques for this.
Whereas the Cache API is designed to cache data for a long period, or until some condition is met, per-request caching caches data only for the duration of a single request. If a piece of data is accessed frequently within one request, but only needs to be fetched, applied, modified, or updated once, it is a candidate for per-request caching. An example: in the Forums application in CS, the server controls on each page require the personalization data used to determine the skin, the style sheet, and other personalized settings. Some of that data can be cached for a long time, but some of it, such as the skin data, is fetched once per request and then reused many times during that request's execution.
To implement per-request caching, use ASP.NET's HttpContext class. An instance of HttpContext is created for each request and can be accessed anywhere during that request through the HttpContext.Current property. The HttpContext class has an Items collection property; objects added to this collection are cached for the duration of the request. Just as you cache frequently accessed data in the Cache, you can use HttpContext.Items to cache basic data that every part of a request uses. The logic behind it is simple: we add data to HttpContext.Items, and later read it back out.
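A minimal sketch, where the "SiteSettings" key and lookup are illustrative assumptions:

using System.Web;

public static class PerRequestCache
{
    public static object GetSettings()
    {
        HttpContext context = HttpContext.Current;

        object settings = context.Items["SiteSettings"];
        if (settings == null)
        {
            settings = LoadSettings();                // expensive fetch, at most once per request
            context.Items["SiteSettings"] = settings; // lives only until this request ends
        }
        return settings;
    }

    private static object LoadSettings()
    {
        return new object(); // placeholder for a real personalization lookup
    }
}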
Sixth, background processing
With the methods above, your application should be running fast, right? But at some point, some request in the program may execute a very time-consuming task, such as sending an email or checking the correctness of submitted data.
When we integrated ASP.NET Forums 1.0 into CS, we found that submitting a new post was very slow. Each time a post is added, the application first has to check the post, then run it through the "badword" filter, check the image codes, index the post, add it to the appropriate queues, validate its attachments, and finally send email to its subscribers' mailboxes. Clearly, that is a lot of work.
It turns out that most of the time is spent on indexing and sending email. Indexing a post is a very time-consuming operation, and sending email through the SMTP service means one message per subscriber, so as the number of subscribers grows, sending the messages takes longer and longer.
Indexing and email don't need to be triggered on every request. Ideally, we wanted to batch these operations, indexing 25 posts at a time, or sending all the new mail every 5 minutes. We decided to reuse the code that had been used to prototype database cache invalidation, work that eventually made it into VS.NET 2005.
We found the Timer class in the System.Threading namespace. It is very useful, but few people know of it, and even fewer web developers. Once you create an instance of this class, the Timer invokes the specified callback, on a thread drawn from the thread pool, once every specified interval. This means your ASP.NET application can run code even when no requests are coming in. That is background processing: indexing and email can be done in the background rather than on each request.
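A minimal sketch (the 5-minute interval mirrors the example above; the work methods are illustrative placeholders):

using System;
using System.Threading;

public static class BackgroundWork
{
    // Keep a static reference so the timer is not garbage-collected.
    private static Timer timer;

    // Call once at application startup, e.g. from Application_Start.
    public static void Start()
    {
        timer = new Timer(
            OnTick,                   // callback, run on a thread-pool thread
            null,                     // no state object
            TimeSpan.FromMinutes(5),  // first run after 5 minutes
            TimeSpan.FromMinutes(5)); // then every 5 minutes
    }

    private static void OnTick(object state)
    {
        IndexNewPosts();    // placeholder for the indexing batch
        SendQueuedEmails(); // placeholder for the email batch
    }

    private static void IndexNewPosts() { }
    private static void SendQueuedEmails() { }
}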
There are two problems with this background-processing technique. The first is that when your application domain unloads, the Timer instance stops running; that is, the callback method is no longer invoked. Also, because the CLR limits how many threads each process may have, a heavily loaded server may leave the Timer no thread to run on, or may run its callbacks only with delays. The ASP.NET layer should use this technique sparingly, to keep the number of threads in the process down and leave most of them for requests; of course, if you have a lot of asynchronous work, it may be your only option. There is not enough space to go further here; you can download the Blackbelt TechEd 2004 sample program from http://www.rob-howard.net/.
Seventh, page output caching and proxy servers
ASP.NET is your presentation layer (or should be); it contains pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that outputs HTML, XML, images, or other data, and the code generates the same output content for each request, you should consider using the page output cache.
You need only copy one line of code into your page:
<%@ OutputCache VaryByParam="none" Duration="60" %>
This line effectively caches the page content generated on the first request, and the page is regenerated after 60 seconds. This technique is itself implemented with the low-level Cache API. Several parameters can be configured for the page output cache, such as the VaryByParam parameter just shown, which controls when separate cache entries are created: you can have output cached per parameter of an HTTP GET or HTTP POST request. For example, setting VaryByParam="Report" caches the output of Default.aspx?Report=1 and of Default.aspx?Report=2 separately. The parameter value can also name several parameters, separated by semicolons.
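That variant looks like this (using the hypothetical Report parameter from the example above):

<%@ OutputCache Duration="60" VaryByParam="Report" %>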
Many people don't realize that when the page output cache is used, ASP.NET also generates HTTP headers that downstream caching servers understand; these headers can be used by products such as Microsoft Internet Security and Acceleration Server to speed up responses. When the HTTP cache headers are set, the requested content can be cached on these network resources, so when a client requests the content again, it is served from the cache rather than from the origin server.
Using page output caching does not make generating a page any faster, but it reduces the number of times the cached page content must be generated by the server. Of course, it is limited to pages accessible to anonymous users, because once a page is cached, no authorization operation can be performed on it.
Eighth, use IIS 6.0 kernel caching
If your application isn't running on IIS 6.0 (Windows Server 2003), you are missing out on some great ways to improve application performance. In the seventh method, I described improving performance with the page output cache. In IIS 5.0, when a request arrives, IIS forwards it to ASP.NET; when page output caching is in effect, the HttpHandler in ASP.NET receives the request, fetches the content from the cache, and returns it.
If you are using IIS 6.0, it has a wonderful feature called kernel caching, and you don't have to modify a line of code in your ASP.NET program. When ASP.NET marks a request's output as cacheable, IIS's kernel cache receives a copy of it. When a subsequent request arrives from the network, the kernel layer receives it first; if the content is in the kernel cache, the cached data is returned directly, and the request is finished. This means that when you cache page output with IIS kernel caching, you get unbelievable performance gains. During the development of ASP.NET in VS.NET 2005, I was the program manager dedicated to ASP.NET performance, and my developers used this method. I went over all the daily report data and found that results using kernel-mode caching were always the fastest. A common characteristic was a large volume of network requests and responses while IIS occupied only 5% of the CPU. That is amazing. There are many reasons to use IIS 6.0, but kernel caching is the best one.

Ninth, compress data with GZIP
Unless your CPU usage is already too high, it is worth using this server-performance technique. Compressing data with GZIP reduces the amount of data you send from the server, which speeds up page delivery and also reduces network traffic. How well your data compresses depends on what you are sending, and on whether the client's browser supports it (IIS sends the GZIP-compressed data to the client, which must support GZIP to decompress it; IE 6.0 and Firefox both do). With compression, your server can respond to more requests per second: you reduce the amount of data sent per response, so you can send more responses.
The good news is that GZIP compression is built into IIS 6.0, and it performs better than the GZIP in IIS 5.0. Unfortunately, enabling GZIP compression in IIS 6.0 cannot be done from IIS 6.0's properties dialog. The IIS development team built the GZIP compression, but forgot to give administrators a way to switch it on in the administration window. To enable GZIP compression, you can only modify its configuration in IIS 6.0's XML configuration file.
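As a rough, hedged sketch of that edit (my assumption of its shape based on the IIS 6.0 metabase; the article linked below walks through the real steps), you add the aspx extension to the gzip scheme's script-file extension list and turn on dynamic compression in %windir%\system32\inetsrv\MetaBase.xml:

<!-- Illustrative excerpt only; extension lists are newline-separated in the metabase. -->
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcDoDynamicCompression="TRUE"
    HcScriptFileExtensions="asp
        dll
        exe
        aspx" />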
Besides this article, you should also read "IIS6 Compression" (http://www.dotnetdevs.com/articles/iis6compression.aspx), and another article on the basics of ASPX compression, Enable ASPX Compression in IIS. But note that dynamic compression and kernel caching are mutually exclusive in IIS 6.0.
Tenth, server control ViewState
ViewState is a feature of ASP.NET that saves state values for the page and its controls in a hidden form field. When the page is posted back to the server, the server parses, checks, and applies the data in ViewState to restore the page's control tree. ViewState is a very useful feature: it persists state on the client without cookies or server memory. Most server controls use ViewState to persist the state of the elements the user interacts with on the page, for example, to save the current page number of a paged control.
Using ViewState also brings some negative effects. First, it enlarges both the server's response and the postback request, increasing transfer time. Second, time is spent serializing and deserializing the data on every postback. Finally, it consumes more memory on the server.
Many server controls, the well-known DataGrid among them, tend to use ViewState even when it is not needed. ViewState is enabled by default; if you don't want to use it, you can turn it off at the control or page level. On a control, set its EnableViewState property to false; at the page level, the setting extends to the whole page: <%@ Page EnableViewState="false" %>
If a page does not post back, or if the page's controls are repopulated on every request, you should turn ViewState off at the page level.
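A minimal sketch of the control-level switch in code-behind (the page and grid names are illustrative):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ReportPage : Page
{
    protected DataGrid GridResults = new DataGrid();

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // The grid is rebound on every request, so its state need not
        // round-trip in the hidden __VIEWSTATE field.
        GridResults.EnableViewState = false;
    }
}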
Summary
I have only offered a few techniques that I believe help improve the performance of ASP.NET applications. The ones mentioned in this article are just a start; for more, please refer to the book "Improving ASP.NET Performance". Only through your own practice can you find the techniques that help your project most. These techniques can, however, give you some guidance along your development journey; since every project is different, none of them is guaranteed to be useful everywhere.