Original source: ASP.NET: 10 Tips for Writing High-Performance Web Applications
This article discusses: common ASP.NET performance myths, useful ASP.NET performance tips and tricks, suggestions for working with a database from ASP.NET, and caching and background processing in ASP.NET. This article uses the following technologies: ASP.NET, .NET Framework, IIS.
Writing a web application with ASP.NET is unbelievably easy. So easy, in fact, that many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance web applications. My comments are not limited to ASP.NET applications, because they are just one subset of web applications. This article is not the definitive guide for performance-tuning web applications; an entire book could be written on that subject. Instead, think of it as a good place to start.

I used to do a lot of rock climbing. Before any big climb, I would review the routes in the guidebook and read the recommendations and advice left by those who had come before. But no matter how good the guidebook, you need actual rock-climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance web applications when you're faced with fixing performance problems or running a high-throughput site.

My personal experience comes from working as an infrastructure program manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (it combines ASP.NET Forums, .Text, and nGallery into one platform). I'm sure that some of these tips will help you.

You should think about separating your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, so it should be avoided whenever possible. So, whenever you can, run the ASP.NET pages and their associated components together in the same application.

Because of the boundary separation between code and tiers, using web services or remoting can decrease performance by 20 percent or more. The data tier is a bit different, since it is usually better to run the database on dedicated hardware. However, the cost of process hopping to the database is still high, so performance on the data tier is the first place to look when optimizing your code.

Before diving in to fix performance problems in your application, make sure you profile the application to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.

There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but compounded across the total requests per day, the resulting improvement can be enormous.
Performance on the Data Tier
When it comes to performance-tuning an application, there is a simple litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could also be applied to code that uses web services or remoting, but I'm not covering those in this article. If you have a database request required in a particular code path and you see other areas, such as string manipulation, that you want to optimize first, stop and perform the litmus test. Unless you have an egregious performance problem, your time would be better spent trying to optimize the time spent connected to the database, the amount of data returned, and the number of round-trips you make to the database. With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1 - Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll also be making your system more scalable, since you reduce the work the database server does managing requests.

While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reducing the size of the dataset, the time spent on the network, and the filtering the logic tier must do), it's a good thing.

Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating a couple of ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.
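As a minimal, self-contained sketch of the pattern in Figure 1 (the GetSuppliersAndProducts stored procedure and the CompanyName and ProductName columns are hypothetical stand-ins for illustration, not from the original article):

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public static class MultipleResultsets
{
    // Pull suppliers and products in a single database round-trip.
    public static void LoadSuppliersAndProducts(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("GetSuppliersAndProducts", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                ArrayList suppliers = new ArrayList();
                while (reader.Read())          // first resultset: suppliers
                {
                    suppliers.Add(reader["CompanyName"]);
                }

                reader.NextResult();           // advance to the second resultset

                ArrayList products = new ArrayList();
                while (reader.Read())          // second resultset: products
                {
                    products.Add(reader["ProductName"]);
                }
            }
        }
    }
}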
Tip 2 - Paged Data Access
The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, a paging UI is shown at the bottom of the DataGrid for navigating through the records. The paging UI allows you to move backwards and forwards through the displayed data, showing a fixed number of records at a time.

There's one slight wrinkle: paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data, and then the DataGrid will filter out all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer, since more and more data must be sent on each request.

One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.

In Community Server, we wrote several paging server controls to do the data paging. You'll see that I used the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data. The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records must be known in order to calculate the number of pages to display in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause filters this to 1,000 records, the paging logic needs to know the total record count to properly render the paging UI. A sketch of calling the paging procedure from C# follows.
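This is a minimal sketch, assuming the Figure 2 stored procedure exists in a Northwind database reachable through the connectionString parameter; error handling and data binding are omitted, and DataTable.Load requires .NET 2.0 or later:

using System.Data;
using System.Data.SqlClient;

public static class OrderPaging
{
    public static DataTable GetOrdersPage(
        string connectionString, int pageIndex, int pageSize, out int totalRecords)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("northwind_OrdersPaged", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
            command.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // First resultset: the total record count (see Tip 1).
                reader.Read();
                totalRecords = reader.GetInt32(0);

                // Second resultset: only the requested page of orders.
                reader.NextResult();
                DataTable page = new DataTable();
                page.Load(reader);
                return page;
            }
        }
    }
}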
Tip 3 - Connection Pooling

Setting up the TCP connection between your web application and SQL Server can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool, where it remains connected to the database, as opposed to completely tearing down the TCP connection.

Of course, you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to, rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing it based on the logged-in user, you won't get the same optimization value that connection pooling provides. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR Data performance counters can be very useful when attempting to track down any performance issues related to connection pooling. A sketch of the recommended open-late, close-early pattern follows.
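This is a minimal sketch of that pattern (the connection string and query are illustrative assumptions); the using blocks guarantee Dispose is called even if an exception is thrown:

using System.Data.SqlClient;

public static class OrderCounter
{
    public static int CountOrders()
    {
        // The same literal connection string on every call, so every
        // call draws from the same connection pool.
        const string connectionString =
            "server=(local);database=Northwind;Integrated Security=SSPI";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();                     // open as late as possible...
            return (int)command.ExecuteScalar();
        }                                          // ...and close as early as possible
    }
}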
Whenever your application connects to a resource running in another process, such as a database, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.

The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. In Community Server, for example, this is where you populate a Forums or Threads collection and apply business rules such as permissions; most importantly, it is where the caching logic is implemented.

Tip 4 - The ASP.NET Cache API

One of the very first things you should do before writing a line of application code is to architect the application tier to maximize and exploit the ASP.NET Cache feature. If your components are running within an ASP.NET application, you simply need a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).

There are several rules for caching data. First, if data can be used more than once, it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally, on an x86 machine, you want to run a process with no more than 800MB of private bytes in order to reduce the chance of an out-of-memory condition. Therefore, caching should be bounded. In other words, you may be able to reuse the result of a computation, but if that computation takes 10 parameters, you might attempt to cache 10 permutations, and that will likely get you into trouble. One of the most common support calls for ASP.NET is for out-of-memory errors caused by overcaching, especially of large datasets.

There are a few important features of the Cache that you must understand. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge, automatically removing unused items from the Cache, if memory is running low. The second is that the Cache supports expiration dependencies, including time, key, and file invalidation. Time is often used, but ASP.NET 2.0 introduces a new and more powerful invalidation type: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN Magazine. For the architecture of the cache, see Figure 3.

Figure 3 ASP.NET Cache
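As a usage sketch of the Cache API, here is a minimal cache-aside pattern (the "Suppliers" key, the 60-second absolute expiration, and the LoadSuppliersFromDatabase stub are illustrative assumptions):

using System;
using System.Collections;
using System.Web;
using System.Web.Caching;

public static class SupplierCache
{
    public static ArrayList GetSuppliers()
    {
        ArrayList suppliers = (ArrayList)HttpRuntime.Cache["Suppliers"];
        if (suppliers == null)
        {
            // Cache miss (or evicted under memory pressure): fetch once,
            // then cache for 60 seconds.
            suppliers = LoadSuppliersFromDatabase();
            HttpRuntime.Cache.Insert(
                "Suppliers", suppliers,
                null,                          // no CacheDependency
                DateTime.Now.AddSeconds(60),   // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return suppliers;
    }

    private static ArrayList LoadSuppliersFromDatabase()
    {
        // Stand-in for a real data-tier call.
        return new ArrayList();
    }
}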
Tip 5 - Per-Request Caching
Earlier in this article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these I've termed per-request caching. Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request, but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.

In the Forums application in Community Server, each server control used on a page requires personalization data to determine which skin and style sheet to use, along with other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin used for the controls, is fetched once on each request and then reused multiple times during the execution of that request.

To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request's execution through the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data used only on a per-request basis. The logic behind it is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
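A minimal sketch of this pattern (the "SkinName" key and the LookupSkinForCurrentUser stub are illustrative assumptions):

using System.Web;

public static class PerRequestCache
{
    public static string GetSkinName()
    {
        HttpContext context = HttpContext.Current;
        string skin = (string)context.Items["SkinName"];
        if (skin == null)
        {
            // First access on this request: fetch once and reuse for the
            // rest of the request. The entry dies with the request.
            skin = LookupSkinForCurrentUser();
            context.Items["SkinName"] = skin;
        }
        return skin;
    }

    private static string LookupSkinForCurrentUser()
    {
        // Stand-in for the real personalization lookup.
        return "default";
    }
}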
Tip 6 - Background Processing

The path through your code should be as fast as possible, right? There may be times, though, when you find that an operation performed on each request, or once every n requests, is simply expensive. Sending out e-mail or parsing and validating incoming data are examples.

When tearing apart ASP.NET Forums 1.0 and rebuilding it as part of Community Server, we found that the code path for adding a new post was painfully slow. Each time a post was added, the application first needed to ensure there were no duplicate posts, then it had to parse the post using a "badword" filter, parse it for emoticons, tokenize and index it, add the post to the appropriate queue when required, and validate the attachment; finally, once posted, it sent e-mail notifications to all subscribers. Clearly, that's a lot of work.

It turned out that most of the time was spent in the indexing logic and in sending e-mail. Indexing a post was a very time-consuming operation, and the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, the AddPost function took longer and longer to execute.

Indexing didn't need to happen on each request. Ideally, we wanted to batch this work, indexing 25 posts at a time or sending all the e-mails every five minutes. We decided to use code that I had previously used to prototype database cache invalidation, which eventually found its way into Visual Studio 2005.

The Timer class, found in the System.Threading namespace, is wonderfully useful but not very well known in the .NET Framework, at least not by web developers. Once created, the Timer will invoke the specified callback on a thread from the thread pool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process, too.

There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where there may be no threads available to service the timer, causing delays. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and using only a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.

There is not enough room to show the code here, but you can download a digestible sample from www.rob-howard.net; grab the slides and demos from my Blackbelt TechEd 2004 presentation.
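A minimal sketch of the Timer approach (the five-minute interval and the ProcessQueuedWork callback are assumptions for illustration; the timer is held in a static field so it isn't garbage collected while the application runs):

using System;
using System.Threading;

public static class BackgroundWork
{
    // Hold a reference so the timer isn't collected while the app runs.
    private static Timer timer;

    public static void Start()
    {
        // Fire ProcessQueuedWork every five minutes,
        // starting five minutes from now.
        timer = new Timer(ProcessQueuedWork, null,
                          TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private static void ProcessQueuedWork(object state)
    {
        // Runs on a thread-pool thread with no incoming request:
        // batch-index posts, send queued e-mail, and so on.
    }
}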
Tip 7 - Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching. Simply add this line to the top of your page:

<%@ OutputCache Duration="60" VaryByParam="None" %>

This effectively generates the output for the page once and reuses it for up to 60 seconds, at which point the page re-executes and the output is once again added to the ASP.NET Cache. This behavior can also be accomplished with some lower-level programmatic APIs, as sketched at the end of this tip.

There are several configurable settings for output caching, such as the VaryByParam attribute just shown. VaryByParam is required, and it allows you to specify HTTP GET or HTTP POST parameters that vary the cache entries. For example, Default.aspx?Report=1 and Default.aspx?Report=2 could be output-cached separately simply by setting VaryByParam="Report". Additional parameters can be named in a semicolon-separated list.

Many people don't realize that when output caching is used, the ASP.NET page also generates a set of HTTP headers for downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai. When these HTTP cache headers are set, documents can be cached on those network resources, and client requests can be satisfied without having to go back to the origin server.

Using page output caching, then, does not make your application more efficient by itself, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's cached downstream, you won't see the requests anymore, and you can't perform authentication to prevent access to it.
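Here is a minimal sketch of the lower-level approach, chosen to roughly mirror the directive above (ReportPage is a hypothetical page, and the mapping to the directive is approximate, not exact):

using System;
using System.Web;
using System.Web.UI;

public class ReportPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Roughly the programmatic equivalent of
        // <%@ OutputCache Duration="60" VaryByParam="None" %>
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
        Response.Cache.SetValidUntilExpires(true);
    }
}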
Tip 8 - Run IIS 6.0 (If Only for Kernel Caching)

If you're not running IIS 6.0 (Windows Server 2003), you're missing out on some great performance enhancements in the Microsoft web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request and returns the contents from the Cache.

If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (with no context switch to user mode) receives the request and, if the response is cached, flushes the cached data to the response and completes execution. This means that when you use kernel-mode caching together with ASP.NET output caching, you'll see unbelievable performance results.

At one point during the Visual Studio 2005 development cycle, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel-mode caching results were always the most interesting. The typical pattern was requests/responses saturating the network, with IIS running at only about five percent CPU utilization. It was amazing! There are certainly other reasons to use IIS 6.0, but kernel-mode caching is an obvious one.
Tip 9 - Use Gzip Compression

While not necessarily a server performance tip (since you might see CPU utilization rise), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it compresses, and whether the client browsers support it (IIS will only send gzip-compressed content to browsers that support it, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.

The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting in the IIS properties dialog. The IIS team built awesome gzip capabilities into the server but neglected to include an administrative UI for enabling them. To enable gzip compression, you have to dig into the XML configuration settings of IIS 6.0 (the metabase). By the way, the credit goes to Scott Forsyth of OrcsWeb, who helped me work through this issue on the www.asp.net servers hosted at OrcsWeb.

Rather than detail the steps here, read the article on IIS 6.0 compression. There's also a Knowledge Base article on enabling compression for ASPX: Enable ASPX Compression in IIS. Note, however, that due to implementation details, dynamic compression and kernel caching are mutually exclusive in IIS 6.0.
Tip 10 - Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and requires no cookies or server memory. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page being displayed when paging through data.

There are a number of drawbacks to the use of view state, however. First, it increases the total payload of the page, both when served and when posted back. There is also additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.

Several server controls tend to make excessive use of view state even when it is not needed, the most well known of which is the DataGrid. The ViewState property is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, simply set EnableViewState to false, or set it globally within the page using this setting:

<%@ Page EnableViewState="false" %>

If you are not doing postbacks in a page, or are always regenerating the controls on a page on each request, you should disable view state at the page level.
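For a single control, here is a minimal sketch of the programmatic form (OrdersGrid is a hypothetical DataGrid declared on the page):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class OrdersPage : Page
{
    // Declared as <asp:DataGrid id="OrdersGrid" runat="server" /> in the .aspx.
    protected DataGrid OrdersGrid;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // The grid is rebound on every request, so its view state is wasted bytes.
        OrdersGrid.EnableViewState = false;
    }
}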
Conclusion

I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is a preliminary guide, not the last word on ASP.NET performance. (For more information on improving the performance of ASP.NET applications, see Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your particular performance problems. In any case, these tips should serve you well on your journey. In software development, there are very few absolutes; every application is unique.
Common Performance Myths
One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. Put more succinctly, similar code produces similar results.

Another myth is that code-behind is faster than inline code, which is absolutely false. It doesn't matter where the code for your ASP.NET application lives, whether in a code-behind file or inline in the ASP.NET page. Sometimes I prefer to use inline code, as changes don't incur the same update costs as code-behind. For example, with code-behind you have to update the entire code-behind DLL, which can be a scary proposition.

A third myth is that components are faster than pages. This was true in Classic ASP, where compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page or in a separate component makes little performance difference. Organization is about grouping code logically; it makes no difference to performance.

The last myth I want to dispel is that functionality shared between two applications should be implemented with web services. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While easy to use, there are much better alternatives. The worst thing you can do is use web services for communicating between ASP and ASP.NET applications running on the same server, which I have witnessed all too frequently.

About the Author

Rob Howard is a founder of Telligent Systems, specializing in high-performance web applications and knowledge management and collaboration systems. Previously, Rob was employed by Microsoft, where he helped design ASP.NET 1.0, 1.1, and 2.0. You can contact Rob at rhoward@telligentsystems.com.

This article is from the January 2005 issue of MSDN Magazine.
Appendix
Figure 1 Extracting Multiple Resultsets from a DataReader

// read the first resultset
reader = command.ExecuteReader();

// read the data from that resultset
while (reader.Read()) {
    suppliers = PopulateSuppliersFromIDataReader(reader);
}

// read the next resultset
reader.NextResult();

// read the data from that second resultset
while (reader.Read()) {
    products = PopulateProductsFromIDataReader(reader);
}
Figure 2 Paging Through the Orders Table

CREATE PROCEDURE northwind_OrdersPaged
(
    @PageIndex int,
    @PageSize int
)
AS
BEGIN
DECLARE @PageLowerBound int
DECLARE @PageUpperBound int
DECLARE @RowsToReturn int

-- First set the rowcount
SET @RowsToReturn = @PageSize * (@PageIndex + 1)
SET ROWCOUNT @RowsToReturn

-- Set the page bounds
SET @PageLowerBound = @PageSize * @PageIndex
SET @PageUpperBound = @PageLowerBound + @PageSize + 1

-- Create a temp table to store the select results
CREATE TABLE #PageIndex
(
    IndexId int IDENTITY (1, 1) NOT NULL,
    OrderID int
)

-- Insert into the temp table
INSERT INTO #PageIndex (OrderID)
SELECT OrderID
FROM Orders
ORDER BY OrderID DESC

-- Return total count
SELECT COUNT(OrderID) FROM Orders

-- Return paged results
SELECT O.*
FROM Orders O, #PageIndex PageIndex
WHERE O.OrderID = PageIndex.OrderID
    AND PageIndex.IndexID > @PageLowerBound
    AND PageIndex.IndexID < @PageUpperBound
ORDER BY PageIndex.IndexID

END