Accelerate your website

xiaoxiao · 2021-03-05

For a website with millions of visits per day, speed becomes a bottleneck. Besides optimizing the content publishing application itself, converting the output of dynamic pages that do not require real-time updates into static pages yields a significant speedup, because generating a dynamic page typically costs 2-10 times as much as serving a static one; and if the static page content can be cached in memory, access can be 2-3 orders of magnitude faster than the original dynamic page.

Topics covered:

Comparison of dynamic and static caching
Site planning for reverse-proxy acceleration
Reverse-proxy acceleration based on Apache mod_proxy
Reverse-proxy acceleration based on Squid
Cache-oriented page design

If the page output of the back-end content management system follows a cache-friendly design, performance problems can be handed off to the front-end cache server, which greatly simplifies the CMS itself.

Comparison of static and dynamic caching

Caching static pages can take two forms; the main difference is whether the CMS itself is responsible for managing cache updates of related content.

Static caching: the corresponding static page is generated at publishing time. For example, on March 22, 2003, after an administrator enters an article through the back-end content management interface, the system immediately generates the static page http://www.mydot.org/tech/2003/03/22/001.html and updates the links on the relevant index pages.

Dynamic caching: after new content is published, no static page is generated up front. Only when a request for that content arrives, and the front-end cache server finds no corresponding cache entry, is the request passed to the back-end system, which then generates the static page for that content. The first user to visit the page may experience a slower response, but all subsequent requests are served directly from the cache.

If you visit foreign sites such as ZDNet, you will notice that the URLs produced by the Vignette content management system they use look like 0,22342566,300458.html. In fact, 0,22342566,300458 is a set of parameters separated by commas: on the first access, when no cached page is found, it is equivalent to running a server-side query such as DOC_TYPE=0&DOC_ID=22342566&DOC_TEMPLATE=300458, and the query result is then saved as the generated cached static page 0,22342566,300458.html.
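As a minimal sketch of this naming scheme (the parameter names DOC_TYPE, DOC_ID, and DOC_TEMPLATE come from the example above, but the translation logic itself is an assumption for illustration, not Vignette's actual code):

```python
# Hypothetical sketch: translate a Vignette-style cached filename such as
# "0,22342566,300458.html" into the query-string parameters that would
# regenerate it on a cache miss. Parameter names follow the example in
# the text; the real system's internals are not public.

def filename_to_query(filename: str) -> str:
    """'0,22342566,300458.html' -> 'DOC_TYPE=0&DOC_ID=22342566&DOC_TEMPLATE=300458'"""
    stem = filename.rsplit(".", 1)[0]                 # drop the .html extension
    doc_type, doc_id, doc_template = stem.split(",")  # comma-separated parameters
    params = {"DOC_TYPE": doc_type, "DOC_ID": doc_id, "DOC_TEMPLATE": doc_template}
    return "&".join(f"{k}={v}" for k, v in params.items())

print(filename_to_query("0,22342566,300458.html"))
```

On a cache miss the server would run this query and write the result back to the commas-in-the-name file, so the next request is served statically.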

Disadvantages of static caching:

Complex trigger-update mechanism: both forms of caching work well when the content management system is relatively simple, but for a complex website the logical reference relationships between pages become a very complicated problem. The most typical example is a news article that must appear on the news home page and on three related news topic pages. In static-cache mode, every time a new article is published the system must trigger the regeneration of not only the article's own page but also several related static pages, and this trigger logic often becomes one of the most complex parts of the content management system.

Batch updates of old content: with static publishing it is difficult to modify previously generated static pages, so users keep seeing the old pages and a new template never takes effect for them. In dynamic-cache mode, each dynamic page only needs to take care of its own cache entry, and the related pages are refreshed automatically, which greatly reduces the need to design update triggers for related pages.

I used to use a similar approach in small applications: after the first access, the database query result is saved as a local file, and each subsequent request first checks whether a cached file exists in the local cache directory, thereby reducing access to the back-end database. Although this can also carry a fairly large load, such a design makes content management and cache management hard to separate, data consistency is not well maintained, and whenever content is updated the application must delete the corresponding cache files itself. Moreover, when there are many cached files, the cache directory must be partitioned to some degree; otherwise, once a directory holds more than about 3,000 file nodes, even rm * will fail. At this point the system needs to be split again, breaking the complex content management system into two relatively simple subsystems: content publishing and caching.
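The file-cache-with-sharding idea above can be sketched as follows (a minimal illustration; all function and path names are invented for this example, and real applications would also need locking and expiry):

```python
# Minimal sketch of a file-based query cache with a sharded cache
# directory, so no single directory accumulates thousands of files.
import hashlib
import os

CACHE_ROOT = "cache"  # illustrative location

def cache_path(key: str) -> str:
    """Shard by the first two hex chars of the key's hash: cache/ab/<hash>."""
    h = hashlib.md5(key.encode("utf-8")).hexdigest()
    return os.path.join(CACHE_ROOT, h[:2], h)

def fetch(key: str, query_database) -> str:
    """Return cached content if present; otherwise run the query and cache it."""
    path = cache_path(key)
    if os.path.exists(path):                      # cache hit: skip the database
        with open(path, encoding="utf-8") as f:
            return f.read()
    result = query_database(key)                  # cache miss: hit the back end
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:  # save for the next request
        f.write(result)
    return result

def invalidate(key: str) -> None:
    """On a content update, the application must delete the stale cache file."""
    try:
        os.remove(cache_path(key))
    except FileNotFoundError:
        pass
```

This is exactly the coupling the article complains about: the application code that writes content must also remember to call invalidate(), which is why splitting caching out into a front-end proxy simplifies the CMS.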

Back end: the content management system, focused on content publishing, e.g. complex workflow management, complex template rules, etc.
Front end: cache management, which can be implemented with an off-the-shelf caching system.

After this division of labor, both content management and cache management become simple problems, and each has plenty of ready-made options: software (for example, Squid on the front-end port 80 caching the content publishing system on port 8080), dedicated cache hardware, or even handing the job to a professional service provider such as Akamai.

Site planning

An HTTP-acceleration (web acceleration) scheme that uses Squid in front of multiple sites:

The original site plan might look like this:
200.200.200.207 www.mydot.org
200.200.200.208 news.mydot.org
200.200.200.209 bbs.mydot.org
200.200.200.205 images.mydot.org

In the cached design, external DNS points all sites to the same IPs: 200.200.200.200/201 (two machines for redundant backup). Working principle: when an external request arrives, the cache server steers it to a back-end server resolved according to its configuration file; in this way, requests can be forwarded to whatever internal addresses we specify.
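As an illustration only (the directive names below follow the Squid 2.x accelerator syntax of that era, and the hosts-file mapping reuses the addresses from the site plan above; none of this configuration is taken from the article):

```conf
# Hypothetical squid.conf fragment for HTTP acceleration (Squid 2.x syntax).
http_port 80
httpd_accel_host virtual            # accelerate several virtual hosts
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_uses_host_header on     # choose the back end by the Host: header

# /etc/hosts on the cache server, steering each site name to its real
# back-end address (Squid resolves these names locally instead of via DNS):
#   200.200.200.207 www.mydot.org
#   200.200.200.208 news.mydot.org
#   200.200.200.209 bbs.mydot.org
#   200.200.200.205 images.mydot.org
```

Requests arriving at 200.200.200.200/201 are matched by Host header and fetched from the corresponding internal server on a miss.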

When it comes to steering multiple virtual hosts, mod_proxy is simpler than Squid: it can forward different services to different ports on multiple back-end IPs. Squid can only achieve this by disabling DNS resolution and forwarding based on the local /etc/hosts file, and all back-end servers must then use the same port.
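For comparison, a sketch of the mod_proxy approach (the back-end IPs and ports here are invented for illustration, not taken from the article): each virtual host on the front-end Apache forwards to its own internal address and port.

```apache
# Hypothetical httpd.conf fragment: per-virtual-host steering with
# mod_proxy (requires mod_proxy and mod_proxy_http to be loaded).
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.mydot.org
    ProxyPass        / http://10.0.0.7:8080/
    ProxyPassReverse / http://10.0.0.7:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName bbs.mydot.org
    # A different back end on a different port -- something Squid's
    # hosts-file steering cannot express.
    ProxyPass        / http://10.0.0.9:8081/
    ProxyPassReverse / http://10.0.0.9:8081/
</VirtualHost>
```

ProxyPassReverse rewrites Location headers in back-end responses so redirects still point at the public name.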

Using a reverse proxy for acceleration gives us not only a performance improvement but also extra security and flexibility:

Configuration flexibility: DNS resolution of the back-end servers can be controlled on an internal DNS server. When services need to be migrated or adjusted between servers, there is no need to modify the external DNS configuration; adjusting the internal DNS is enough.

Improved data security: all back-end servers can be conveniently protected behind a firewall.

Reduced complexity of back-end application design: previously, a dedicated image server images.mydot.org had to be split off from the relatively heavily loaded application server bbs.mydot.org. In reverse-proxy acceleration mode, all front-end requests are served by the cache server, effectively as static pages, so there is no longer any need to separate images from the application itself. This greatly reduces the complexity of the back-end content publishing system and also makes it easier to maintain and manage data and applications in the file system.

Please credit the original source when reposting: https://www.9cbs.com/read-34091.html
