Search engine and website directory structure

Can a search engine retrieve pages that sit in a website's subdirectories? For example, if a page lives at http://www.google8.net/archives/000062.html, will search engines index it? The short answer is yes. As long as every level of the hierarchy is reachable through navigation and a URL structure that search engine spiders can follow, all the major search engines will crawl down into a site's subdirectories.

Ideally, especially for a smaller site, the directory structure should be flat: the actual content pages sit in the root, or at most one level of subdirectories down. For large sites, two to three levels are ideal. From a search engine's point of view, a flat directory structure is optimal. The exceptions are images, scripts, CGI programs, and style sheets, which should be placed in subdirectories rather than in the root.
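As a rough sketch of what such a flat layout might look like (all the file and directory names here are hypothetical, not taken from any real site):

    /                   <- root: the actual content pages live here
        index.html
        products.html
        contact.html
        images/         <- supporting files belong in subdirectories
        scripts/
        cgi-bin/
        styles/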

The URL structure also tells search engines, and your visitors, what you consider most important on your site. In other words, if a page is very important, give it a top-level URL instead of placing it in a subdirectory.

The URL of a top-level page generally looks like this:

http://www.google8.net/google.html

The URL of a page one subdirectory deep generally looks like this:

http://www.google8.net/archives/000063.html

where google8.net is the domain name, archives is the first-level subdirectory name, and 000063.html is the page name.

The URL of a page two subdirectories deep generally looks like this:

http://www.wuyue.cn/curtain/2/Product1.html

where wuyue.cn is the domain name, curtain is the first-level subdirectory name, 2 is the second-level subdirectory name, Product1.html is the page name inside the second-level subdirectory, and so on.
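If you ever want to check a URL's directory depth programmatically, here is a minimal Python sketch (it simply splits the example URL above; it is not part of any SEO tool):

    from urllib.parse import urlparse

    url = "http://www.wuyue.cn/curtain/2/Product1.html"
    parsed = urlparse(url)

    # Non-empty path segments: ['curtain', '2', 'Product1.html']
    segments = [s for s in parsed.path.split("/") if s]

    domain = parsed.netloc         # www.wuyue.cn
    directories = segments[:-1]    # ['curtain', '2'] -> two subdirectory levels
    page = segments[-1]            # Product1.html

    print(f"domain={domain}, depth={len(directories)}, page={page}")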

When crawling a website, as long as the site provides navigation and a URL structure that spiders can follow, search engines will usually traverse at least three directory levels. More important than the number of levels, however, is whether the pages in your subdirectories attract links from other websites. If your site has a fourth-level directory that holds genuinely important content and draws plenty of external links, you can rest assured that search engines will retrieve that fourth-level directory for you.

Search engine marketing tactics

In search engine marketing, many marketers like the following tactic: knowing that search engines automatically crawl multiple directory levels, they deliberately create subdirectories named after compound keywords or phrases, to make sure the search engine sees the target keyword. In my opinion, this trick has no real practical effect and is not advisable.

For example, if a company that sells organic tea adopted this strategy, it might end up with the following URL and directory structure:

http://www.tranquiliteasorganic.com/oolong-tea/oolong.html

where:

1. tranquiliteasorganic.com is the domain name.

2. oolong-tea is the first-level subdirectory name; it contains the keyword phrase "oolong tea", with a hyphen separating the two words.

3. oolong.html is the page name inside that subdirectory.

Between the subdirectory URL http://www.tranquiliteasorganic.com/oolong-tea/oolong.html and the top-level URL http://www.tranquiliteasorganic.com/oolong.html, which is better? Personally, I would not adopt the subdirectory structure purely for the sake of search engine rankings, because keywords in the domain name or URL are either unimportant or have a negligible effect. My answer depends on what kind of website this is. If there are many varieties of organic oolong tea, and the site offers a considerable number of unique, high-quality pages about oolong tea, then I would recommend the subdirectory structure. Likewise, for the sake of consistency and usability, I would want the site to set up a subdirectory for every type of tea it offers. But since I find it hard to believe that a large amount of unique, high-quality content can be written about oolong tea, I doubt this subdirectory is necessary.

Using the Robots Exclusion Protocol (robots.txt)

On database-driven websites, it is quite common for the same content to appear in several different subdirectories, because this can improve the user experience.

Using the tea site above as an example, suppose the site has set up a separate subdirectory for each type of tea, and each offers a large number of unique, high-quality pages. The URL structures for oolong tea, green tea, and tea accessories would then be as follows:

1. Oolong tea page: http://www.tranquiliteasorganic.com/oolong-tea/oolong.html

2. Green tea page: http://www.tranquiliteasorganic.com/green-tea/green.html

3. Tea accessories page: http://www.tranquiliteasorganic.com/tea-accessories/accessories.html

Suppose the site also sells loose oolong and green tea, along with a tea set that suits both. A page about that tea set could logically be placed in all three directories: oolong-tea, green-tea, and tea-accessories. From a usability and user-experience standpoint, this is not a bad strategy. Search engines, however, tend to treat such content as redundant. One reason search engines dislike database-driven sites is precisely that they keep retrieving the same content over and over.

So if this tea-set page exists in all three first-level subdirectories, will the search engine consider it redundant, and might the site be penalized for serving such content? Most likely, the search engine will simply show a link to one copy of the page and leave the other copies out of its listings.

At the same time, many search engine marketers who lack professional ethics have abused this approach, generating large amounts of redundant content around exactly the same information. So it is entirely possible for a search engine to penalize such content as spam.

To make the website 100% safe, you can place a plain-text file named robots.txt in the site root and use it, per the Robots Exclusion Protocol, to declare which parts of the site you do not want robots to visit. This lets you limit the range of your site that search engines crawl. Before you do, though, analyze your site's traffic statistics carefully to see which subdirectories visitors reach most often, and do not block those subdirectories in robots.txt.
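As a minimal sketch for the hypothetical tea site (the tea-set file name is my own placeholder, and which copies to block is a judgment call, not something the protocol dictates), the robots.txt in the site root might read:

    # robots.txt at http://www.tranquiliteasorganic.com/robots.txt
    # Keep the copy under /tea-accessories/ crawlable; block the
    # duplicate copies in the other two tea directories.
    User-agent: *
    Disallow: /oolong-tea/tea-set.html
    Disallow: /green-tea/tea-set.html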

In the case above, the robots.txt file solves two problems at once. First, it signals to search engines that you are not deliberately feeding them redundant content. Second, since the relevant content is still available to visitors in each appropriate subdirectory, there is no negative impact on the user experience.

In conclusion:

In general, search engines have no trouble crawling subdirectories. If you find that dividing your website into a subdirectory structure gives users a better experience, then by all means do so. But do not create subdirectories merely to attract the attention of search engines. There are many strategies that accomplish that goal without costing you a great deal of time, and they will also bring your website a better return on investment (ROI).

The question this user raised touches on issues that have sparked intense debate in the search engine industry: when is it most appropriate for a website to use subdirectories, subdomains, or mini-sites? Should a site owner build target key phrases into the site's URLs? Should subdirectory names contain key phrases? But that is a topic for another time.
