Robots.txt and the Robots Meta Tag

xiaoxiao  2021-03-06


Author: Ping Wen-sheng

As we know, search engines have their own "search robots" (robots), which build their databases by continuously crawling the web, following the links on each page (generally href and src links). Site administrators and content providers, however, sometimes have pages they do not want robots to expose. Two mechanisms address this: one is robots.txt, and the other is the Robots Meta tag.

I. Robots.txt

1. What is robots.txt?

Robots.txt is a plain text file in which a site declares the parts it does not want robots to visit. With it, part or all of a site's content can be kept out of search-engine indexes, or a search engine can be directed to index only the specified content.

When a search robot visits a site, it first checks whether robots.txt exists in the site's root directory. If it is found, the robot determines the scope of its access from the file's contents; if the file does not exist, the robot simply crawls along the links.

Robots.txt must be placed in the root directory of the site, and the file name must be all lowercase.

Website URL                  Corresponding robots.txt URL

http://www.w3.org/           http://www.w3.org/robots.txt

http://www.w3.org:80/        http://www.w3.org:80/robots.txt

http://www.w3.org:1234/      http://www.w3.org:1234/robots.txt

http://w3.org/               http://w3.org/robots.txt
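The mapping in the table above can be computed mechanically. Here is a small Python sketch (the function name robots_url is ours, purely for illustration): the robots.txt URL keeps the site's scheme, host, and port, and replaces the path with /robots.txt.

```python
# Derive the robots.txt URL for a site, as in the table above.
# robots_url is a hypothetical helper name, not a library function.
from urllib.parse import urlsplit, urlunsplit

def robots_url(site_url):
    parts = urlsplit(site_url)
    # Keep scheme and netloc (host:port), replace the path with /robots.txt.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://www.w3.org:1234/"))  # http://www.w3.org:1234/robots.txt
```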

2. The format of robots.txt

The "robots.txt" file contains one or more records separated by blank lines (terminated by CR, CR/NL, or NL). Each record has the form:

"<field>: <value>"

Comments can be added with #, following the same conventions as in UNIX. A record typically begins with one or more User-agent lines, followed by several Disallow lines, as detailed below:

User-agent:

The value of this field is the name of a search-engine robot. If robots.txt contains multiple User-agent records, multiple robots are constrained by its rules; the file must contain at least one User-agent record. If the value is *, the rules apply to all robots, and a "User-agent: *" record may appear only once in the file.

Disallow:

The value of this field is a URL that should not be visited. The URL can be a complete path or a prefix; any URL beginning with the value of a Disallow field will not be visited by the robot. For example, "Disallow: /help" blocks access to both /help.html and /help/index.html, while "Disallow: /help/" allows the robot to access /help.html but not /help/index.html.
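These prefix rules can be checked without writing a crawler: Python's standard-library urllib.robotparser implements robots.txt matching. A small sketch (example.com is a placeholder host):

```python
# Check the prefix semantics of Disallow with the stdlib parser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# "Disallow: /help" is a prefix match: it blocks /help.html and /help/index.html.
rp.parse(["User-agent: *", "Disallow: /help"])
print(rp.can_fetch("*", "http://example.com/help.html"))         # False
print(rp.can_fetch("*", "http://example.com/help/index.html"))   # False

rp2 = RobotFileParser()
# "Disallow: /help/" blocks only the /help/ directory, not /help.html.
rp2.parse(["User-agent: *", "Disallow: /help/"])
print(rp2.can_fetch("*", "http://example.com/help.html"))        # True
print(rp2.can_fetch("*", "http://example.com/help/index.html"))  # False
```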

An empty Disallow value means that all parts of the site may be accessed. A robots.txt file must contain at least one Disallow record. If robots.txt is an empty file, the site is open to all search-engine robots. Here are some basic uses of robots.txt:

• Disallow all search engines from accessing any part of the site:

User-agent: *

Disallow: /

• Allow all robots full access:

User-agent: *

Disallow:

Alternatively, you can create an empty "/robots.txt" file.

• Disallow all search engines from accessing certain sections of the site (the cgi-bin, tmp, and private directories in the example below):

User-agent: *

Disallow: /cgi-bin/

Disallow: /tmp/

Disallow: /private/

• Block a particular search engine (BadBot in the example below):

User-agent: BadBot

Disallow: /

• Allow only a particular search engine (WebCrawler in the example below):

User-agent: WebCrawler

Disallow:

User-agent: *

Disallow: /
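As a sanity check, the same stdlib parser can evaluate a two-record file like the one above (example.com and SomeOtherBot are placeholders):

```python
# WebCrawler gets an empty Disallow (full access); every other robot is shut out.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.parse([
    "User-agent: WebCrawler",
    "Disallow:",        # empty value: WebCrawler may fetch everything
    "",
    "User-agent: *",
    "Disallow: /",      # all other robots are blocked from the whole site
])
print(robots.can_fetch("WebCrawler", "http://example.com/page.html"))    # True
print(robots.can_fetch("SomeOtherBot", "http://example.com/page.html"))  # False
```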

3. Common search-engine robot names

Robot Name          Search Engine

Baiduspider http://www.baidu.com

Scooter http://www.altavista.com

IA_archiver http://www.alexa.com

Googlebot http://www.google.com

Fast-webcrawler http://www.alltheweb.com

Slurp http://www.inktomi.com

Msnbot http://search.msn.com

4. robots.txt examples

Below are the robots.txt files of some well-known sites:

http://www.cn.com/robots.txt

http://www.google.com/robots.txt

http://www.ibm.com/robots.txt

http://www.sun.com/robots.txt

http://www.eachnet.com/robots.txt

5. Common robots.txt mistakes

• Reversing the order:

Incorrect:

User-agent: *

Disallow: Googlebot

The correct form is:

User-agent: Googlebot

Disallow: /

• Putting multiple paths on one Disallow line:

For example, incorrectly writing:

Disallow: /css/ /cgi-bin/ /images/

The correct form is:

Disallow: /css/

Disallow: /cgi-bin/

Disallow: /images/

• Leading whitespace before a directive:

For example, incorrectly writing:

      Disallow: /cgi-bin/

Although the standard does not mention this, such lines can easily cause problems.

• Redirecting 404s to another page:

Many sites that have not set up a robots.txt file redirect requests for it to another HTML page, and robots then often try to parse that HTML page as if it were a robots.txt file. Although this usually causes no real harm, it is best to place an empty robots.txt file in the site root.

• Using uppercase. For example:

User-agent: EXCITE

Disallow:

Although field names in the standard are case-insensitive, directory and file names should be lowercase:

User-agent: googlebot

Disallow:

• There is only Disallow in the syntax, no Allow!

The wrong way is:

User-agent: baiduspider

Disallow: / john /

Allow: / jane /

• Forgetting the slash /:

Incorrect:

User-agent: baiduspider

Disallow: css

The correct form is:

User-agent: baiduspider

Disallow: /css/

The following tool checks the validity of robots.txt files:

http://www.searchengineworld.com/cgi-bin/robotcheck.cgi
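The common mistakes above can also be caught mechanically. Below is a minimal, hypothetical linter sketch in Python (not the tool linked above; the function name and warning texts are our own); it flags only the leading-whitespace, multiple-path, and missing-slash mistakes:

```python
# A minimal, illustrative linter for the mistakes listed above.
# lint_robots_txt is a hypothetical helper, not part of any library.
def lint_robots_txt(text):
    """Return a list of warnings for common robots.txt mistakes."""
    warnings = []
    for n, line in enumerate(text.splitlines(), 1):
        stripped = line.split("#", 1)[0].rstrip()  # drop comments
        if not stripped:
            continue
        if stripped != stripped.lstrip():
            warnings.append(f"line {n}: leading whitespace before a directive")
        field, _, value = stripped.lstrip().partition(":")
        value = value.strip()
        if field.lower() not in ("user-agent", "disallow"):
            warnings.append(f"line {n}: unknown field {field!r}")
        elif field.lower() == "disallow" and value:
            if " " in value:
                warnings.append(f"line {n}: multiple paths on one Disallow line")
            elif not value.startswith("/"):
                warnings.append(f"line {n}: Disallow path missing the leading slash")
    return warnings

print(lint_robots_txt("User-agent: *\nDisallow: css\n  Disallow: /tmp/\n"))
```

Note that the misspelled "Dislow" field seen in broken files would also be reported as an unknown field.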

II. The Robots Meta Tag

1. What is the Robots Meta tag?

The robots.txt file mainly restricts a search engine's access to an entire site or directory, while the Robots Meta tag targets one specific page. Like other Meta tags (such as those specifying the language, page description, or keywords), the Robots Meta tag is placed in the <head> section of the page and tells search-engine robots how to crawl the page's content. Its form looks like the following (see the robots line):

<html>
<head>
<title>Time Marketing - Network Marketing Professional Portal</title>
<meta name="robots" content="index,follow">
<meta http-equiv="Content-Type" content="text/html; charset=gb2312">
<meta name="keywords" content="Marketing...">
<meta name="description" content="Time Marketing Network is...">
<link rel="stylesheet" href="/public/css.css" type="text/css">
</head>
<body>
...
</body>
</html>

2. How to write the Robots Meta tag

The Robots Meta tag is case-insensitive. name="robots" addresses all search engines; it can instead name a specific robot, such as name="BaiduSpider", to address one particular engine. The content attribute has four directive options: index, noindex, follow, and nofollow, separated by commas.

The index directive tells the search robot that the page may be indexed;

The follow directive tells the search robot that it may continue crawling along the links on the page;

The defaults for the Robots Meta tag are index and follow, except for Inktomi, for which the defaults are index, nofollow.

This gives four combinations:

<meta name="robots" content="index,follow">

<meta name="robots" content="noindex,follow">

<meta name="robots" content="index,nofollow">

<meta name="robots" content="noindex,nofollow">

Among them,

<meta name="robots" content="index,follow"> can be written as <meta name="robots" content="all">;

<meta name="robots" content="noindex,nofollow"> can be written as <meta name="robots" content="none">.

It should be noted that robots.txt and the Robots Meta tag restrict crawling only by convention: they rely on the cooperation of the search-engine robots, and not all robots comply.

At present, the vast majority of search-engine robots comply with the robots.txt rules. Support for the Robots Meta tag is more limited but gradually increasing; the well-known search engine Google, for example, fully supports it, and Google has added an extra directive, archive, that controls whether Google keeps a cached snapshot of the page. For example:

<meta name="googlebot" content="index,follow,noarchive">

tells Google to index this page and follow its links, but not to keep a cached snapshot of it.
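To see these directives from the crawler's side, here is a small sketch using Python's standard-library html.parser to collect a page's Robots Meta directives (the class is our own, not a library API; the sample page is made up):

```python
# Collect the directives of <meta name="robots"> tags, case-insensitively.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Accumulate robots directives (index/noindex/follow/nofollow) from a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":          # HTMLParser lowercases tag and attribute names
            return
        a = {k: (v or "") for k, v in attrs}
        if a.get("name", "").lower() == "robots":
            # content is comma-separated; directives are case-insensitive
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",") if d.strip()]

page = '<html><head><meta name="ROBOTS" content="NoIndex, FOLLOW"></head></html>'
p = RobotsMetaParser()
p.feed(page)
print(p.directives)  # ['noindex', 'follow']
```

Because the tag is case-insensitive, NoIndex and FOLLOW normalize to the same directives as noindex, follow.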