Robots.txt

A Definition of a Common Term Related to Web Analytics


Term: "Robots.txt"

Definition

The robots.txt file is a standard used by websites to communicate with web crawlers and other web robots. The file is placed in the root directory of the website and tells visiting bots which areas of the site should not be processed or scanned. It is used primarily to avoid overloading the site with requests; it is not a mechanism for keeping a web page out of Google search results or for keeping web crawlers away from sensitive information.

Each line of a robots.txt file specifies a directive, typically indicating whether a user-agent (a specific web crawler) is allowed to access a given part of the site. For example, a directive might tell all crawlers to stay away from a certain folder on the server, or it might tell a specific crawler not to access a specific file; a minimal sketch follows below. While most well-behaved crawlers respect robots.txt, it is not a foolproof way to prevent crawling, because it relies on the voluntary cooperation of the crawler. Furthermore, since the file is publicly accessible, it can inadvertently reveal the existence of hidden or private sections of the site to potential attackers.
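To make the directives concrete, here is a minimal sketch in Python using the standard-library urllib.robotparser module. The robots.txt content, the example.com domain, and the "BadBot" crawler name are illustrative assumptions, not rules from any real site.

from urllib import robotparser

# Illustrative robots.txt content; on a real site this text would live at
# https://example.com/robots.txt (example.com is a placeholder domain).
sample_rules = """
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(sample_rules.splitlines())

# A well-behaved crawler checks the rules before fetching each URL.
print(parser.can_fetch("*", "https://example.com/private/data.html"))      # False: /private/ is disallowed for all crawlers
print(parser.can_fetch("*", "https://example.com/public/page.html"))       # True: no rule blocks this path
print(parser.can_fetch("BadBot", "https://example.com/public/page.html"))  # False: BadBot is disallowed from the whole site

The check is entirely voluntary: can_fetch only reports what the file asks for, and nothing in the protocol stops a crawler from ignoring it, which is why robots.txt should not be relied on to protect sensitive content.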
