
Blocking or removing pages using a robots.txt file

Before we go further, let us clarify what a robots.txt file is. A robots.txt file is a file that restricts search engine crawlers' access to a site. Most search engines use automated robots (bots) that crawl the web; before they access the pages of a site, they check its robots.txt file to see whether it prevents them from accessing certain pages. You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything on your site, then you don't need a robots.txt file at all. To use a robots.txt file, you need access to the root of your domain, because that is the only place crawlers look for it. You can find more information at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
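
For illustration, here is a minimal sketch of what such a file might look like. The /private/ directory and the example.com address are made-up placeholders, not part of the original post; the file itself would sit at the root of the domain, for example http://www.example.com/robots.txt:

    # Hypothetical example: block all crawlers from a /private/ directory
    # while leaving the rest of the site crawlable
    User-agent: *
    Disallow: /private/

The "User-agent: *" line applies the rule to every crawler; changing the Disallow line to "Disallow: /" would block the whole site, while an empty "Disallow:" would allow everything.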