txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of the file, it may on occasion crawl pages a webmaster does not wish to have crawled. Pages typically prevented from being crawled include login-specific pages such as pr
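As an illustrative sketch (the path shown here is hypothetical, not taken from the text above), a minimal robots.txt that asks all crawlers to skip a login page could look like this:

    User-agent: *
    Disallow: /login/

The "User-agent: *" line addresses every crawler, and each "Disallow" line names a path prefix that compliant robots should not fetch; note that the directives are advisory, so a crawler working from a cached copy of the file, or one that ignores the protocol, may still request those pages.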