Robots.txt Just Isn’t Working For Me

Jan 01, 2009 · 1 min read

Dear Search Engines,

I’ve worked for huge companies for many years. Each has its own unique issues, but one issue they all have in common is you. You crawl our sites and expect us to know better and react in real time. You expect us to know what we don’t want crawled, and you expect us to be able to conjure up a robots.txt file, sprinkle in rel="nofollow" attributes and meta noindex tags, or whatever else it takes to satisfy that need.
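For context, the mechanisms in question look roughly like this (a sketch with made-up paths, not any real site's config):

```
# robots.txt at the site root — ask all crawlers to skip a section
User-agent: *
Disallow: /staging/
```

```html
<!-- per-page alternative: a robots meta tag in the page's <head> -->
<meta name="robots" content="noindex, nofollow">
```

Simple enough to write; the problem, as described below, is getting either one through a corporate release pipeline.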

Another issue all the companies I’ve worked for share is slow time to develop and release anything to the website. If I know there is an issue, a) I have to explain it, b) get buy-off from engineering/business units/execs, c) get engineering to build the fix, d) merge the code, e) QA it, f) wait until the next build/release, and then poof, just like magic, we may or may not have fixed the issue depending on whether QA and engineering did their jobs right. If not, the cycle continues.

Here’s a crazy thought. Why don’t you let us upload our robots.txt to you directly? Make us put some crazy hash in a file somewhere to prove we are who we say we are, but let us tell you immediately what not to index, what not to follow, and what to otherwise stop wasting our bandwidth on. I ran into a situation today where any reasonable person could have told you immediately how to fix the issue, yet it may take weeks or months to fix instead of one guy spending a few minutes uploading one file to tell you not to do XYZ. You already let us upload sitemaps; why not let us tell you what we DON’T want indexed? I know, where do I come up with all this crazy talk?

Tell you what, search engines: I’m going to let you think on it while I grind my development resources to a halt trying to keep you off certain areas of my company’s site. Let the patent wars begin.

