Sitemaps should use the X-Robots-Tag HTTP header
|Reported by:|mlissner|Owned by:|mlissner|
|Severity:|Normal|Keywords:|decorator, sitemap.xml, robots|
|Has patch:|yes|Needs documentation:|no|
|Needs tests:|no|Patch needs improvement:|no|
Major search engines currently support three mechanisms for blocking them:
- On an HTML page, you can provide a robots meta tag that says nocrawl or noindex (or both).
- You can provide an X-Robots-Tag HTTP header that says nocrawl or noindex (or both).
- You can disallow a resource in your robots.txt file.
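To make the three mechanisms concrete, here is a minimal sketch of what each one looks like on the wire or on disk (the paths and values are illustrative, not taken from any particular site):

```python
# The three blocking mechanisms, as they actually appear.
# All paths and values below are illustrative examples.

# 1. robots.txt — blocks crawling only; the URL can still
#    appear in search results (see below).
ROBOTS_TXT = """\
User-agent: *
Disallow: /sitemap.xml
"""

# 2. A robots meta tag in the page's <head> — only possible
#    for HTML pages.
META_TAG = '<meta name="robots" content="noindex">'

# 3. An HTTP response header — works for non-HTML resources
#    such as sitemap.xml, where no meta tag can be embedded.
HTTP_HEADER = ("X-Robots-Tag", "noindex")

for label, example in [("robots.txt", ROBOTS_TXT),
                       ("meta tag", META_TAG),
                       ("HTTP header", "%s: %s" % HTTP_HEADER)]:
    print("%s -> %r" % (label, example))
```

The header form is the only one of the three that can attach a noindex signal to an XML file, which is why it matters for sitemaps.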
The distinction between nocrawl, robots.txt and noindex is subtle but important.
*Nocrawl* means that crawlers should stay out and not even visit the page; robots.txt and the nocrawl tags accomplish this. Contrary to the *extremely* common belief, placing a resource in robots.txt or putting a nocrawl meta tag on it will *not* prevent it from showing up in search results. The reason is that if Google or Bing learns of a page's URL, that page will show up in search results until it is looked into further. When the crawler eventually does look, it will detect that the page is blocked by robots.txt or by a nocrawl tag and, as requested, won't crawl it, *but* the page will remain in search results unless there's a *noindex* flag.
Here's a short video from Matt Cutts (Google employee) explaining this oddity: http://www.youtube.com/watch?v=KBdEwpRQRD0
And Microsoft has it documented here: http://www.bing.com/community/site_blogs/b/webmaster/archive/2009/08/21/prevent-a-bot-from-getting-lost-in-space-sem-101.aspx
*Noindex* means: please crawl the page, but do not include it in the index. This is what we should be using on our sitemaps. Since Django doesn't currently send the noindex HTTP header, sitemaps made with Django will appear in search results even though they're pretty much useless there. You can see this with clever searches on Google for things like [ sitemap.xml site:django-site.com ].
This oddity causes an additional problem, because currently the only reliable way to prevent a page from appearing in Bing or Google is:
- to include it in your sitemap so that it will be crawled as soon as possible; and
- to place a noindex tag on the page or resource itself.
The site I run has strict requirements when it comes to this fun topic, and there are a lot of people who believe robots.txt works, so I've written up my findings on this: http://michaeljaylissner.com/blog/respecting-privacy-while-providing-hundreds-of-thousands-of-public-documents
I'll write up a patch (my first) to fix this, and will submit it shortly.
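A patch along these lines could be a small view decorator that stamps the header onto sitemap responses. A minimal sketch follows; the decorator name and the exact header value are my assumptions about what such a patch might look like, and a plain dict stands in for Django's HttpResponse so the sketch is self-contained:

```python
from functools import wraps

def x_robots_tag(view_func):
    # Wrap a view so its response carries an X-Robots-Tag header.
    # "noindex" asks search engines to crawl the sitemap but keep
    # it out of search results; the exact value is up to the patch.
    @wraps(view_func)
    def inner(request, *args, **kwargs):
        response = view_func(request, *args, **kwargs)
        response["X-Robots-Tag"] = "noindex, noodp, noarchive"
        return response
    return inner

# Minimal stand-in for an HttpResponse (headers are set with item
# access on both); a real patch would decorate the sitemap views in
# django.contrib.sitemaps instead.
@x_robots_tag
def sitemap_view(request):
    return {"Content-Type": "application/xml"}

print(sitemap_view(None)["X-Robots-Tag"])
```

Because Django's HttpResponse already supports setting headers via item assignment, the same decorator body should apply unchanged to the real sitemap views.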
Change History (11)
comment:1 Changed 22 months ago by mlissner
- Needs documentation unset
- Needs tests unset
- Owner changed from nobody to mlissner
- Patch needs improvement unset
- Status changed from new to assigned
Changed 22 months ago by mlissner
comment:4 Changed 21 months ago by andrewgodwin
- Triage Stage changed from Unreviewed to Ready for checkin
comment:6 Changed 18 months ago by agestart@…
- Has patch unset
- Keywords decorator, sitemap.xml, robots added
comment:8 Changed 12 months ago by aaugustin
- Component changed from Core (Other) to contrib.sitemaps