Crawl Delay is a directive in the robots.txt file that instructs web crawlers to pause for a specified number of seconds between successive requests to a website's server. Webmasters use it to limit the rate at which bots access a site, preventing server overload or resource exhaustion caused by rapid, high-volume crawling.
How Crawl Delay Works
The Crawl-delay directive is written in robots.txt as, for example, Crawl-delay: 10, meaning a crawler should wait 10 seconds between requests. It can be applied globally (under User-agent: *) or to specific user agents. There is an important caveat, however: Googlebot ignores the Crawl-delay directive in robots.txt. Google instead provides a crawl rate adjustment setting within Google Search Console, where webmasters can request slower crawling; for Googlebot, that setting is the only officially supported rate-limiting method. Other crawlers, such as Bingbot, do respect the Crawl-delay directive.
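A minimal robots.txt illustrating both uses of the directive; the user-agent names and delay values here are illustrative, not recommendations:

```
# Default rule for all crawlers that honor Crawl-delay
User-agent: *
Crawl-delay: 10

# Stricter limit for a specific crawler
User-agent: Bingbot
Crawl-delay: 5
```

A crawler that supports the directive reads the group matching its user agent (falling back to the * group) and waits the stated number of seconds between fetches.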
When to Use Crawl Delay
Crawl delay is appropriate in specific circumstances:
- Shared hosting environments where bot traffic consumes significant server resources
- Limiting third-party SEO tool crawlers (Ahrefs, Semrush, etc.) that hammer servers
- Older servers with limited capacity to handle concurrent requests
- During site migrations when server load is already elevated
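From the crawler's side, well-behaved bots parse robots.txt and throttle themselves accordingly. Python's standard library exposes this via urllib.robotparser. A short sketch, assuming a hypothetical user agent called "MyCrawlerBot" and example delay values:

```python
import urllib.robotparser

# Illustrative robots.txt content; the agent names and values are examples.
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 10

User-agent: SemrushBot
Crawl-delay: 30
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# "MyCrawlerBot" matches no named group, so it falls back to the * group.
print(rp.crawl_delay("MyCrawlerBot"))  # 10
print(rp.crawl_delay("SemrushBot"))    # 30
```

A polite crawler would call time.sleep() with the returned value between successive requests; crawl_delay() returns None when the file sets no delay for that agent, so a sensible default is needed in that case.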
Why It Matters for SEO
Setting an appropriate crawl delay protects server stability without unnecessarily limiting search engines' ability to discover and index new content. On large websites, overly aggressive crawling can slow page load times for real users, indirectly harming user experience signals. Conversely, a crawl delay that is too long slows the discovery and indexing of new or updated content by crawlers that honor the directive. Balancing server health with crawl accessibility is a key consideration in technical SEO.