How to reliably control external crawlers and reduce crawl load: a practical guide with nginx rate-limiting

Direct answer: Use robots.txt for cooperative guidance, but rely on server-side controls (nginx) for immediate, reliable protection. This article explains why robots.txt sometimes doesn’t work, how to diagnose the problem, and how to implement a safe, production-ready, nginx-based, per-user-agent rate-limiting strategy that preserves access while protecting your servers.

What this article answers

Central question: How can I control aggressive crawlers (for example AhrefsBot) when robots.txt changes don’t reduce crawl traffic, and what practical nginx configuration will reliably slow them down without disrupting normal …
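As a preview of the approach developed later, here is a minimal sketch of per-user-agent rate limiting in nginx. The zone name, rate, burst values, hostname, and upstream address are illustrative assumptions, not the exact values recommended in the article.

```nginx
# Minimal sketch (illustrative values): map crawler user agents to a key,
# then rate-limit only requests that carry that key. Lives in the http {} context.

map $http_user_agent $crawler_key {
    default      "";                     # empty key: request is not rate-limited
    ~*ahrefsbot  $binary_remote_addr;    # matched crawler: limit per client IP
}

# Shared-memory zone for limiter state; 1r/s is an example rate, not a recommendation
limit_req_zone $crawler_key zone=crawlers:10m rate=1r/s;

server {
    listen 80;
    server_name example.com;             # hypothetical hostname

    location / {
        # burst absorbs short spikes; delay smooths them instead of rejecting outright
        limit_req zone=crawlers burst=5 delay=3;
        proxy_pass http://127.0.0.1:8080;   # hypothetical upstream
    }
}
```

In this sketch, requests whose User-Agent matches the pattern share a per-IP limit, while all other traffic maps to an empty key and is never throttled; the rest of the article builds on this idea with production-ready values and diagnostics.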