Useful for launch checks
Catch accidental Disallow: / rules, staging leftovers, blocked service pages, and crawler-specific rules before they become expensive mysteries.
Paste a URL and test whether robots.txt allows or blocks common search crawlers. You will see the matching rule, sitemap declarations, and a plain-English explanation of what the file is doing.
Robots.txt is a crawl-control file. It tells crawlers which paths they are allowed to request. This tool checks whether a specific URL is blocked by the site's robots.txt rules for the crawler you choose.
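If you want to reproduce the basic check yourself, here is a minimal sketch using Python's standard-library urllib.robotparser. The Googlebot user agent and example.com URL are placeholders, not the tool's actual defaults, and this sketch only answers allowed/blocked; it does not report which rule matched.

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def is_crawl_allowed(page_url: str, user_agent: str = "Googlebot") -> bool:
    """Fetch the site's robots.txt and report whether user_agent may request page_url."""
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(is_crawl_allowed("https://example.com/private/report.html"))
```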
This tool does not confirm whether Google has indexed a page. It checks the crawl permission signals you can inspect from outside the site.
If a URL is blocked, the report shows the exact Allow or Disallow rule that caused the result so you know what to change.
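To see why a single rule "causes" the result, it helps to know how precedence works: under RFC 9309, the rule with the longest matching path wins, and Allow wins a tie. The sketch below illustrates that logic with made-up rules; it is a simplification that ignores wildcards (* and $).

```python
def match_rule(path: str, rules: list[tuple[str, str]]) -> tuple[str, str] | None:
    """Return the (directive, rule_path) that governs path, or None if nothing matches."""
    candidates = [(d, p) for d, p in rules if path.startswith(p)]
    if not candidates:
        return None  # no rule matches: crawling is allowed by default
    # Longest matching path wins; on a tie, "allow" sorts ahead of "disallow".
    return max(candidates, key=lambda r: (len(r[1]), r[0] == "allow"))

# Hypothetical rules for one user-agent group.
rules = [
    ("disallow", "/private/"),
    ("allow", "/private/help"),
]
print(match_rule("/private/help/index.html", rules))  # ('allow', '/private/help')
print(match_rule("/private/data.csv", rules))         # ('disallow', '/private/')
```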
Use this alongside the schema audit tool when you need to check both crawl access and structured data.
The most common surprise is a leftover Disallow: / after a redesign or staging launch. Another is assuming robots.txt works like noindex. It does not. Robots.txt controls crawling, while noindex controls indexing instructions on crawlable pages.
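The two signals can be inspected separately, as in this hedged sketch: crawl permission comes from robots.txt, while noindex comes from a robots meta tag or an X-Robots-Tag response header. The example.com URL and Googlebot user agent are placeholders, and the meta-tag regex is a rough check, not a full HTML parser.

```python
import re
import urllib.request
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def crawl_and_index_signals(page_url: str, user_agent: str = "Googlebot") -> dict:
    parts = urlsplit(page_url)

    # Signal 1: may the crawler request this URL at all?
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    crawl_allowed = robots.can_fetch(user_agent, page_url)

    # Signal 2: if the page is crawlable, does it ask not to be indexed?
    # A noindex on a blocked page cannot be seen by the crawler, which is
    # exactly why robots.txt is not a substitute for noindex.
    noindex = None
    if crawl_allowed:
        with urllib.request.urlopen(page_url) as resp:
            header = resp.headers.get("X-Robots-Tag", "")
            body = resp.read(200_000).decode("utf-8", errors="ignore")
        meta = re.search(
            r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
            body, re.IGNORECASE)
        noindex = "noindex" in header.lower() or bool(
            meta and "noindex" in meta.group(1).lower())

    return {"crawl_allowed": crawl_allowed, "noindex": noindex}

print(crawl_and_index_signals("https://example.com/"))
```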