Paste a URL and run a check to see the signals that affect crawl and index eligibility: redirects, status codes, robots.txt access, noindex directives, and canonical tags.
A page can be live and still be invisible in search. Indexability is the technical side of that question: can a crawler reach the URL, is it allowed to fetch it, and does the page tell search engines to keep it out of the index or to treat another URL as the primary version?
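The decision logic behind those questions can be sketched as a small function. This is a simplified model, not the checker's actual implementation; the function name, parameters, and verdict strings are assumptions for illustration.

```python
def index_eligibility(status, robots_allowed, noindex, canonical_is_self):
    """Combine crawl and index signals into a rough verdict (hypothetical helper).

    status            -- HTTP status code of the final URL after redirects
    robots_allowed    -- whether robots.txt permits fetching the URL
    noindex           -- whether a noindex directive is present
    canonical_is_self -- whether the canonical tag points at this URL
    """
    if not robots_allowed:
        # Blocked pages cannot be fetched, so on-page directives are unseen.
        return "blocked by robots.txt (on-page directives are unseen)"
    if status != 200:
        return f"not indexable: status {status}"
    if noindex:
        return "crawlable but excluded by noindex"
    if not canonical_is_self:
        return "signals may consolidate to the canonical URL"
    return "eligible for indexing"

print(index_eligibility(200, True, False, True))   # eligible for indexing
print(index_eligibility(200, True, True, True))    # crawlable but excluded by noindex
```

Note the ordering: a robots.txt block is checked first, because it prevents the crawler from ever seeing the status code, noindex tag, or canonical on the page.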
If robots.txt blocks a URL, search engines may not be able to fetch the page to read its content or directives.
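A robots.txt check can be reproduced with Python's standard-library parser. The robots.txt body and example.com URLs below are assumptions for illustration, not a real fetch.

```python
from urllib import robotparser

# Hypothetical robots.txt body (assumption, not fetched from a live site).
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A disallowed URL cannot be fetched, so its content and meta tags go unread.
print(parser.can_fetch("*", "https://example.com/private/report"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post"))       # True
```

In a real check you would point `RobotFileParser.set_url()` at the site's live `/robots.txt` and call `read()` instead of parsing a literal string.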
A noindex directive tells search engines not to include the page in search results, even when the page is crawlable.
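Detecting a noindex directive amounts to reading the robots meta tag (or the `X-Robots-Tag` HTTP header, not shown here). A minimal sketch with the standard-library HTML parser; the page source is a made-up example.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in attrs.get("content", "").split(",")
            )

# Hypothetical page source (assumption for illustration).
html = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
p = RobotsMetaParser()
p.feed(html)
print("noindex" in p.directives)  # True
```

Note the paradox this illustrates: the tag is only honored if the crawler is allowed to fetch the page, which is why combining noindex with a robots.txt block is counterproductive.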
A canonical tag can tell search engines that another URL should receive the indexing and ranking signals.
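Canonical URLs are often relative, so a checker has to resolve them against the page's own address. A small sketch under that assumption; the URLs are illustrative only.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class CanonicalParser(HTMLParser):
    """Finds the href of the first <link rel="canonical"> tag."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel", "").lower() == "canonical":
            self.canonical = attrs.get("href")

# Hypothetical page: a tracked URL whose canonical points at the clean version.
page_url = "https://example.com/blog/post?utm_source=newsletter"
html = '<html><head><link rel="canonical" href="/blog/post"></head></html>'
p = CanonicalParser()
p.feed(html)
print(urljoin(page_url, p.canonical))  # https://example.com/blog/post
```

If the resolved canonical differs from the fetched URL, indexing and ranking signals may consolidate to that other address rather than the page you checked.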
For the bigger picture, read how to check if Google can find your website, or use this with the robots.txt checker and sitemap validator.