
Crawler rule testing on Tolyo.app

Test Robots.txt Rules

Check if a URL is blocked by robots.txt, test allow vs disallow logic, and see which rule matches for Googlebot or another crawler.

Use this rule tester to debug indexing issues, blocked paths, and crawler access before you edit your live robots.txt file.

Robots.txt content

Paste existing rules or fetch a live robots.txt file to test specific paths.

Test one path or a bulk list

Check the homepage, admin paths, search URLs, product pages, or a custom bulk list without leaving the same screen.

Switch user-agents instantly

Test rules for Googlebot, Bingbot, AdsBot-Google, Googlebot-Image, or a custom crawler string.

See the winning rule clearly

Get the matched group, matched directive, highlighted lines, and a plain-English explanation of the result.

What a robots.txt rule tester helps you check

A robots.txt rule tester is built for one specific job: checking whether a path is allowed or blocked for a selected crawler. Instead of reading raw directives and guessing which one wins, you can test the exact URL or path that matters and review the rule that decided the outcome.

That is useful for questions like whether robots.txt is blocking your website, why a page is not being crawled, or whether a section that should be allowed is still getting blocked by a broader disallow. The tool is designed for people who want to test robots.txt online without running a full crawler or waiting for a separate audit platform.

How to test robots.txt rules

Fetch a live robots.txt file or paste the current version into the editor. Then enter the path you want to inspect, choose the crawler you care about, and run the test. Tolyo.app will tell you whether the path is allowed or blocked, which user-agent group matched, and which directive won.

This mirrors the most common workflow behind searches like how to test robots.txt rules, test robots.txt online, or test robots.txt rules for a specific URL. The goal is not just to show a result, but to explain why the result happened.

  1. Fetch or paste the robots.txt content you want to test.
  2. Enter a full URL or just the path you want to inspect.
  3. Select Googlebot, another crawler, or enter a custom user-agent.
  4. Run the rule test and review the matched group, rule, and explanation.
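The steps above can be sketched with Python's standard-library parser. This is a minimal example using a hypothetical robots.txt file and the placeholder domain example.com; note that `urllib.robotparser` follows the original first-match convention within a group rather than Google's longest-match precedence, so it is a rough stand-in for what a full tester does.

```python
from urllib import robotparser

# Hypothetical robots.txt content, as pasted into a tester
ROBOTS = """\
User-agent: *
Disallow: /admin/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# can_fetch(user_agent, url) answers the allow/disallow question
print(rp.can_fetch("Googlebot", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))       # True
```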

Check if a URL is blocked by robots.txt

One of the most common technical SEO tasks is checking if a single URL is blocked by robots.txt. This can be surprisingly hard to answer by reading the file manually when there are several crawler groups, broad disallow patterns, or specific allow overrides inside a blocked section.

That is why the page focuses on the allow vs disallow test itself. If a robots.txt disallow does not seem to be working, there may be a more specific allow that reopens a path. If a page is unexpectedly blocked, a broad group may be catching more than you intended. The rule tester makes that logic visible.
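That precedence logic can be sketched in a few lines. Under RFC 9309 (the convention Googlebot follows), the rule with the longest matching pattern wins, and a tie goes to allow. The function below is an illustrative sketch using plain prefix patterns, not the tool's actual implementation:

```python
def decide(path: str, rules: list[tuple[str, str]]) -> str:
    """Pick the winning directive for a path, longest-pattern-wins.

    rules: (directive, pattern) pairs such as ("disallow", "/admin").
    Patterns are treated as simple prefixes here for clarity.
    Ties go to "allow"; no match at all also means "allow".
    """
    winner, winner_len = "allow", -1
    for directive, pattern in rules:
        if pattern and path.startswith(pattern):
            longer = len(pattern) > winner_len
            tie_allow = len(pattern) == winner_len and directive == "allow"
            if longer or tie_allow:
                winner, winner_len = directive, len(pattern)
    return winner

rules = [("disallow", "/admin"), ("allow", "/admin/public")]
print(decide("/admin/login", rules))        # disallow
print(decide("/admin/public/docs", rules))  # allow — the longer allow reopens the path
```

This is exactly the "allow override inside a blocked section" case: the broad `/admin` disallow catches everything unless a longer allow pattern matches the same path.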

Test robots.txt for Googlebot and other crawlers

Different bots can match different user-agent groups, which is why a robots.txt tester for Google-focused debugging still needs user-agent switching. A file that looks fine under `User-agent: *` may behave differently for Googlebot, Bingbot, or a more specific crawler variant.

Use this page to test robots.txt for Googlebot, check crawler-specific rules, or compare how the same path behaves under different groups. That helps when pages are not indexed, images are blocked unexpectedly, or a path works for one bot but not for another.
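Group selection is the part that most often surprises people: a crawler uses the most specific matching user-agent group and ignores `*` entirely. The sketch below shows this with Python's standard-library parser, hypothetical rules, and the placeholder domain example.com:

```python
from urllib import robotparser

# Hypothetical file: a broad block for everyone, plus a Googlebot-specific group
ROBOTS = """\
User-agent: *
Disallow: /search

User-agent: Googlebot
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# The same path resolves differently depending on which group matches
print(rp.can_fetch("Googlebot", "https://example.com/search"))  # True  (Googlebot group)
print(rp.can_fetch("Bingbot", "https://example.com/search"))    # False (falls back to *)
```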

Why pages get blocked unexpectedly

Unexpected blocking usually comes from broad disallow rules, duplicate groups, unclear wildcard patterns, or asset directories that were blocked during an older deployment and never revisited. In real projects, this often shows up as pages not indexed because of a robots.txt issue, missing CSS or JavaScript in rendering workflows, or important landing pages that never seem to get crawled.
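Wildcard patterns are a frequent source of over-blocking because `*` matches any run of characters anywhere in the path. One way to see what a pattern really covers is to translate it into a regex, as in this sketch (following the RFC 9309 reading, where a trailing `$` anchors the end of the URL):

```python
import re

def robots_pattern(pattern: str) -> re.Pattern:
    """Compile a robots.txt path pattern into a regex (illustrative sketch).

    '*' matches any run of characters; a trailing '$' anchors the end.
    Everything else is matched literally, from the start of the path.
    """
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(regex + ("$" if anchored else ""))

print(bool(robots_pattern("/search*").match("/search/results")))       # True
print(bool(robots_pattern("/*.pdf$").match("/files/report.pdf")))      # True
print(bool(robots_pattern("/*.pdf$").match("/files/report.pdf?x=1")))  # False
```

Seeing `/*.pdf$` expand to "any path ending in .pdf" makes it obvious why such a rule can also catch rendered assets or parameterized URLs you did not intend to block.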

The tester helps debug those cases by highlighting the lines involved and explaining which rule won. That is much faster than guessing from memory or manually tracing the same rules repeatedly across several paths.

Does this tool fetch my website data?

When you paste robots.txt content manually, the testing stays in the browser. If you use the live fetch feature, Tolyo.app's backend requests the robots.txt file on your behalf so it can safely normalize the domain, fetch the right URL, and return the file for analysis.

The backend fetch is there for accuracy and convenience, not for storing a site copy. It pulls the current robots.txt temporarily so you can test and debug it more easily.

Common use cases

  • Check if important pages are blocked from Google before an indexing review.
  • Debug why a path under /admin, /search, or /blog behaves differently than expected.
  • Compare how Googlebot and Bingbot match the same rules.
  • Test a batch of URLs after a robots.txt change or site migration.
  • Confirm that allow overrides inside blocked sections are working correctly.

Frequently asked questions

How do I test robots.txt rules?

Paste or fetch your robots.txt file, enter the URL or path you want to inspect, choose a user-agent, and run the test to see whether the path is allowed or blocked.

How do I know if robots.txt is blocking my page?

Use the tester to check that specific page. The result shows the matched user-agent group, the winning rule, and a plain-English explanation.

Can I test robots.txt for Googlebot?

Yes. Select Googlebot from the user-agent list and test any full URL or path you want to inspect.

Why does a disallow rule sometimes not seem to work?

A more specific allow rule can override a broader disallow. The tester makes that precedence visible so you can see why the result happened.

Can I test more than one URL at once?

Yes. The bulk tester lets you paste multiple paths or URLs and review allowed or blocked results in one pass.