This free audit measures crawl, indexation, AI bot access, and discovery signals on this URL. Read the results below as a technical snapshot, not a full site verdict.
Content quality and authority analysis are available in Pro.
6 issues found on this page
Top findings in this free audit
Consider allowing ClaudeBot if you want to appear in AI search results: `User-agent: ClaudeBot` / `Allow: /`
Add `lang` attribute to your `<html>` tag: `<html lang="en">`. This helps AI engines identify content language and serve it to the correct audience.
Add `<link rel="alternate" type="application/rss+xml" href="/feed.xml">` to your HTML `<head>`. RSS feeds help Google and Perplexity discover content updates faster.
Add Organization schema: `{"@context": "https://schema.org", "@type": "Organization", "@id": "#organization", "name": "Your Company", "url": "https://example.com", "logo": {"@type": "ImageObject", "url": "https://example.com/logo.png"}, "sameAs": ["https://twitter.com/you", "https://linkedin.com/company/you"]}`
Don't skip heading levels (e.g., H2 -> H4). Use H3 under H2, H4 under H3. Each heading should be one level deeper than its parent section.
Opportunities to improve (7)
Your sitemap has 6 URLs but no image entries. Adding `<image:image>` elements helps AI crawlers discover and index your visual content.
Not a ranking signal. Google uses sitemap `<lastmod>` for crawl scheduling and recommends ETag over Last-Modified for HTTP caching. This header can help CDNs validate cached content but has negligible SEO impact.
llms.txt should have: `# H1` title, description paragraph, `## sections`, and `[links](url)` to key content.
Add `<meta property="og:image:width" content="1200">` and `<meta property="og:image:height" content="630">`. Optimal OG image is 1200x630px for social sharing and AI citation cards.
Add WebSite schema: `{"@context": "https://schema.org", "@type": "WebSite", "name": "Site Name", "url": "https://example.com", "potentialAction": {"@type": "SearchAction", "target": {"@type": "EntryPoint", "urlTemplate": "https://example.com/search?q={search_term_string}"}, "query-input": "required name=search_term_string"}}`
2 more in detailed results below
Measured summaries
These cards show free technical signals only.
Technical SEO shows the technical baseline for crawl, indexation, and page interpretation. AI Visibility shows whether AI bots and AI search systems can access and discover this URL. AI Readiness is a partial snapshot built only from the free signals measured in this audit, not the deeper content and authority analysis available in Pro.
AI Visibility
Partially Blocked
Some AI search bots are blocked from accessing your content
Key bot access snapshot
Measures whether AI bots can access and discover this URL. Content quality is scored separately in AI Readiness.
Key discovery signals
Your technical SEO is solid, but blocked AI bots limit your visibility in AI-powered search results.
Technical SEO
86%
+5 · Solid
Partial snapshot from free technical checks only
Snapshot of the measured technical baseline for this URL.
Crawl Efficiency
Crawl issues need attention
2 issues across 14 crawl and discovery checks
Longer bars mean this area is in better shape.
Discovery signals need the most attention in this audit.
Answer Quality
Minor answer quality issues found
1 answer quality issue across 1 content check
Longer bars mean this area is in better shape.
Structure is the biggest answer-quality risk, at a 3% penalty.
AI Readiness
Partial Snapshot
Partial snapshot from free technical checks only
Partial snapshot from the free signals this audit could score. It does not include the deeper content and authority analysis available in Pro.
Measured from 32 of 49 readiness checks in this audit
Save and rerun this audit
Create a free account to keep this audit, rerun up to 20 URLs/month, and track whether your fixes improved this page.
Technical (40)
Consider allowing ClaudeBot if you want to appear in AI search results: `User-agent: ClaudeBot` / `Allow: /`
ClaudeBot crawls for Anthropic's Claude. Blocking it prevents Claude from referencing your content.
- Bot
- ClaudeBot
- Robots.txt rules
- {"path" => "/", "type" => "disallow"}
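One way to grant access is a dedicated group in robots.txt. A minimal sketch, assuming the existing `/admin` and `/webhooks` disallows for other bots should stay in place:

```text
# Explicitly allow Anthropic's crawler
User-agent: ClaudeBot
Allow: /

# Existing rules for all other bots (unchanged)
User-agent: *
Disallow: /admin
Disallow: /webhooks
```

A crawler obeys the most specific group that matches its user agent, so the `ClaudeBot` group takes precedence over the wildcard rules for that bot.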
Add lang attribute to your <html> tag: <html lang="en">. This helps AI engines identify content language and serve it to the correct audience.
The lang attribute on <html> helps AI engines identify content language and serve it to the correct geographic audience.
Add <link rel="alternate" type="application/rss+xml" href="/feed.xml"> to your HTML <head>. RSS feeds help Google and Perplexity discover content updates faster.
RSS feeds help AI engines discover new content quickly — Perplexity and others poll feeds for fresh pages.
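Beyond the `<link>` tag, the feed itself can be a small RSS 2.0 document. A minimal sketch; the `/feed.xml` path, titles, and dates here are placeholders, not values measured on this site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Hybrid Ranking updates</title>
    <link>https://hybridranking.com/</link>
    <description>Recently published and updated pages</description>
    <item>
      <title>Example post title</title>
      <link>https://hybridranking.com/example-post</link>
      <pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```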
Your sitemap has 6 URLs but no image entries. Adding <image:image> elements helps AI crawlers discover and index your visual content.
Image sitemap entries help AI crawlers discover and index images for visual search results.
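A sitemap `<url>` entry with an image might look like the sketch below. This sitemap already declares the `image` namespace, so only the `<image:image>` elements need to be added; the image URL here is illustrative:

```xml
<url>
  <loc>https://hybridranking.com/</loc>
  <!-- One <image:image> per image worth indexing on this URL -->
  <image:image>
    <image:loc>https://hybridranking.com/og.png</image:loc>
  </image:image>
</url>
```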
Not a ranking signal. Google uses sitemap <lastmod> for crawl scheduling and recommends ETag over Last-Modified for HTTP caching. This header can help CDNs validate cached content but has negligible SEO impact.
Not a ranking signal. Google uses sitemap lastmod for crawl scheduling and recommends ETag over Last-Modified. Main value is CDN cache validation.
llms.txt should have: # H1 title, description paragraph, ## sections, and [links](url) to key content.
A malformed llms.txt file is worse than none — AI parsers will ignore it entirely.
- Issues
- missing descriptive text
- no markdown links
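A sketch of an llms.txt that addresses both flagged issues (missing descriptive text, no markdown links). The section name and link descriptions are placeholders; the link targets are taken from the sitemap URLs listed in this audit:

```markdown
# Hybrid Ranking

> A short paragraph describing what the site does and who it is for.

## Key pages

- [Home](https://hybridranking.com/): what the audit checks and why
- [Pricing](https://hybridranking.com/pricing): plans and limits
```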
Add <meta property="og:image:width" content="1200"> and <meta property="og:image:height" content="630">. Optimal OG image is 1200x630px for social sharing and AI citation cards.
Declared OG image dimensions (1200x630) ensure optimal display in social sharing and AI citation cards.
- OG image
- https://hybridranking.com/og.png
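Together, the tags might look like this sketch. It assumes `og.png` really is a 1200x630 image; declare the actual dimensions if it is not:

```html
<meta property="og:image" content="https://hybridranking.com/og.png">
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="630">
```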
If your site has content in multiple languages, add hreflang tags: `<link rel="alternate" hreflang="pl" href="https://example.com/pl/page">`, `<link rel="alternate" hreflang="en" href="https://example.com/en/page">`, and `<link rel="alternate" hreflang="x-default" href="https://example.com/page">`.
Multilingual sites see significantly higher AI search visibility.
This training crawler is treated as optional in this audit. Search-facing AI access is measured separately.
- Bot
- GPTBot
- Robots.txt rules
- {"path" => "/", "type" => "disallow"}
Googlebot indexes your site for Google Search and AI Overviews. Blocking it removes you from Google entirely.
- Bot
- Googlebot
- Robots.txt rules
- {"path" => "/", "type" => "allow"}
- {"path" => "/", "type" => "allow"}
- {"path" => "/admin", "type" => "disallow"}
- {"path" => "/webhooks", "type" => "disallow"}
Bingbot indexes for Bing and Microsoft Copilot. Blocking it removes you from Bing search and Copilot answers.
- Bot
- Bingbot
- Robots.txt rules
- {"path" => "/", "type" => "allow"}
- {"path" => "/", "type" => "allow"}
- {"path" => "/admin", "type" => "disallow"}
- {"path" => "/webhooks", "type" => "disallow"}
This training crawler is treated as optional in this audit. Search-facing AI access is measured separately.
- Bot
- Google-Extended
- Robots.txt rules
- {"path" => "/", "type" => "disallow"}
PerplexityBot indexes pages for Perplexity AI search. Blocking it means your content won't appear in Perplexity answers.
- Bot
- PerplexityBot
- Robots.txt rules
- {"path" => "/", "type" => "allow"}
- {"path" => "/", "type" => "allow"}
- {"path" => "/admin", "type" => "disallow"}
- {"path" => "/webhooks", "type" => "disallow"}
llms.txt is an emerging standard that tells AI models what your site is about and which pages matter most.
- Content
- # Hybrid Ranking AI Search Defense > Hybrid Ranking is an SEO audit tool that checks your pages for both traditional search engines and AI search (Google AI Overviews, ChatGPT, Perplexity). ## What it does - Runs 30+ technical SEO checks per page in under 30 seconds - Detects AI page type (art...
- Value
- Hybrid Ranking — AI Search Defense & Visibility Platform
- Content
- # As a condition of accessing this website, you agree to abide by the following # content signals: # (a) If a Content-Signal = yes, you may collect content for the corresponding # use. # (b) If a Content-Signal = no, you may not collect content for the # corresponding use. # (c) If ...
- Content
- <?xml version="1.0" encoding="UTF-8"?><urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www....
OAI-SearchBot powers ChatGPT's search feature. Blocking it removes you from ChatGPT search results entirely.
- Bot
- OAI-SearchBot
- Robots.txt rules
- {"path" => "/", "type" => "allow"}
- {"path" => "/", "type" => "allow"}
- {"path" => "/admin", "type" => "disallow"}
- {"path" => "/webhooks", "type" => "disallow"}
ChatGPT-User is the bot that fetches pages when users ask ChatGPT to browse. Blocking it prevents live page reads.
- Bot
- ChatGPT-User
- Robots.txt rules
- {"path" => "/", "type" => "allow"}
- {"path" => "/", "type" => "allow"}
- {"path" => "/admin", "type" => "disallow"}
- {"path" => "/webhooks", "type" => "disallow"}
- URL
- https://hybridranking.com/
Conflicting robots.txt rules confuse AI crawlers — they may default to blocking, silently hiding your content.
Favicons appear in browser tabs and AI search result cards, signaling site legitimacy.
- URL
- /favicon.ico
- Value
- Find what's killing your rankings. Free SEO audit tool with AI-powered analysis.
Mixed HTTP/HTTPS content triggers browser security warnings, increasing bounce rates and degrading crawl trust.
Resource hints (preconnect, dns-prefetch, preload) reduce page load time, improving Core Web Vitals and crawl efficiency.
- Sitemap directives
- https://hybridranking.com/sitemap.xml.gz
- Checked headers
- x-content-type-options
- x-frame-options
- strict-transport-security
A non-self-referencing canonical tells AI engines this page is a duplicate, redirecting citation credit to another URL.
- Page url
- https://hybridranking.com
- Canonical
- https://hybridranking.com/
- Tags found
- main
- nav
- header
- footer
- Sample URLs
- https://hybridranking.com
- https://hybridranking.com/pricing
- https://hybridranking.com/privacy
- https://hybridranking.com/terms
- https://hybridranking.com/registration/new
Sitemap lastmod dates tell AI crawlers which pages changed recently and need re-indexing.
The viewport meta tag is required for mobile-first indexing. Without it, AI crawlers may treat your page as non-mobile-friendly.
- Content
- width=device-width,initial-scale=1
This training crawler is treated as optional in this audit. Search-facing AI access is measured separately.
- Bot
- Bytespider
- Robots.txt rules
- {"path" => "/", "type" => "disallow"}
This training crawler is treated as optional in this audit. Search-facing AI access is measured separately.
- Bot
- CCBot
- Robots.txt rules
- {"path" => "/", "type" => "disallow"}
Add Organization schema: {"@context": "https://schema.org", "@type": "Organization", "@id": "#organization", "name": "Your Company", "url": "https://example.com", "logo": {"@type": "ImageObject", "url": "https://example.com/logo.png"}, "sameAs": ["https://twitter.com/you", "https://linkedin.com/company/you"]}
- Schema types found
- FAQPage
- Question
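Formatted as a JSON-LD block for the page `<head>`, the suggested Organization schema might look like this sketch; the company name, URLs, and social profiles are placeholders to replace with real values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "#organization",
  "name": "Your Company",
  "url": "https://example.com",
  "logo": {
    "@type": "ImageObject",
    "url": "https://example.com/logo.png"
  },
  "sameAs": [
    "https://twitter.com/you",
    "https://linkedin.com/company/you"
  ]
}
</script>
```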
Add "inLanguage": "pl" (or "en", "de", etc.) to your Article/WebPage schema. Use ISO 639-1 two-letter codes. For regional variants use BCP 47 format: "inLanguage": "pt-BR". Helps AI models and search engines correctly identify content language.
The inLanguage schema field helps AI serve your content to the right language audience.
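As a sketch of where the field goes, assuming the page content is English ("en"), `inLanguage` slots into an existing WebPage or Article node like this:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Example page",
  "inLanguage": "en"
}
```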
Add WebSite schema: {"@context": "https://schema.org", "@type": "WebSite", "name": "Site Name", "url": "https://example.com", "potentialAction": {"@type": "SearchAction", "target": {"@type": "EntryPoint", "urlTemplate": "https://example.com/search?q={search_term_string}"}, "query-input": "required name=search_term_string"}}
- Schema types found
- FAQPage
- Question
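Formatted for readability, the suggested WebSite schema might look like the sketch below. The `/search?q=` endpoint is hypothetical; include `potentialAction` only if the site actually has an on-site search:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Hybrid Ranking",
  "url": "https://hybridranking.com",
  "potentialAction": {
    "@type": "SearchAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://hybridranking.com/search?q={search_term_string}"
    },
    "query-input": "required name=search_term_string"
  }
}
</script>
```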
JSON-LD structured data helps AI engines understand your page content, entities, and relationships for richer citations.
- Types found
- FAQPage
- Fields present
- mainEntity
- Schema types found
- FAQPage
- Question
Don't skip heading levels (e.g., H2 -> H4). Use H3 under H2, H4 under H3. Each heading should be one level deeper than its parent section.
- Hierarchy skips
- Three steps to see what AI ... (H2) -> Paste URL (H4)
- Frequently asked questions (H2) -> What is hybrid ranking? (H4)
- Headings
- H1: Google ranks you. But does ChatGPT re...
- H2: See a real audit — no signup needed
- H2: The tools you rely on can't see this
- H2: Diagnoses why AI is stealing your tra...
- H3: See how ChatGPT reads your pages
- H3: Find out if AI recommends you
- H3: Check the signals that drive AI citat...
- H3: Detect content invisible to AI bots
- H3: Verify your Google foundation is solid
- H3: Get instant fixes you can ship in 5 m...
- ... and 10 more
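Using the second flagged skip as an example, the fix is to turn the jump from H2 to H4 into a single-level step:

```html
<!-- Before: H2 jumps straight to H4 -->
<h2>Frequently asked questions</h2>
<h4>What is hybrid ranking?</h4>

<!-- After: each heading sits one level below its parent -->
<h2>Frequently asked questions</h2>
<h3>What is hybrid ranking?</h3>
```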
- Headings
- Google ranks you. But does ChatGPT recommend you?
Homepage links not covered by sitemap may miss crawl prioritization. Important content pages should be in the sitemap for optimal discovery.
- Examples
- /
- /audits/uFPEqNpTMuGjtMpUt18HLV1v
Content starting near the top of the page is more likely to be indexed, extracted, and cited by AI engines.
- Elements before content
- 1 image(s)
- section
- section
- section
- section
- Links found
- /
- /session/new
- /registration/new
- /session/new
- /registration/new
- /registration/new
- /privacy
- /audits/uFPEqNpTMuGjtMpUt18HLV1v
- /registration/new?plan=pro
- /#new-audit
AI Bot Access (11)
How AI crawlers and search bots access your site
AI Discovery (12)
How AI engines find, index, and revisit your content
Content for AI (5 free; more available in Pro)
How well AI can parse, extract, and cite your content
Add Organization schema: {"@context": "https://schema.org", "@type": "Organization", "@id": "#organization", "name": "Your Company", "url": "https://example.com", "logo": {"@type": "ImageObject", "url": "https://example.com/logo.png"}, "sameAs": ["https://twitter.com/you", "https://linkedin.com/company/you"]}
- Schema types found
- FAQPage
- Question
Add WebSite schema: {"@context": "https://schema.org", "@type": "WebSite", "name": "Site Name", "url": "https://example.com", "potentialAction": {"@type": "SearchAction", "target": {"@type": "EntryPoint", "urlTemplate": "https://example.com/search?q={search_term_string}"}, "query-input": "required name=search_term_string"}}
- Schema types found
- FAQPage
- Question
JSON-LD structured data helps AI engines understand your page content, entities, and relationships for richer citations.
- Types found
- FAQPage
- Schema types found
- FAQPage
- Question
Content starting near the top of the page is more likely to be indexed, extracted, and cited by AI engines.
- Elements before content
- 1 image(s)
- section
- section
- section
- section
Deeper analysis is available in Pro
The findings above come from the free technical checks. Pro adds AI-powered deep analysis to this section, covering common issues found on most sites.
Trust & Authority (4 free; more available in Pro)
E-E-A-T signals that AI models use to assess credibility
Add lang attribute to your <html> tag: <html lang="en">. This helps AI engines identify content language and serve it to the correct audience.
The lang attribute on <html> helps AI engines identify content language and serve it to the correct geographic audience.
Add "inLanguage": "pl" (or "en", "de", etc.) to your Article/WebPage schema. Use ISO 639-1 two-letter codes. For regional variants use BCP 47 format: "inLanguage": "pt-BR". Helps AI models and search engines correctly identify content language.
The inLanguage schema field helps AI serve your content to the right language audience.
Favicons appear in browser tabs and AI search result cards, signaling site legitimacy.
- URL
- /favicon.ico
Deeper analysis is available in Pro
The findings above come from the free technical checks. Pro adds AI-powered deep analysis to this section, covering common issues found on most sites.
Estimated AI Visibility Loss
Estimate from the measured AI access and discovery issues on this URL
Estimate based on the AI access and discovery issues measured in this free audit. Treat it as deeper interpretation, not a separate site verdict.
-40%
Estimated AI Visibility Loss
3 issues found
Across 32 of 61 checks analyzed
This estimate reflects how much the measured access and discovery issues above may be reducing AI visibility.
Sign up free to see your top fixes and track your score
Create free account
Based on AI search research
3 research sources:
- AI search traffic share (12.2%) — Conductor benchmarks, 2026
- Citation probability model — Princeton GEO study, 2024
- Penalty calibration (129k domains) — SE Ranking, 2025
Individual results may vary by industry and content type.