Most pages on this topic repeat the same stats and fixes. This one walks you through the test on your own firm, shows you the exact sameAs array that earned me a ChatGPT citation this week, and tells you which cause to fix first based on your market tier.
Your firm fails on one of four fronts: weak entity definition (inconsistent NAP across directories and bar registry), thin content architecture, missing attorney E-E-A-T signals, or an absent off-site footprint. The #1 cause, affecting 73% of firms, is entity fragmentation; ChatGPT can’t confidently identify who you are across the web, so it doesn’t cite you even when you rank #1 on Google.
Test your own firm first · 60 seconds
The 3-query diagnostic
Run these in ChatGPT, Perplexity, and Google AI Overview. The pattern of which platforms your firm appears on tells you which cause to fix first.
Query 1 · Head term
best [practice] lawyer [your city]
Miss here → entity gap, fix #1
Query 2 · Sub-topic
what happens if I refuse a breathalyzer in FL
Miss here → thin content, fix #2
Query 3 · Local
who should I hire for a [case] case in [your zip]
Miss here → off-site gap, fix #4
Cross-check: In Google AIO but not ChatGPT? Entity graph gap (cause #1). In ChatGPT but not AIO? Freshness gap. Missing everywhere? Start with #1.
“Google ranks and ChatGPT citations are now two different systems. Ahrefs shows the overlap between them collapsed from 76% to 38% in 7 months.”
Jorge Argota · April 2026
From the paralegal seat · What audit data misses
3 things I learned about AI visibility from inside the firm
01
Claimants cross-check ChatGPT against Google Reviews before calling.
ChatGPT gives them 3 names; they Google each one and pick the one with a recent 5-star review mentioning their case type. If your reviews don’t mention case types, you still lose the call.
02
Intake response time is a ranking factor ChatGPT can’t see — but claimants remember it.
40-minute intake response → voicemail → 1-star review → ChatGPT signal against you next quarter. Bad intake compounds against AI visibility.
03
Florida Bar Rule 4-7.2 bans some of the tactics generic GEO guides recommend.
Comparative superiority and results-implying language drive AI citations everywhere else. Florida attorneys can’t publish that. Narrower window, different rules.
The 4 root causes, ranked by frequency
Frequencies cross-validated against TendorAI’s 8,625-solicitor audit and Splat’s 120-query AI study. Fix them in this order; fixing #4 before #1 is wasted work.
Cause #1 · Entity fragmentation (73% of firms)
Why this is #1: ChatGPT scores every business on entity confidence before evaluating content quality. Inconsistent firm name, attorney name, phone, or address across the web drops confidence below the citation threshold — so you’re invisible regardless of how good your site is.
What it looks like
NAP mismatch: “Smith Law” / “Smith Law Firm, P.A.” / “John Smith, Esq.” across 3 platforms
Attorney uses “Johnny” on site but “John A.” in Florida Bar registry
Missing sameAs schema linking profiles
GBP phone differs from website footer
What to fix
NAP audit across 15+ directories (down to punctuation)
sameAs schema array linking: LinkedIn, Florida Bar profile, Avvo, Justia, Martindale, GBP, Super Lawyers, YouTube
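A sameAs array sits inside the firm’s existing JSON-LD and lists each verified profile URL verbatim. This is an illustrative sketch with placeholder URLs and a placeholder firm name, not a copy of any specific firm’s markup — swap in your actual profiles, and make sure the name matches your bar registry character for character:

```json
{
  "@context": "https://schema.org",
  "@type": "Attorney",
  "name": "Smith Law Firm, P.A.",
  "url": "https://www.example-firm.com",
  "telephone": "+1-813-555-0100",
  "sameAs": [
    "https://www.linkedin.com/company/smith-law-firm",
    "https://www.floridabar.org/directories/find-mbr/",
    "https://www.avvo.com/attorneys/example",
    "https://www.justia.com/lawyers/example",
    "https://www.martindale.com/attorney/example",
    "https://g.co/kgs/example",
    "https://profiles.superlawyers.com/example",
    "https://www.youtube.com/@smithlawfirm"
  ]
}
```

Every URL must resolve to a claimed, completed profile — a sameAs entry pointing at an unclaimed listing reinforces the fragmentation rather than fixing it.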
Cause #2 · Thin content architecture (68% of firms)
Why this is #2: AI Overview uses query fan-out — one search triggers 8-12 sub-queries, each pulling from a different content pool. A single practice area page misses every sub-query citation slot. Firms under 300 words per page average 18/100 AI visibility; firms with 500+ words and FAQ coverage average 41/100.
One query → 10 sub-queries behind it
· What is Florida implied consent law?
· Florida DUI penalties first offense?
· Can you refuse a breathalyzer in FL?
· Average cost DUI lawyer Tampa?
· What happens at a DUI arraignment?
· Florida DUI statute of limitations?
· Tampa DUI attorney fees?
· Do I need a lawyer for a first DUI?
· Hillsborough County DUI court process?
· Florida hardship license after DUI?
What it looks like
One “criminal defense” page, no DUI / drugs / assault sub-pages
Blog posts not linked to any pillar
Generic FAQ (“Why do I need a lawyer?”) instead of intake questions
No coverage of the “what / how / can I / do I need” intent spectrum
Cause #3 · Missing attorney E-E-A-T signals (61% of firms)
Why this is #3: Legal content is YMYL — the highest E-E-A-T scrutiny of any vertical. Harvard Journal of Law and Technology (Jan 2026) reviewed 50 US law firm sites and flagged three patterns keeping them out of AI Overviews: reassurance-copy openings, no FAQ headings, and missing attorney bylines with statute citations.
“A byline with a real attorney’s name, proper Person schema, and a link to a verified bio is a different trust signal than ‘by the firm.’ Pages with explicit authorship, quoted statutes, and case citations get cited at markedly higher rates in YMYL verticals.”
What it looks like
“Staff Writer” or “The Firm” attribution
Attorney bio lists name + practice area only, no bar admissions
What to fix
Named attorney byline + JD + Florida Bar number on every page
Person schema with hasCredential, memberOf (Florida Bar)
Statute citation for every substantive legal claim
Visible “Last Updated” date + quarterly review + dateModified in schema
Freshness matters: 85% of AI Overview citations were updated in the last 2 years; 44% from 2025 alone. Content older than 12-18 months gets actively deprioritized.
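The byline, credential, and freshness signals above come together in the page’s Article schema with a nested Person author. A minimal sketch — the attorney name, bar number, dates, and URL are all placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Can You Refuse a Breathalyzer in Florida?",
  "dateModified": "2026-04-01",
  "author": {
    "@type": "Person",
    "name": "John A. Smith",
    "jobTitle": "Attorney",
    "url": "https://www.example-firm.com/attorneys/john-smith",
    "hasCredential": {
      "@type": "EducationalOccupationalCredential",
      "credentialCategory": "degree",
      "name": "Juris Doctor"
    },
    "memberOf": {
      "@type": "Organization",
      "name": "The Florida Bar"
    }
  }
}
```

The author `url` should point to a verified bio page, and `dateModified` should change only when the page actually gets its quarterly review — stamping a fresh date on stale content is a detectable pattern.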
Cause #4 · Absent off-site footprint (54% of firms)
Why this is #4 (not higher): Off-site matters more in specific contexts. Tier 1 markets (NYC, Chicago, LA) — firm websites are invisible as direct sources; Chambers and BigLaw directories are the baseline. Tier 4 markets (Tampa, Orlando, Jacksonville, Brandon) — your website IS the primary AI citation source because directory coverage is thin.
ChatGPT recommended Morgan & Morgan, Lipcon & Lipcon, and Panter Law for “best personal injury lawyer Miami” — none ranked #1 on Google. ChatGPT pulled from Yelp reviews, Miami Herald mentions, YouTube, and Reddit. Not domain authority. Not backlinks. Actual presence where real people discuss lawyers.
What it looks like
No YouTube channel (now the #1 cited domain in AI Overviews at 5.6%)
No Reddit mentions (citations surging due to OpenAI partnership)
Avvo / Justia / Martindale profiles unclaimed or incomplete
No local editorial (Tampa Bay Times, Orlando Sentinel, Miami Herald)
What to fix
YouTube channel with attorney explainers + clean transcripts
Review velocity: 5+ new/month with case-type mentions
“Fix them in order. Fixing #4 before #1 means your directory citations point to fragmented entity data — AI still can’t identify your firm.”
Your market tier changes the playbook
A Tampa firm and a Manhattan firm need different strategies. Most guides ignore this and prescribe Chambers submissions to everyone — wasted budget in Brandon, table stakes in Midtown.
Tier · Cities · Driver · Opportunity
Tier 1 LOCKED · NYC, Chicago, LA · Chambers, BigLaw directories · Directory submissions only; website won’t move the needle
Tier 2 COMPETITIVE · Miami, Houston, Dallas, Phoenix · National directories + local editorial · Miami Herald, Houston Chronicle + review density
Tier 3 MID-MARKET · Philadelphia, Atlanta, Denver · Super Lawyers + local press · Attorney at Law Magazine, business journals
Tier 4 OPEN · Tampa, Orlando, Jacksonville, Brandon · Website + GBP + local citations · Your site = PRIMARY AI source; first-mover wins
If you’re in Tampa, Orlando, Jacksonville, or Brandon: your market is Tier 4. The window is open right now because most firms in your market haven’t done the entity work. Whoever executes first gets the citation slot.
Florida Bar Rule 4-7.2 — a narrower game
Generic GEO playbooks tell firms to maximize comparative claims and results-implying testimonials. Florida attorneys can’t. The full rule is published at floridabar.org.
Rule 4-7.2 restricts
Comparative superiority without objective basis (“Florida’s best PI attorney”)
Results-oriented language (“We will win your case”)
Testimonials implying outcome
Specific dollar outcome predictions
Compliant playbook uses
Factual proximity (“position 2 in Jacksonville med mal map pack”)
Past results with disclaimers and context
Outcome-neutral testimonial framing
Schema + statute citations + bar profile sameAs
Priority action plan
Most firms skip to #4 because directory submissions feel productive. Don’t. Entity work (#1) compounds everything else.
Week 1-2
Entity fixes unlock everything else
NAP audit 15+ directories
Attorney bar-name match
Deploy LegalService + Person JSON-LD
Attorney byline + credentials on every page
40-60 word answer block at top
Claim Avvo, Justia, Martindale, GBP
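The LegalService deployment in the checklist above might look like this minimal sketch. All values are placeholders; the name, address, and phone must match your GBP listing and bar registry character for character, since that exact-match consistency is the point of the entity work:

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Smith Law Firm, P.A.",
  "url": "https://www.example-firm.com",
  "telephone": "+1-813-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 N Example Ave, Suite 200",
    "addressLocality": "Tampa",
    "addressRegion": "FL",
    "postalCode": "33602"
  },
  "areaServed": "Tampa, FL",
  "employee": {
    "@type": "Person",
    "name": "John A. Smith"
  }
}
```

Embed it in a `<script type="application/ld+json">` tag sitewide, then validate it with Google’s Rich Results Test before moving to the content work.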
Month 1-3
Content + off-site compounding
8-12 spoke pages per pillar
FAQPage schema every Q&A section
YouTube channel + transcripts
Freshness dates + quarterly reviews
Super Lawyers / Best Lawyers submission
5+ reviews/month with case-type mentions
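The FAQPage schema item above is a short fragment attached to each Q&A section. The question and answer text here are illustrative; the answer cites Florida’s implied consent statute the way the E-E-A-T section recommends:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Can you refuse a breathalyzer in Florida?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, but refusal triggers an automatic license suspension under Florida's implied consent law, Fla. Stat. § 316.1932."
      }
    }
  ]
}
```

Keep each `Answer.text` in the 40-60 word range — the same answer-block length the Week 1-2 checklist targets for on-page copy.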
Month 3-12
Compounding authority
Chambers USA (Tier 1 / estate planning)
Reddit presence in FL legal subreddits
Local editorial placements
Bilingual content (Miami, Kissimmee)
Monthly ChatGPT/Perplexity audits
FAQ
Questions that don’t get answered in the body above.
How long does it take to appear in ChatGPT after optimization?
60 to 120 days for ChatGPT and Perplexity once entity and schema work is complete. Google AI Overview moves faster — 30 to 60 days with freshness date updates and IndexNow pings. ChatGPT training data updates in batches, so firms missing today may appear in the next model update.
What’s the difference between GEO and SEO?
SEO ranks you on Google. GEO (generative engine optimization) and AEO (answer engine optimization) get you cited inside AI Overviews, ChatGPT, Perplexity, and Claude. They overlap on entity graph and schema. They diverge on passage-level citability, off-site footprint, and freshness protocols. Run them together.
Can I rank in ChatGPT without Chambers USA?
Yes, in Tier 3-4 markets (Tampa, Orlando, Jacksonville, Brandon, and comparable mid-size Florida cities). Directory coverage is thin so your website functions as a primary AI source. In Tier 1 markets (NYC, Chicago, LA), Chambers and BigLaw directories are the baseline and website alone won’t produce citations.
Does Florida Bar Rule 4-7.2 affect AI visibility?
Yes. Rule 4-7.2 restricts exactly the signals generic GEO guides say to maximize — comparative superiority, results-implying language, outcome-suggesting testimonials. Florida firms build AI citation authority inside a narrower window using factual proximity statements and outcome-neutral framing.
How does ChatGPT decide which firms to recommend?
Four signals: structured data (LegalService, Person, FAQPage schema), consistent entity info (NAP match across Avvo, Justia, Martindale, state bar, GBP), trusted third-party citations (YouTube, Reddit, local editorial), and topical authority. Only 27% of accomplished firms appear in AI recommendations; ChatGPT cites just 1.2% of local businesses in any category.
Why is my law firm not showing up in ChatGPT?
One of four fronts: weak entity definition (inconsistent NAP — 73% of firms), thin content architecture (68%), missing attorney E-E-A-T signals (61%), or an absent off-site footprint (54%). Start with entity fragmentation — it’s the #1 cause and compounds all the others.
Free AI visibility audit
Want me to run the 3-query diagnostic on your firm?
Send me your firm name, city, and primary practice area. I’ll run the queries in ChatGPT, Perplexity, and Google AI Overview, then tell you which of the 4 root causes is your biggest gap.
Best fit: Florida PI, med mal, criminal defense, or family law firm. One firm per practice area per market; I decline the rest.
Response within 1 business day · No pitch deck · No contracts · Month-to-month after day 90
Nothing on this page is a guarantee of AI citations or Google rankings. Results vary by market tier, starting entity consistency, intake capacity, and Florida Bar compliance. Frequency percentages cross-validated against TendorAI’s 8,625-solicitor audit and Splat’s 120-query AI study. Always confirm advertising practices comply with Florida Bar rules, specifically Rule 4-7.2.