Patients are asking AI assistants to recommend doctors, dentists, and specialists. Here's how those recommendations actually get made — and what multi-location practices can do about it.
A growing number of patients are bypassing Google entirely. Instead of typing "best dentist near me" into a search bar, they're asking ChatGPT, Claude, or Perplexity: "Can you recommend a good dentist in Tampa that does implants?" or "I need an orthopedic surgeon near Fort Lauderdale who specializes in ACL reconstruction — who should I see?"
The AI responds with specific names, practices, and reasoning. Sometimes it's accurate. Sometimes it's outdated. Sometimes it invents practices that don't exist. But the trend is clear: conversational AI is becoming a discovery channel for healthcare providers, and most multi-location practices have zero strategy for it.
This is fundamentally different from SEO. There's no ranking algorithm to optimize for. No search console to submit your sitemap to. No way to verify what a language model "knows" about your practice. But there are things you can control — and the practices that build for this channel now will have a meaningful advantage as patient behavior continues to shift.
When a patient asks ChatGPT or Claude to recommend a provider, the model draws from two potential sources, depending on the product configuration.
Training data. Every large language model is trained on a massive corpus of web content captured at a point in time. If your practice had a substantive web presence when that data was collected — detailed provider bios, specific procedure descriptions, mentions across multiple authoritative sources — the model may have internalized that information. If your website was thin, templated, or indistinguishable from 50 other locations in your network, the model likely has nothing meaningful to say about you.
Real-time web search. ChatGPT with browsing, Claude with search, and Perplexity all search the web in real time to supplement their training data. When a patient asks for a provider recommendation, the AI issues search queries, retrieves results, reads pages, and synthesizes what it finds into a recommendation. This is the mechanism you can most directly influence — because it's essentially web search, filtered through an AI's judgment about what constitutes a credible, relevant answer.
The critical distinction from Google: an LLM doesn't return a list of ten links. It makes a judgment call. It recommends one, two, maybe three providers and explains why. To be one of those recommendations, your practice's web presence has to give the AI enough information to form a confident opinion — and that opinion has to be positive.
LLM recommendations aren't random. When a model searches the web and synthesizes a recommendation, observable patterns determine which practices get cited and which get ignored.
Entity specificity. LLMs work with entities — named people, named procedures, named technologies, named locations. A page that says "Our experienced team provides comprehensive dental care" gives the model nothing to work with. A page that says "Dr. Maria Torres, a board-certified prosthodontist, performs full-arch implant reconstructions using guided surgery and CEREC same-day restorations at our Coral Springs location" gives the model a dense cluster of specific, verifiable entities it can confidently reference in a recommendation.
Information depth per provider. When a patient asks "who's a good orthopedic surgeon for ACL in Fort Lauderdale," the AI needs enough information about a specific provider to form a recommendation: their training, their specialization, their approach, their credentials. A one-paragraph bio doesn't cut it. The practices that get recommended have provider pages that read like substantive professional profiles — the kind of depth that lets an AI say "Dr. Chen at Southeast Orthopedics specializes in ACL reconstruction using minimally invasive techniques, completed a sports medicine fellowship at Johns Hopkins, and has performed over 2,000 knee procedures."
Corroboration across sources. LLMs are more confident in recommendations when the same information appears across multiple independent sources — the practice website, provider directories, professional association listings, hospital affiliations, and review platforms. Consistent, specific information across sources signals to the model that the entity is real, active, and credible. Inconsistent or contradictory information (different credentials listed on different sites, outdated addresses, conflicting specialties) erodes the model's confidence.
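The kind of cross-source audit this implies can be sketched in a few lines. Everything below is illustrative — the source names, field names, and provider data are invented, and real listings would need far more robust normalization:

```python
# Hypothetical sketch: flag fields where a provider's listings disagree
# across sources. All source names and data below are invented examples.

SOURCES = {
    "practice_site": {"name": "Dr. Maria Torres", "specialty": "Prosthodontics",
                      "address": "4200 University Dr, Coral Springs, FL"},
    "directory":     {"name": "Dr. Maria Torres", "specialty": "Prosthodontics",
                      "address": "4200 University Dr, Coral Springs, FL"},
    "hospital":      {"name": "Maria Torres, DMD", "specialty": "General Dentistry",
                      "address": "4200 University Drive, Coral Springs, FL"},
}

def normalize(value: str) -> str:
    """Crude normalization so trivial formatting differences don't count."""
    return value.lower().replace("drive", "dr").replace(",", "").strip()

def find_conflicts(sources: dict) -> dict:
    """Return each field whose normalized values disagree across sources."""
    conflicts = {}
    fields = {f for record in sources.values() for f in record}
    for field in fields:
        values = {normalize(r[field]) for r in sources.values() if field in r}
        if len(values) > 1:
            conflicts[field] = {src: r.get(field) for src, r in sources.items()}
    return conflicts

if __name__ == "__main__":
    for field, listings in find_conflicts(SOURCES).items():
        print(f"Inconsistent '{field}':")
        for src, value in listings.items():
            print(f"  {src}: {value}")
```

Run against the sample data, the address survives normalization but the name and specialty conflict — exactly the kind of contradiction that erodes a model's confidence.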
Conversational FAQ alignment. People ask LLMs compound, contextual questions: "I'm nervous about getting a dental implant, what's the recovery like and how do I find someone good in Tampa?" The content that gets retrieved for these queries isn't your service page — it's long-form FAQ content that directly addresses the real concerns embedded in the question. Practices whose websites answer these compound questions in natural language give the AI exactly the kind of content it needs to form a helpful, specific response.
Structured data signals. When an LLM's search retrieval system encounters a page with Physician schema, MedicalProcedure schema, or MedicalOrganization schema, it can extract structured facts with high confidence. Schema markup doesn't guarantee a recommendation, but it removes ambiguity — and for an AI trying to decide between two similar-looking practices, the one with machine-readable structured data is easier to trust.
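As an illustration of what that markup looks like in practice, here is a minimal Physician JSON-LD block generated with a small Python helper. The provider, organization, and address are invented, and a real implementation should be validated against schema.org's current type definitions:

```python
import json

# Illustrative sketch: JSON-LD Physician markup for a provider page.
# All names, credentials, and locations below are invented examples.
physician = {
    "@context": "https://schema.org",
    "@type": "Physician",
    "name": "Dr. Maria Torres",
    "medicalSpecialty": "Prosthodontics",
    "availableService": {
        "@type": "MedicalProcedure",
        "name": "Full-arch dental implant reconstruction",
    },
    "parentOrganization": {
        "@type": "MedicalOrganization",
        "name": "Coral Springs Dental Group",
    },
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Coral Springs",
        "addressRegion": "FL",
    },
}

# Emit the <script> block a page template would embed in <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(physician, indent=2)
    + "\n</script>"
)
print(snippet)
```

The point is not the template itself but the unambiguity: every fact an AI retrieval system might otherwise have to infer from prose — provider, specialty, procedure, organization, location — is stated as a machine-readable key-value pair.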
Most multi-location healthcare websites are structurally invisible to LLMs. Here's why:
Templated content across locations means no individual location has enough unique, specific information for an LLM to distinguish it. If 50 dental practices in your DSO share near-identical "Dental Implants" service pages, an AI searching for implant providers in a specific city has nothing that differentiates your Tampa location from your Orlando location — or from a competitor.
Generic provider bios give the model nothing to recommend. "Dr. Smith has been practicing dentistry for 15 years and is passionate about patient care" is the kind of sentence that appears on thousands of dental websites. It doesn't tell the AI what Dr. Smith specifically does, what makes their approach distinct, or why a patient should choose them.
Missing structured data means the AI has to guess what your pages are about instead of knowing definitively. A page with MedicalProcedure schema that explicitly states the procedure type, the performing provider, and the location is unambiguous. A page without it is just another blob of text that may or may not be relevant.
The compound effect: when a patient asks an AI for a provider recommendation in your market, the AI searches the web, finds your competitors' detailed provider profiles and specific clinical content, and recommends them — not because they're better clinicians, but because the AI had more to work with.
Patient acquisition costs vary by specialty, but the economics are consistent: each new patient represents significant lifetime value, and any channel that influences provider selection is worth investing in.
In dental, the average practice spends $150–$300 to acquire a new patient through traditional marketing channels, while each patient generates $2,000–$5,000 in lifetime value. A general dental practice targets 25+ new patients per month per full-time dentist. For a 15-location DSO, that's 375 new patients per month across the network — each one influenced at some point by an online search or AI recommendation.
In medical aesthetics, the numbers are even more compelling. The average medspa generates approximately $2 million in annual revenue per location, with an average spend per visit of $536 and patients returning 2–4 times per year. The cash-pay model and recurring treatment cycles (Botox every 3–4 months, filler maintenance every 6–12 months) create high customer lifetime values. For a multi-location aesthetics platform, each patient lost to a competitor's better AI visibility compounds across the treatment lifecycle.
In orthopedics, individual procedures like ACL reconstruction ($20,000–$50,000) or joint replacement ($30,000–$75,000) mean that even a handful of patients influenced by an AI recommendation represents meaningful revenue. When a patient asks ChatGPT for an orthopedic surgeon recommendation and your provider isn't in the response, that's a five-figure revenue event that went to a competitor.
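Taking the midpoints of the ranges above, the back-of-envelope math is easy to sanity-check. Every figure here is an assumption carried over from the text (plus an assumed three-year medspa retention horizon), not a benchmark for any particular practice:

```python
# Back-of-envelope sketch using midpoints of the ranges quoted above.
# All inputs are assumptions, not benchmarks for any real practice.

# Dental DSO
DENTAL_LTV = (2_000 + 5_000) / 2       # per-patient lifetime value
DENTAL_CAC = (150 + 300) / 2           # acquisition cost
NEW_PATIENTS_PER_MONTH = 25            # per full-time dentist
LOCATIONS = 15

monthly_patients = NEW_PATIENTS_PER_MONTH * LOCATIONS
monthly_ltv = monthly_patients * DENTAL_LTV
print(f"DSO: {monthly_patients} new patients/mo, "
      f"${monthly_ltv:,.0f} in lifetime value, "
      f"LTV:CAC about {DENTAL_LTV / DENTAL_CAC:.0f}:1")

# Medspa: recurring-visit model
SPEND_PER_VISIT = 536
visits_per_year = (2 + 4) / 2
years_retained = 3                     # assumed retention horizon
medspa_ltv = SPEND_PER_VISIT * visits_per_year * years_retained
print(f"Medspa patient LTV over {years_retained} yrs: ${medspa_ltv:,.0f}")

# Orthopedics: a single lost recommendation
acl_mid = (20_000 + 50_000) / 2
print(f"One lost ACL referral: ${acl_mid:,.0f} in revenue")
```

Even under conservative assumptions, a single AI recommendation that goes to a competitor represents four to five figures of revenue, which is why the channel merits attention before it becomes the dominant discovery path.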
The question isn't whether LLM discoverability will matter — it's whether your web presence is giving AI systems enough information to recommend your providers. The practices that invest in this now are building an advantage that compounds as AI-driven discovery becomes a larger share of patient acquisition.
To be clear: no one can guarantee that ChatGPT or Claude will recommend your practice. LLM outputs are non-deterministic — the same question asked twice can produce different answers. There's no "LLM SEO" methodology with predictable outcomes. Anyone claiming otherwise is overpromising.
What you can control is the quality, specificity, and structure of your web presence. You can produce content that gives AI systems the raw material they need to form confident, positive recommendations. You can ensure that when an LLM does search for providers in your specialty and market, it finds substantive, entity-rich, structured information about your providers and locations.
The deliverables that directly influence LLM discoverability aren't speculative: they're the content characteristics that observably correlate with practices being surfaced by AI systems. And they have the useful property of also improving your traditional SEO, your Google AIO visibility, and the quality of your web presence for the human patients who visit your sites directly.