UpToDate and Medscape have launched their own clinical AIs. But are confined models the right way to go?

How two of healthcare’s biggest knowledge platforms are building AI tools that value trust and safety over raw power.

AI is quickly becoming part of the clinical toolkit, and the race to bring reliable AI to the clinic is heating up. To meet growing demand, healthcare knowledge giants UpToDate and Medscape have entered the ring with a new kind of clinical assistant.

Both have launched their own clinical AIs, built on doctor-vetted medical knowledge.

Their approach is to lock AI inside a “walled garden”: a controlled environment where every answer comes from verified medical sources, not random web pages. They want to make AI useful without making it risky.

But are such clinical AIs really credible? Let’s explore.

The problem: AI can hallucinate, and that’s dangerous in medicine

General AI tools like ChatGPT are powerful and brilliant conversationalists, but they often “hallucinate,” confidently generating wrong or misleading information. In medicine, that’s not just inconvenient; it’s unacceptable.

So, how do you harness the power without the peril?

That’s the reason UpToDate and Medscape are building AIs that can’t wander, using the walled garden approach. Instead of letting their models scrape the entire internet, they’re confined to a trusted, proprietary database of expert-reviewed medical libraries.

This setup, known as Retrieval-Augmented Generation (RAG), acts like a filter. The AI retrieves only from verified sources before forming an answer. It’s like having a chatbot that reads only peer-reviewed journals and clinical guidelines before speaking.

As a result, unverified or misleading information is far less likely to creep into a doctor’s decision-making process.
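To make the retrieval step concrete, here is a minimal, hypothetical sketch of how a RAG system answers only from a curated corpus. Everything here is an illustrative assumption: the corpus entries, the keyword-overlap scoring, and the function names are stand-ins, not the actual UpToDate or Medscape implementation, which would use vector embeddings, a large language model, and far richer expert-reviewed content.

```python
import re

# Hypothetical stand-in for a curated, expert-reviewed library.
VERIFIED_CORPUS = {
    "Hypertension management": (
        "For most adults, first-line options include thiazide diuretics, "
        "ACE inhibitors, ARBs, and calcium channel blockers."
    ),
    "Type 2 diabetes screening": (
        "Screening is recommended for adults aged 35 to 70 who are "
        "overweight or obese."
    ),
}


def _words(text: str) -> set:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9-]+", text.lower()))


def retrieve(question: str, corpus: dict, min_overlap: int = 2) -> list:
    """Rank corpus entries by naive keyword overlap with the question."""
    q_words = _words(question)
    hits = []
    for title, text in corpus.items():
        overlap = len(q_words & _words(title + " " + text))
        if overlap >= min_overlap:
            hits.append((overlap, title, text))
    hits.sort(reverse=True)  # best-matching source first
    return [(title, text) for _, title, text in hits]


def answer(question: str, corpus: dict) -> str:
    """Answer only from retrieved verified sources; refuse otherwise."""
    hits = retrieve(question, corpus)
    if not hits:
        # The "walled garden" behavior: no verified source, no answer.
        return "No verified source found; unable to answer."
    title, text = hits[0]
    return f"{text} [Source: {title}]"
```

The key design choice the sketch illustrates is the refusal path: when nothing in the verified corpus matches, the system declines to answer rather than improvising, which is exactly the trade-off the walled-garden approach makes.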

UpToDate’s Expert AI: Clinical intelligence with boundaries

This September, Wolters Kluwer Health rolled out UpToDate Expert AI in hospitals across France and Belgium. The tool is an evolution of its 30-year-old clinical decision support platform, now powered by generative AI.

What makes Expert AI stand out is its Clinical Intelligence Framework, which keeps the AI grounded in UpToDate’s own library, written and reviewed by over 7,600 medical experts. It covers more than 13,000 clinical topics across 25 specialities.

No unverified sources, no hallucinations, just evidence-based insights clinicians can trust. Each AI answer also comes with built-in transparency:

  • The original source material
  • The key assumptions the AI made
  • A clear reasoning trail

This design lets clinicians see exactly how the AI arrived at its answer, reinforcing trust.

Medscape’s AI Search: Reliable answers, not risks

WebMD’s Medscape is taking a similarly focused approach to clinical AI, one built on trust and clarity.

In August, it launched Medscape AI Search, giving U.S. physicians a chat-based way to get fast, evidence-backed answers to complex medical questions.

Like UpToDate’s system, Medscape’s AI doesn’t roam freely online. Every response is drawn from Medscape’s own library of more than 6,000 evidence-based, physician-reviewed disease and condition articles that clinicians have trusted for years.

The result: quick, reliable, verifiable answers. Each response includes a concise summary and direct links to the original source, giving doctors both speed and confidence in what they’re reading.

[Image: Medscape AI Search interface. Source: Medscape on LinkedIn]

The “Walled Garden” debate: Safety comes with trade-offs

For maximum clinical safety, UpToDate and Medscape’s AI tools operate within a “walled garden” of their proprietary content. But confining AI to a curated environment also introduces a new set of challenges.

The upsides

  • Curbed hallucinations: Limiting the AI’s “knowledge domain” yields higher accuracy and fewer made-up facts.
  • Built-in security & privacy: These systems are engineered to meet tough standards like HIPAA and lock down patient data, unlike more open models that can be vulnerable to manipulation and data leaks.
  • Designed around clinician workflows: Both platforms already serve huge numbers of users and integrate into existing care delivery.
  • Easier regulation: A closed system makes compliance simpler. With documented sources, there’s full transparency about where information comes from and how the algorithms work.

The downsides

  • Amplifying existing bias: If the underlying medical literature contains historical biases or gaps, the AI learns and can even magnify those same inequities, baking them into its clinical suggestions.
  • Dangerous knowledge gaps: The AI’s knowledge is only as good and as broad as its source.  Populations underrepresented in the data, like children, minorities, or rare conditions, might not get accurate answers.
  • Missed new research: Because the model is constrained, it may miss novel insights or cross-discipline connections that open internet models might pull in.
  • Vendor lock-in: Hospitals risk a strategic dependency by building clinical support on a vendor’s closed system. This can limit their control over their own data and AI tools, making it difficult to adapt or customise the technology for their specific needs.

So while UpToDate and Medscape’s tools are clear steps forward, their approach trades breadth for safety, and that trade-off may not suit every clinician or health system.

What do experts say: Is the “Walled Garden” enough?

Experts are cautiously optimistic. Many agree that using RAG-based models, which blend general AI with verified content, is a smart step forward. But they warn it’s not a cure-all.

You cannot simply build a high-performing AI from a single, curated source like UpToDate or Medscape.

As Karl Swanson, a physician and data scientist, points out, these models likely still rely on large general-purpose AI under the hood. “RAG reduces hallucinations,” he explains, “but doesn’t eliminate them.”

In other words, even the most disciplined AI can get things wrong, just less often.

Even with “safe” AIs, human oversight remains essential. Clinicians must verify the AI’s answer, check its sources and assumptions, and then integrate it into their own clinical judgment.

So, is confining AI the right move for healthcare?

The short answer: Yes, but with nuance.

In healthcare, where decisions have major consequences and mistakes can cost lives, the “walled garden” approach of UpToDate and Medscape makes a lot of sense. It prioritises trust, reliability, and integration into existing clinician workflows.

The big challenge now is scaling that reliability while maintaining breadth and equity. If these tools can build large, diverse libraries, maintain global relevance, and keep clinician workflows central, they might set a new standard in clinical AI.

For now, UpToDate and Medscape’s AIs mark a turning point in clinical decision making, and we’re excited to see how they will reshape AI in healthcare.

-By Alkama Sohail and the AHT Team
