Imagine walking into a pharmacy, picking up your medication, and realizing it wasn’t prescribed by a human doctor—but by an AI. Sounds like science fiction? Well, it might not be for much longer.
The U.S. House of Representatives is reviewing a bill that could change how we think about medical prescriptions forever.
The proposed federal bill, H.R. 238, seeks to amend current regulations so that systems powered by AI and machine learning could be granted permission to prescribe FDA-approved medications autonomously.
What does this mean for medical prescriptions? Let’s evaluate!
Redefining “practitioner”: Will AI become a licensed practitioner?
The “Healthy Technology Act of 2025” is looking to redefine what it means to be a medical “practitioner” under the Federal Food, Drug, and Cosmetic Act (FFDCA). It proposes classifying artificial intelligence (AI) and machine learning (ML) systems as “practitioners licensed by law.”
If passed, AI and ML systems could legally prescribe FDA-approved medications, just like human doctors—provided they meet specific state and federal regulations.
Think about that for a moment. This isn’t just about AI helping doctors; it’s about AI becoming the doctor—potentially replacing a crucial part of their job.
Representative David Schweikert, who’s leading the charge, believes this bill could help streamline drug prescription processes. The bill acknowledges the growing role of AI in healthcare, evolving from diagnostic capabilities to direct treatment.
AI in Healthcare: The Good, The Impressive, and The Concerning
AI isn’t new to medicine and healthcare. It’s already used extensively in radiological image analysis, drug discovery, and diagnostics.
Interestingly, research suggests that AI algorithms can execute medical tasks with remarkable precision, sometimes surpassing human performance in specific evaluations. That’s one reason the global AI-in-healthcare market is projected to reach USD 187.7 billion by 2030.
Why AI-powered prescriptions could be a gamechanger
Incredible accuracy and efficiency
AI can process large quantities of medical data, identify patterns, and follow evidence-based protocols. These capabilities are what could make it capable of delivering precise prescriptions.
For instance, Chelsea and Westminster Hospital in London used AI to analyze images of suspicious moles and detect melanoma. Within seconds, the AI delivered results with 99.9% accuracy.
Reduce administrative burden on doctors
Imagine doctors freed from the daily grind of prescription management. Automating routine prescription tasks with AI would let healthcare providers dedicate more of their time to complex patient care.
Takeaway: AI shows potential as a valuable instrument for drug prescription accuracy and efficiency through its large-scale data processing abilities.
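To make the "evidence-based protocols" point concrete, here is a minimal, hypothetical sketch of the kind of rule-based guardrail a prescribing system might run before any AI-suggested prescription reaches a pharmacist. Every drug name, dose limit, and interaction pair below is invented purely for illustration; a real formulary check would be far more elaborate.

```python
# Hypothetical guardrail: validate an AI-suggested prescription against
# simple, human-curated safety rules before it is dispensed.
# All drugs, dose limits, and interactions here are illustrative only.

MAX_DAILY_DOSE_MG = {"examplol": 400, "samplicillin": 1500}   # invented drugs
DANGEROUS_PAIRS = {frozenset({"examplol", "samplicillin"})}   # invented interaction

def validate_prescription(drug: str, daily_dose_mg: float,
                          current_meds: list[str]) -> list[str]:
    """Return a list of safety violations; an empty list means the checks passed."""
    issues = []
    if drug not in MAX_DAILY_DOSE_MG:
        issues.append(f"{drug}: not in the approved formulary")
    elif daily_dose_mg > MAX_DAILY_DOSE_MG[drug]:
        issues.append(f"{drug}: {daily_dose_mg} mg/day exceeds the "
                      f"{MAX_DAILY_DOSE_MG[drug]} mg/day limit")
    for med in current_meds:
        if frozenset({drug, med}) in DANGEROUS_PAIRS:
            issues.append(f"{drug}: known interaction with {med}")
    return issues

# A prescription that trips both the dose limit and the interaction rule:
print(validate_prescription("examplol", 600, ["samplicillin"]))
```

The design point is that deterministic, auditable rules like these sit outside the AI model itself, so even a hallucinated suggestion has to clear human-written safety checks.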
But there are big risks too
Of course, it isn’t all rosy. AI-driven drug prescribing brings its own set of challenges and concerns.
Hallucinations and errors
AI systems, notably generative models such as GPT-4, can produce false or misleading information, commonly referred to as “hallucinations”, which leads to incorrect outputs.
In medical settings, incorrect information from AI systems could have serious consequences. Imagine an AI misdiagnosing a patient and prescribing the wrong drug or the wrong dose. The results could be disastrous; the stakes are incredibly high.
Lack of rationale transparency
Studies show AI provides highly accurate results and has outperformed humans in medical tasks and objective medical assessments. However, AI’s underlying reasoning is often flawed or unclear.
If the AI is unable to explain why it prescribed a particular drug, can it be trusted? This raises questions about its reliability in medical decisions.
Navigating regulatory and ethical terrain
This is where things get tricky. The FDA’s current approval pathways were designed primarily for conventional medical devices and pharmaceuticals; they would need sweeping changes to accommodate AI-driven prescribing. In short, we need a whole new regulatory landscape.
Autonomous AI-driven drug administration raises profound ethical questions. Liability, patient consent, and the potential for algorithmic bias all need careful handling. And then there is the accountability question:
If an AI system makes a mistake, who is responsible? The software developers? The hospital? Or the AI itself? Current laws aren’t equipped to handle these questions, and that’s a major issue.
Takeaway: Ensuring patient safety and maintaining trust in AI systems will require robust regulatory frameworks and ongoing oversight. The need for rigorous validation and comprehensive evaluations of AI’s rationale before its integration into clinical practice is paramount.
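As a toy illustration of what “rigorous validation” might look like in practice, the sketch below compares an AI system’s suggested prescriptions against clinician decisions on a held-out evaluation set and gates deployment on the agreement rate. The example data, drug labels, and the 95% threshold are all invented for illustration, not taken from any real evaluation.

```python
# Toy validation harness: measure how often an AI prescriber agrees with
# clinicians on a held-out set. Data and threshold are invented.

def agreement_rate(ai_suggestions: list[str], clinician_decisions: list[str]) -> float:
    """Fraction of cases where the AI's prescription matches the clinician's."""
    assert len(ai_suggestions) == len(clinician_decisions)
    matches = sum(a == c for a, c in zip(ai_suggestions, clinician_decisions))
    return matches / len(ai_suggestions)

ai = ["drug_a", "drug_b", "drug_a", "drug_c"]
clinicians = ["drug_a", "drug_b", "drug_b", "drug_c"]
rate = agreement_rate(ai, clinicians)
print(f"agreement: {rate:.0%}")   # 3 of the 4 cases match

# A deployment gate might require agreement above a preset threshold
# before the system is allowed near real patients.
DEPLOYMENT_THRESHOLD = 0.95       # invented threshold
print("deploy" if rate >= DEPLOYMENT_THRESHOLD else "needs more work")
```

Real validation would of course go much further (stratifying by condition, auditing disagreements, checking for bias across patient groups), but the principle of a measurable, pre-registered bar before clinical use is the point.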
Current applications of AI in healthcare
We’re already seeing AI adoption in various ways across the healthcare industry:
- Ambient documentation: AI is helping doctors by automating note-taking, reducing paperwork, and giving them more time for patient care.
- Drug discovery: AI-driven systems are accelerating the development of new medicines by analyzing complex biological data.
- Diagnostic support: AI is improving medical imaging analysis, making it easier to detect abnormalities with high accuracy.
Does successful integration of AI in these areas suggest AI-powered prescription is the next logical step?
The road ahead: What needs to happen
Currently, the proposed legislation is in its infancy. It has been referred to committee, where it will face serious scrutiny. If it does get the green light, it will open the door to widespread AI use in clinical decision-making.
However, before AI starts prescribing drugs like a doctor, a lot needs to be done:
Regulatory oversight
Implementing AI systems will require stringent standards—strict validation, rigorous testing and clear guidelines with no room for shortcuts. It is not just about “Does it work?” but “Does it work safely and reliably?”
Collaboration with clinicians: AI + Doctors
The aim of AI shouldn’t be to replace doctors but to enhance their capabilities. Only continued collaboration between AI developers and healthcare professionals can make AI in healthcare a success.
Patient education
If AI starts giving out prescriptions, it’ll be essential to educate patients about how these systems work, their benefits and drawbacks, and the security measures to protect against harm and privacy violations. Trust and transparency will be essential.
Takeaway: Successful AI-driven prescribing will depend on close cooperation among healthcare professionals, developers, and regulators, supported by comprehensive regulatory frameworks and patient-focused care.
The bottom line
AI is already transforming healthcare, and prescribing medications may be next. The potential benefits are huge: greater efficiency, fewer errors, and improved patient outcomes. But with great power comes great responsibility.
To get this right, we need strong safety measures, ethical guidelines, and regulatory frameworks that balance innovation with patient protection, finding the sweet spot where we push the boundaries of medical technology without ever compromising patient safety or the integrity of medical care.
The future of AI in medicine isn’t about replacing doctors. It’s about creating a healthcare system where human expertise and AI intelligence work hand in hand to deliver the best possible healthcare.
So, are we ready for AI-powered prescriptions? The debate is just getting started.
-By Alkama Sohail and the AHT Team