From Chatbots to Diagnosis: The Power and Pitfalls of AI in Healthcare

 

Katherine O’Malley

Associate Editor

Loyola University Chicago School of Law, JD 2025

 

The capabilities of generative artificial intelligence (AI) could transform our healthcare system as we know it. For better or for worse, these technological advancements are arriving in healthcare at a rapid pace. Given the accelerated rollout, experts have yet to identify all the risks such high-functioning systems pose to the healthcare system. Even though the Food and Drug Administration (FDA) regulates software as a medical device (SaMD), there is an overall lack of urgency, agency oversight, and sufficient regulation to tame AI technology in the healthcare system.

What is generative AI technology?

Generative AI technology generally refers to machine-learning models that can power complex chatbot features, replicating human-like interactions in the form of a search engine. AI is not limited to chatbots like ChatGPT; models such as Stable Diffusion, Synthesia, and MusicLM can produce images, video, or audio. This technology employs advanced machine-learning techniques that analyze patterns within training data to create new content based on human-submitted prompts. Although significant advancements have been made recently in generative AI, the underlying field of research dates back to the 1950s.

With every benefit comes a risk

Generative AI can make everyday administrative tasks, such as drafting emails and managing tasks, more efficient. AI can also analyze data to improve decision-making across industries, especially in the healthcare sector.

These efficiencies could expand access to diagnostic care by automating mundane work now done by licensed professionals, who could then spend more valuable time face-to-face with patients. AI algorithms have outperformed radiologists in spotting malignant tumors. AI is also used to analyze population health and predict which populations are at risk of certain diseases or of hospital readmission. AI-assisted robotic surgeries have even been shown to decrease procedural complications compared to surgeons operating alone.

Yet even where AI algorithms outperform radiologists, there is a risk of bias stemming from a lack of robust training data. AI could also contribute to data breaches and HIPAA violations. For example, if a doctor's office uses an AI chatbot for intake and pre-diagnosis, that could violate HIPAA because patient data has left the protected healthcare system.

FDA’s ongoing attempt to regulate AI in the healthcare system

In January 2021, the FDA announced its AI/Machine Learning-Based SaMD Action Plan to protect against the risks AI poses to the healthcare sector. The FDA plans to coordinate this action plan with the agency's medical device cybersecurity program. Furthermore, the FDA emphasizes a need for transparency with patients to tackle issues such as usability, equity, trust, and accountability, and the plan encourages manufacturers of AI-based devices to be transparent. The action plan, however, lacks measurable goals, step-by-step actions, or designated responsibilities. Overall, it is filled with vague and conclusory statements. For example, it states the following without explaining how or by whom: “[t]his will be accomplished in coordination with other ongoing FDA programs focused on the use of real-world data.”

The FDA has released several draft guidances since the January 2021 action plan. The first assesses the credibility of computational modeling and simulation in medical device submissions. Another proposes a risk-based approach to computer software assurance directed at manufacturers of AI/ML-enabled products. All of these remain drafts released for comment purposes only.

Next, in September 2022, the FDA released its final Clinical Decision Support (CDS) Software guidance. The guidance provides a framework for determining when CDS software functions fall inside or outside the definition of a device under section 520(o)(1)(E) of the FD&C Act. In other words, the framework explains when a software function may or may not be exempt from FDA regulation.

In April of this year, the FDA published another draft guidance on the use of AI in medical devices. The draft states that the FDA wants to provide the least burdensome approach to support innovation while simultaneously promoting reasonable assurance of device safety. This more lenient draft appears to respond to the pushback AI developers directed at the September 2022 final guidance.

Although the FDA must balance patient protection against scientific innovation, human protection should take precedence. Several tech executives and top AI researchers have even called for a six-month pause in the development of AI tools. Their open letter advocates for new, capable regulatory authorities dedicated solely to AI, oversight by unbiased third parties, and tracking of highly capable AI systems. Even the CEO behind one of the most infamous AI systems stated in March that “[a]t some point, it may be important to get an independent review before starting to train future systems…”

We have reached the point where government agencies like the FDA need to limit this technology, which has already circumvented essential safety measures such as CAPTCHA. The CEO of OpenAI, ChatGPT’s developer, is now calling on Congress to create a new government agency focused solely on regulating AI technology. The development of generative AI systems should halt so the government can create a new agency to regulate AI in healthcare.