Emerging Risks Associated with AI and Machine Learning

Alayna Frauhiger

Associate Editor

Loyola University Chicago School of Law, JD 2021

Today the healthcare industry is being transformed by the latest technology as it confronts the challenges of the 21st century. Technology helps healthcare organizations meet growing demands and deliver better patient care by operating more efficiently. As the world population continues to grow and age, artificial intelligence (AI) and machine learning will offer new and better ways to identify disease and improve patient care.

Current use of AI and machine learning

AI can help doctors predict the probability of a condition, which can streamline diagnosis and treatment planning. AI algorithms, powered by recent advances in evidence-based research, learn from historical and real-time data and can make predictions about the future probability of a condition. These predictions offer a unique opportunity to see into the future and identify trends in patient care. Furthermore, AI and machine learning can also help advance genomic medicine by allowing for personalized treatment plans and clinical care in areas such as diagnostics, medical imaging, surgery, and more.
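To make the idea of condition-risk prediction concrete, the sketch below trains a simple logistic regression model on a handful of hypothetical patient records and estimates the probability of a condition for a new patient. The feature names and data are invented for illustration only; real clinical models are trained on large, validated datasets and evaluated far more rigorously.

```python
# Minimal sketch of a condition-risk prediction model (illustrative only).
# The features and records below are hypothetical, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, BMI, systolic blood pressure]
X_history = np.array([
    [45, 24.0, 118],
    [62, 31.5, 145],
    [38, 22.1, 110],
    [70, 29.8, 150],
])
y_history = np.array([0, 1, 0, 1])  # 1 = condition later diagnosed

model = LogisticRegression()
model.fit(X_history, y_history)

# Predicted probability that a new patient develops the condition
new_patient = np.array([[55, 27.3, 135]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of condition: {risk:.2f}")
```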

Risks surrounding AI

In addition to bringing several benefits, AI, like any disruptive technology, will also introduce new risks to society. One emerging risk of predictive analytics is the centralization of data, which poses a significant threat to data security and integrity. Given the increasing amount of data stored in the cloud or otherwise accessible via the internet, there is a persistent threat of hacking by individuals with malicious intent.

The U.S. Department of Health and Human Services’ (HHS) Office of Inspector General (OIG) is the governmental agency responsible for protecting patient privacy, ensuring quality care, and combating fraud by ensuring that healthcare organizations comply with federal healthcare laws. Throughout these technological advances, HHS has been in charge of enforcing privacy laws to ensure that individuals’ health information is properly protected while allowing the flow of health information needed to provide and promote high-quality health care and to protect the public’s health and well-being. However, HHS’s role may need to move beyond enforcing these older privacy laws, because HIPAA compliance for medical software applications can be a complicated issue to understand.

For instance, some eHealth and mHealth apps are subject to HIPAA and to the medical software regulations issued by the FDA, but depending on the nature of the app’s function, some are not. An eHealth app that collects personal data solely for the exclusive use of the person using it is not subject to HIPAA compliance requirements for medical software applications. If, however, the personal data collected will be shared with a medical professional or another HIPAA Covered Entity, such as a health insurance company, then the data is considered Protected Health Information and the app must be HIPAA compliant.
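The applicability rule described above can be summarized as a simple decision function. The sketch below is only a simplified illustration of that rule; the function and parameter names are hypothetical, and a real-world determination depends on legal analysis of the app’s actual data flows, not a single flag.

```python
# Simplified illustration of the HIPAA applicability rule described above.
# Names are hypothetical; real determinations require legal analysis.

def requires_hipaa_compliance(collects_health_data: bool,
                              shared_with_covered_entity: bool) -> bool:
    """Health data shared with a medical professional or other HIPAA
    Covered Entity (e.g., a health insurer) is treated as Protected
    Health Information, triggering HIPAA compliance obligations."""
    return collects_health_data and shared_with_covered_entity

# App keeping data solely for the user's own use: not subject to HIPAA.
print(requires_hipaa_compliance(True, shared_with_covered_entity=False))  # False

# App transmitting readings to the user's physician: data is PHI.
print(requires_hipaa_compliance(True, shared_with_covered_entity=True))   # True
```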

FDA clearance

Moreover, AI systems may not be properly tested within clinical trial phases, which is an initial step toward receiving clearance from the Food and Drug Administration (FDA). Clinical trials are used to demonstrate that a drug, device, or other manufactured product is safe and effective under a structured set of criteria. AI systems, however, continue to adapt and change after these clinical trials conclude, which can lead to unpredictable and unreliable results. Although the FDA is responsible for overseeing the safety and efficacy of medical devices, it has not yet provided a framework to address potential safety and compliance issues related to AI systems.

Implementing AI and machine learning

AI is proving to be a double-edged sword in the healthcare industry. Although it offers substantial benefits, AI also carries far-reaching implications for the economy, healthcare, security, and the environment. And while AI and machine learning are not unique to healthcare, because patient lives are at stake, the quality and safety measures that must be considered are significantly more important. As the healthcare industry begins to use new technologies, government health agencies, doctors, and primary health providers must be aware of the risks and agree on standards. Insight into daily practice will be critical to ensure these tools fit within regular routines and do not disrupt the healthcare system.