Regulating the Un-Explainable: The Difficulties in Regulating Artificial Intelligence

Diana Akmakjian

Associate Editor

Loyola University Chicago School of Law, 2020

From Siri to Alexa to deep learning algorithms, artificial intelligence (AI) has become commonplace in most people's lives. In a business context, AI has become an indispensable tool for accomplishing organizational goals. Due to the complexity of the algorithms required to make quick and complex decisions, a “black box problem” has emerged for those who rely on these increasingly elaborate forms of AI. The “black box” refers to the opacity that shrouds the AI decision-making process. While no current regulation explicitly bans or restricts the use of AI in decision-making, many tech experts argue that the black box of AI needs to be opened to deconstruct not only the technically intricate decision-making capabilities of AI, but also the compliance-related problems this type of technology may cause.

AI and the Black Box Problem

Existing methods for analyzing machine learning are limited, a shortcoming that led to the coining of the term “black box” to reflect humans’ fundamental lack of understanding of how computers learn. Author Jason Bloomberg explains the heart of the black box problem: “if people don’t know how AI comes up with its decisions, they won’t trust it.” These algorithms are “capable of learning from massive amounts of data, and once that data is internalized, they are capable of making decisions experientially or intuitively like humans.” The problem stems from the inexplicability of the decision-making process, despite the fact that programmers created the algorithms the AI uses to process information. The “explainability” of AI is simply human programmers’ capacity to explain how an algorithm processes information to ultimately reach a decision. Computers have evolved beyond strictly executing written instructions and can now consume information to perform problem-solving functions. The lack of understanding and control over this process has become deeply concerning to many, especially as AI implementation continues to expand.
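
A minimal sketch can make the contrast concrete. The snippet below, written in Python with scikit-learn (the library, toy data, and model choices are illustrative, not drawn from this article), trains two models on the same task: a shallow decision tree whose rules can be printed and audited, and a neural network whose “reasoning” lives only in its learned weights.

```python
# A minimal sketch of the black box problem using scikit-learn.
# (Library, data, and models are illustrative assumptions, not from the article.)
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Toy data standing in for any automated decision-making task.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# An interpretable model: its decision rules can be printed and audited.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2", "f3"]))

# An opaque model: it may predict well, but its "reasoning" is encoded
# in thousands of learned weights with no human-readable rationale.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)
print(net.predict(X[:5]))               # decisions come out...
print(sum(w.size for w in net.coefs_))  # ...but only raw weights "explain" them
```

Both models make decisions, but only the first can tell a human why; the second is, in miniature, the black box the experts describe.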

As humans more frequently entrust decision-making to automated AI, there are two potential avenues for disaster: when AI is programmed to do something devastating to human life, or when AI is programmed to do something beneficial but develops a destructive methodology for accomplishing its objectives. An example of the first is an autonomous weapon controlled by an AI system that could be manipulated to cause mass casualties. Conversely, AI created to benefit society can fail when its goals and means fall out of alignment and it adopts destructive means to accomplish its task: imagine an autonomous ambulance that, in racing to the hospital, causes an accident that kills more people than it saves. In either case, the issue reduces to the competence of the AI and its programmers.

Regulatory and Compliance Problems with AI

The un-explainability of sophisticated AI is the central obstacle to broader regulation and implementation. While the potential benefits of broader implementation are vast, AI comes with a fair number of tradeoffs. These tradeoffs are exemplified by a research project built on patient data.

In 2015, a research group at Mount Sinai Hospital applied “deep learning,” a form of AI that detects patterns across large data sets and variables, to the hospital’s vast database of patient records. The AI, which researchers named Deep Patient, was trained on data from about 700,000 individuals and proved incredibly capable at detecting a wide range of ailments. Joel Dudley, who leads the Mount Sinai team, commented, “we can build these models, but we don’t know how they work.” The researchers deemed Deep Patient “a bit puzzling” because it appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well, even though schizophrenia is notoriously difficult for trained physicians to predict. Although researchers can build these models, the outcomes they produce remain baffling to the very teams that built them.
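
To see why such a model resists explanation, consider a drastically simplified, hypothetical stand-in for a Deep-Patient-style pipeline. Mount Sinai’s actual data, architecture, and code are not described in this article, so everything below, including the synthetic “patient records,” is an assumption for illustration only.

```python
# A hypothetical, drastically simplified stand-in for a Deep-Patient-style
# pipeline. The real model and data are not public in this article;
# all features and labels below are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_patients, n_features = 10_000, 50            # stand-in for ~700,000 real records
X = rng.normal(size=(n_patients, n_features))  # synthetic "record" features
y = (X[:, :5].sum(axis=1) + rng.normal(size=n_patients) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300,
                      random_state=0).fit(X_tr, y_tr)

# The model may score well on held-out patients...
print("held-out accuracy:", model.score(X_te, y_te))
# ...yet nothing in its learned parameters tells a physician *why*
# a given patient was flagged -- the puzzle the researchers describe.
```

Even in this toy version, a team that wrote every line of the pipeline still cannot point to a human-readable reason for any individual prediction.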

The Deep Patient project exemplifies not only the black box problem, but also the problem companies face in ensuring compliance with any regulation imposed on AI. How can companies ensure their AI is compliant when they are unsure of exactly how it performs its functions? When it comes to compliance, the black box problem is the hurdle standing in the way of AI’s march toward global integration. If companies cannot demonstrate that an AI has been correctly programmed to perform its functions, people may be less likely to trust AI-powered systems. Further, there is no guarantee that an AI will continue to perform its functions in compliance with any regulation when its decision-making process remains largely opaque.
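
One partial answer to that compliance question is post-hoc explanation. The sketch below uses permutation importance, one common technique for probing an opaque model; neither the technique nor the library is mentioned in this article, so it is offered only as an illustration of what “opening the black box” might look like in practice.

```python
# One common post-hoc explainability technique: permutation importance.
# (Technique and library are illustrative assumptions; the article names neither.)
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops mark the inputs the opaque model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this show which inputs drive a model’s decisions, which may help a company document its AI for regulators, though they still fall well short of a full explanation of how the model reasons.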

Explainability has become a critical requirement for AI in many contexts beyond healthcare. With AI widely implemented in everyday tools like Alexa and Siri, available on almost every mobile phone sold today, explainable machine learning can no longer be treated as a niche concern. Beyond explainability, protecting the data that AI processes is also crucial. Elon Musk, the chief executive of Tesla, tweeted, “We need to be super careful with AI… I’m increasingly inclined to think there should be some regulatory oversight [of AI], maybe at the national and international level.”

Authors Amitai Etzioni and Oren Etzioni suggest a Cyber Age Commission as a means of regulatory oversight. The Cyber Age Commission would be akin to “the highly influential 9/11 Commission and include respected former officials from both political parties, select business chief executive officers and labor leaders, and AI experts. They would examine alternative responses to the looming job crisis and its corollaries.” While any form of oversight imposed on AI would begin to tackle the growing concerns with its use, this kind of regulation seems some way off, given that experts still struggle to explain the mysterious functions of AI.