Artificial Intelligence: The Next Regulatory Frontier

Marisa Polowitz

Senior Editor

Loyola University Chicago School of Law, JD 2023

Until recently, Artificial Intelligence (AI) was the domain of science fiction connoisseurs and Silicon Valley tech savants. Now, AI is ubiquitous in our daily lives, with a seemingly endless number of possible applications. As with any new and emerging technology, there are many novel questions and concerns to address. Whether related to copyright ownership, ethics, cybersecurity, or discrimination and bias, concerns surrounding AI usage are mounting. Regulation of AI systems has been increasing rapidly worldwide, while the U.S. regulatory landscape has remained relatively sparse. It won't stay that way for long.

What is AI and why does it matter?

Simply put, AI is the use of algorithms in machines programmed to simulate human intelligence and decision making. AI algorithms rely heavily on the collection of massive amounts of data, raising, among many other issues, concerns about data privacy. AI use is growing in almost every field imaginable, and many unfathomable ones, too. Autonomous vehicle development relies on AI, new healthcare applications appear every day, AI-driven robo-advisors are widely used in the financial sector, and AI analysis has even helped governments uncover property tax evasion. Banks use AI to determine loan eligibility, and AI-based law enforcement applications are numerous.

With such a broad range of potential applications, we encounter AI-driven tools constantly, and they shape our experiences. While U.S. federal data privacy regulation has been slow to catch up to the rapidly increasing need, AI is still at a relatively nascent stage of development. Its impact will only continue to grow, and we still have time to address potential problems before they cause irreparable harm. We are at a moment when we can proactively work to ensure that the legal landscape reflects the technological one. This presents an opportunity for regulatory bodies to get ahead rather than be left scrambling to catch up.

Existing foreign regulation

As of earlier this year, over 60 countries had enacted policies pertaining to AI – all since 2017. The EU's General Data Protection Regulation (GDPR) already includes provisions relevant to AI, such as limits on solely automated decision-making. More recently, the EU put forth a regulation aimed squarely at AI: the proposed AI Act. The EU also passed the Digital Services Act (DSA) in July, which will impact many large online platforms. The DSA requires, among other things, transparency about how user-facing advertisements are targeted, limitations on recommender systems, risk auditing, and protections for minors, including limits on advertising that targets children.

Canada recently proposed the Digital Charter Implementation Act, which is aimed in part at regulating AI systems. Brazil passed an AI bill in 2021, and China enacted its own AI rules in 2022.

With China and the EU, two of the world's largest economies, making large-scale progress on AI regulation, globally active organizations are sure to be heavily affected.

Existing regulation in the U.S.

At least 17 U.S. states introduced AI-related legislation in 2022. Recently passed consumer privacy legislation in California, Colorado, Virginia, and Connecticut includes provisions addressing automated decision-making and the processing of personal information for profiling, along with risk assessment requirements. Laws are being entertained down to the municipal level in an effort to rein in the use of AI-driven technologies. New York City recently released proposed rules pertaining to AI-driven employment tools. In 2021, Detroit's city council passed an ordinance requiring accountability and transparency for surveillance systems used by the city.

On the federal level, the approach to AI regulation has been segmented. The Biden Administration recently released its Blueprint for an AI Bill of Rights, a set of nonbinding recommendations rather than policy. The U.S. National Institute of Standards and Technology (NIST) has taken steps toward developing federal standards for managing the risks associated with AI development. The Equal Employment Opportunity Commission (EEOC) issued guidance earlier this year on the use of AI-driven employment tools. The FTC announced that it was beginning its rulemaking process to address AI, algorithms, discrimination, and organizational AI governance and compliance programs.

The FDA released new guidance on AI use in medicine, which, interestingly, was met with mixed reactions. Multiple financial regulatory agencies are reviewing how financial institutions use AI.

From all of these efforts, one thing remains clear: the U.S. is taking a fragmented approach to AI regulation at every level.

Cohesive global policy

As major world players pass progressive AI regulation, the U.S. is sure to be influenced by regulations that pre-date its own. Just as the GDPR shaped many state consumer privacy laws, American agencies are looking outside the U.S. for models on which to build meaningful best practices for AI systems. The U.S. and the EU appear to be moving in the same direction on AI regulation, and hopefully that convergence continues.

When it comes to regulating technology, the global landscape is already rife with complexity from country to country, and, in the U.S., from state to state. If AI regulation continues on its current course, it risks falling prey to the same patchwork problem already seen in data privacy.

Data does not stay within state or national boundaries, and neither does technology. A tool developed in one country may be used across the world and incorporate data collected from a broad range of locales. The growing use of AI for military applications poses a real risk to national security: AI tools are increasingly leveraged to gather battlefield intelligence, wage information wars, and both launch and defend against cyberattacks on critical infrastructure. Those attacks increasingly affect the physical world, and AI can be turned to malicious ends to execute highly sophisticated ones. The need for global agreement on how to approach emerging technology and artificial intelligence is urgent.

Global standardization for AI systems will not solve every problem posed by this rapidly developing technology. It could, however, promote better functionality, more consistent standards, and increased transparency, and it could help create more equitable and just systems.