The Regulatory Framework of Our Data Privacy Legislation Is Changing Amidst the Rise of Artificial Intelligence

Peter Hanna

Associate Editor

Loyola University Chicago School of Law, JD 2026

With the rapid evolution of artificial intelligence (AI) come unprecedented opportunities and legitimate challenges, especially in the realm of data privacy. The rising capability of AI systems to process, analyze, and use massive amounts of personal data has generated heightened regulatory scrutiny across the globe. Governments and regulatory bodies are wrestling with how to balance the innovation and economic growth propelled by AI against the need to protect individuals’ privacy, ensure transparency, and safeguard data from misuse.

The European Union is considered the leader in this type of legislation with its General Data Protection Regulation (GDPR), which took effect in 2018. The United States, however, has not followed suit with any nationwide legislation in this area. Individual states, such as California and Virginia, have taken matters into their own hands and enacted their own “GDPR equivalent” legislation. In states without updated legislation, consumers are at a much higher risk of having their data breached and their personal information misused. There is a solid argument that enacting a federal regulatory framework would get the states on the same page when it comes to data privacy, but there are also compelling arguments for state autonomy and for privacy standards that differ from region to region across the United States.

Why does AI heighten these privacy concerns?

As AI becomes more advanced, so does its capacity to undermine the country’s ability to protect consumers and their personal data. For example, a Stanford University study found that AI has now matched or even exceeded human performance in areas such as reading comprehension, image classification, and complex mathematics. Beyond the possibility of data breaches, studies also show that the technology raises issues of bias in generated educational content and other areas. Further, one of the most concerning aspects of AI is its lack of transparency: it can be difficult to see exactly how an algorithm collects, uses, or alters personal data, or how it makes decisions based on that data. There are also concerns about data repurposing, spillover, and persistence, meaning data used by AI beyond its intended purpose and without the subject’s consent, data collected from individuals who were never supposed to be included in the sample, and data retained longer than the subject expects.

The California Consumer Privacy Act as a model

Passed in 2018, the California Consumer Privacy Act (CCPA) gives consumers more control over their personal data, including the right to know what information is being collected, the right to delete it, and the right to opt out of its sale. The California Privacy Rights Act (CPRA), which took full effect in 2023, strengthens these provisions by creating a dedicated privacy enforcement agency and imposing stricter rules on businesses that handle large volumes of sensitive data. Opponents of this type of legislation argue that it creates unwanted compliance burdens, especially for companies that operate nationwide and must follow one set of rules in California and a completely different approach in other states. They argue that complying with these varied regulations is extremely costly, and that those costs will eventually be passed on to consumers. The argument is not without merit: several other states, including Virginia, Colorado, and Delaware, have enacted similar legislation, but none is as strict as California’s, and many states still have no such legislation at all. This points toward a federal framework as the answer, one that could both simplify compliance and ensure consumers receive the necessary level of protection.

The CCPA and CPRA would be solid models for federal lawmakers and regulators. Using California’s legislation as a starting point would be a good first step toward getting the entire country on the same page. The White House did release a Blueprint for an AI Bill of Rights, a nonbinding guide for users of AI and for large companies that are beginning to rely heavily on the technology. This is a great start, but it is important that the federal government go beyond it and enact nationwide legislation that further protects the people who make the economy work: consumers.