America’s Fractured Approach to AI Regulation

Ryan Mack

Associate Editor

Loyola University Chicago School of Law, JD 2027

Federal efforts to promote artificial intelligence (“AI”) innovation by avoiding comprehensive regulation have prompted state legislatures to fill the regulatory void, creating a fractured regulatory landscape. This fragmentation threatens the very innovation the federal approach was meant to foster amid a global race toward general AI. Today’s AI systems are examples of Artificial Narrow Intelligence: trained to perform specific tasks but unable to operate outside their defined parameters. In contrast, Artificial General Intelligence, or Strong AI, is a theoretical form of AI capable of applying prior knowledge and skills to new contexts, enabling it to learn and perform any intellectual task a human can without additional human training of the underlying models. This pursuit has driven unprecedented investment: technology corporations have poured billions of dollars into AI capital expenditures, and that figure continues to rise. Meanwhile, compliance teams are left scrambling to manage an increasingly complex regulatory environment that is evolving faster than legal departments and regulators can adapt.

Regulation meets reality

The regulatory landscape for AI in 2025 is a fragmented, rapidly evolving patchwork. The European Union (EU) leads with its new AI Act, the world’s first comprehensive AI law, which bans AI practices posing “unacceptable risks.” Meanwhile, the United States continues to lack a clear approach to AI regulation.

Adding to this complexity, President Trump’s January 2025 Removing Barriers Executive Order (“EO”) rescinded President Biden’s AI EO, with the Trump administration promising an AI policy focused on enhancing national AI development and security. At the end of President Trump’s first term, the Office of Management and Budget’s (OMB) memorandum, “Guidance for Regulation of Artificial Intelligence Applications,” adopted a pro-innovation approach, warning that “[f]ederal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” As of now, however, no comprehensive federal legislation or regulation specifically governs AI in the United States, leaving legal departments to navigate an uncertain federal landscape.

The widening federal regulatory vacuum

The Trump administration’s lighter regulatory approach to AI, combined with a lack of consensus on comprehensive binding federal legislation, renders federal regulation unlikely in the foreseeable future. The administration’s July 2025 “America’s AI Action Plan” reinforced a pro-innovation stance, stating “AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level.” This approach creates a philosophical divide with the EU’s risk-based approach and with emerging state regulations across the nation. The Action Plan also proposed using federal funding leverage, rather than comprehensive regulation, to influence state AI policies. This leaves the fundamental compliance challenge facing corporations unresolved: they must navigate a variety of binding state regulations in an emerging technology field where federal policy guidance emphasizes innovation and deregulation but offers no uniform national framework to replace the state-by-state complexity. The result is a compliance environment where federal policy guidance fundamentally conflicts with state initiatives, heightening the jurisdictional complexity that federal legislation could readily address.

The state response to federal inaction

Following the maxim that “nature abhors a vacuum,” the absence of extensive regulatory action in Washington has prompted states to accelerate their own AI legislation. The pace of state legislative activity is growing rapidly: state legislatures collectively introduced their one-thousandth AI-related bill in April 2025, suggesting 2025 may surpass the prior year’s unprecedented level of legislative activity. Regulating an inherently international technology state by state raises fundamental questions about the appropriate governance of AI systems amid the global race toward general AI. The underlying technology operates without regard to state boundaries, creating potential conflicts between competing state requirements that only federal legislation could sufficiently resolve.

In the absence of direct federal AI regulation, states are scrambling to fill the void. More than 40 states have introduced AI-related bills in legislative sessions, proposing or enacting legislation. Notably, Colorado enacted the “Colorado Artificial Intelligence Act” (CAIA), its first comprehensive AI legislation, in 2024, and it is set to take effect in early 2026. This makes Colorado only the second state to enact a major AI consumer protection law. The CAIA adopts a risk-based approach similar to the EU’s AI Act, placing obligations and duties on developers and deployers of “high-risk AI systems.” However, the CAIA faces uncertainty, as evidenced by Governor Polis’s request to delay its effective date until January 2027 due to stakeholder concerns about implementation, the failure of amendment bill SB 25-318, and ongoing calls from technology groups for a special legislative session to delay implementation.

Moreover, states are getting creative with their AI regulation, with measures ranging from generative AI transparency to AI energy rules. For example, California enacted AB 222, which addresses AI data centers and energy, while other states focus on AI-specific applications: Illinois’s HB 3021 and Idaho’s HB 127, for instance, require chatbot providers to disclose non-human interactions or impose liability for deceptive communications.

The compliance challenge

This fragmented approach creates significant compliance challenges. Companies must confirm their AI complies with federal and state regulatory requirements, which generally involves conducting audits for bias, data privacy, and transparency, especially in sectors such as healthcare and finance. Additionally, with AI developments outpacing new regulation, states have applied existing privacy, consumer protection, and anti-discrimination laws to AI-related conduct. As a result, companies may face liability under existing state laws even absent AI-specific statutes.

Rapidly evolving AI regulations require corporations to make substantial investments in legal counsel and compliance technology to navigate an already complex regulatory landscape. Corporations must track state-by-state developments while preparing for potential future federal regulation, building legal compliance strategies that work across multiple regulatory frameworks.

What started as a federal decision to essentially “stay out of” AI regulation is becoming a lesson in unintended consequences. The lack of comprehensive federal regulation has produced a state-by-state free-for-all that is not just a compliance headache but is actively undermining the very innovation it was designed to protect. Today, corporate legal departments must grapple with the challenges of conflicting federal and state approaches to AI regulation.