Adelyn Schmidt
Associate Editor
Loyola University Chicago School of Law, JD 2027
In the absence of comprehensive federal artificial intelligence (AI) legislation, states have moved aggressively to regulate AI. Beginning January 1, 2026, several major state AI laws took effect, imposing new safety and accountability obligations on AI developers. Just weeks earlier, President Trump’s Administration issued an executive order signaling a shift toward federal deregulation and preemption. The result is a looming conflict between state enforcement and federal resistance that is likely to define AI regulation in the United States in 2026.
The rise of state AI regulation
With Congress slow to act, states have stepped in to address perceived risks associated with AI, including discrimination, consumer deception, and catastrophic misuse. Notably, California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) targets “frontier developers”—entities that train or deploy the most powerful AI models, often called frontier or foundation models, using enormous computing resources and capable of general-purpose applications across industries. These models are distinct from ordinary AI systems because their scale and autonomy raise heightened concerns about misuse, loss of control, and large-scale harm.
The TFAIA also requires developers to report “critical safety incidents,” such as unauthorized access to model weights or harms caused by system failures. Model weights are the internal parameters of an AI model that determine how it processes inputs and generates outputs; access to these weights can allow bad actors to replicate, manipulate, or weaponize a model. Unauthorized disclosure of model weights therefore raises risks of large-scale misuse, including the creation of unsafe or unregulated derivative models. System failures refer to breakdowns in technical or human controls that cause an AI system to behave in unintended or unsafe ways, such as bypassing safeguards, producing harmful outputs, or operating without effective human oversight.
These obligations are layered on top of several other California AI laws taking effect around the same time. Those laws include requirements related to training-data transparency, watermarking and detection tools, healthcare disclosures, chatbot safety, and algorithmic price-fixing. Taken together, California’s framework reflects a comprehensive attempt to regulate AI development and deployment. It also imposes significant compliance burdens on large developers operating in the state.
Texas’s Responsible Artificial Intelligence Governance Act (RAIGA) takes a different approach. Rather than focusing on model size or capabilities, the law broadly applies to developers and deployers operating in or affecting Texas. RAIGA prohibits the intentional creation or use of AI for restricted purposes, including unlawful discrimination, self-harm facilitation, child exploitation, and constitutional violations. Liability under the statute turns on the developer’s or deployer’s intent, not on how a system is ultimately misused. The law provides affirmative defenses for entities that follow recognized risk-management frameworks, such as NIST’s AI Risk Management Framework, and establishes a regulatory sandbox allowing limited experimentation without enforcement risk.
Other states have followed suit. Colorado’s AI Act imposes a “reasonable care” standard to prevent algorithmic discrimination in high-risk AI systems. Illinois amended its Human Rights Act to prohibit discriminatory employer use of AI technologies. New York and California have also enacted laws addressing AI-driven price-setting in housing and other markets. Together, these measures create a rapidly expanding, and increasingly fragmented, AI regulatory landscape.
The Trump administration’s executive order: a federal countermove
On December 11, 2025, the Trump Administration issued an executive order titled Ensuring a National Policy Framework for Artificial Intelligence. The Order declares that maintaining global AI dominance requires a “minimally burdensome” national regulatory framework. It characterizes state AI regulation as a threat to innovation, interstate commerce, and constitutional rights. Rather than offering a comprehensive federal alternative, the Order signals an intention to resist state-level regulation. In doing so, it sets the stage for direct federal-state conflict.
The Executive Order directs multiple federal agencies to act. It establishes the AI Litigation Task Force within the Department of Justice (DOJ) to challenge state AI laws on preemption, the Commerce Clause, and other constitutional grounds. The Secretary of Commerce is instructed to identify onerous state AI laws, particularly those requiring altered outputs or compelled disclosures. The Order also calls for a Federal Trade Commission (FTC) policy statement addressing when state AI mandates may be preempted as deceptive practices. Administration officials, including White House Special Advisor for AI and Crypto David Sacks, have suggested that laws in California, Colorado, Illinois, and New York are primary targets.
What the executive order does, and does not do
Importantly, the Executive Order does not invalidate existing state AI laws. Those statutes remain enforceable unless amended, repealed, or struck down through litigation. For now, companies must continue complying with applicable state requirements, even as the federal government signals its intent to challenge them. The Order also preserves certain categories of state regulation, including child safety, AI infrastructure, and state procurement. These carve-outs limit the immediate scope of federal interference.
Nevertheless, many AI-specific consumer protection and transparency laws fall squarely within the areas flagged for federal scrutiny. The threat of preemption alone creates uncertainty for regulated entities. Companies may hesitate to invest in compliance systems that could be rendered irrelevant through litigation or federal action. As a result, the Order introduces instability without offering clear regulatory replacement.
Sector-specific implications
The uncertainty created by the Executive Order will not be evenly distributed across industries. Healthcare companies will remain subject to state medical-practice and privacy laws, though they may benefit from the eventual adoption of a uniform federal disclosure standard. Fintech (financial technology) firms, by contrast, are already heavily regulated at the federal level and may experience fewer direct impacts. Even so, state consumer-protection and privacy laws will continue to apply to them.
Housing and pricing algorithms are likely to remain a major flashpoint. State bans on AI-driven price-setting have already faced First Amendment and antitrust challenges. The Executive Order may accelerate those disputes by encouraging federal involvement. Until courts provide clarity, companies operating in these sectors face heightened litigation risk.
Federal uniformity or regulatory whiplash?
The Executive Order also calls for congressional action. In December 2025, Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act, which would codify the Administration’s deregulatory approach and establish a single national AI framework. Additional proposals are expected in 2026. While the goal of a uniform federal AI framework is sensible, the Executive Order is premature and destabilizing. State AI laws currently provide the only enforceable guardrails addressing discrimination, deception, and catastrophic misuse.
By threatening preemption before Congress enacts a meaningful replacement framework, the Administration risks creating regulatory gaps rather than reducing compliance burdens. For companies, this uncertainty shifts costs toward litigation risk, contingency planning, and enforcement unpredictability. Until a comprehensive federal statute is enacted, state AI laws remain essential. Undermining them without a substitute is likely to hinder, not help, responsible AI innovation in the near term.