Category: Artificial Intelligence

Come On Down: Dynamic Pricing Is the New Price Tag

Retail pricing is undergoing a significant technological shift. Instead of relying on fixed price tags, many businesses now use dynamic pricing systems that adjust prices automatically based on real-time data. These systems analyze factors such as demand, competitor pricing, inventory levels, and consumer behavior to determine what price to display at a given moment. Dynamic pricing is already prevalent in many industries, such as live entertainment, airlines, hotels, and ride-sharing platforms, all of which routinely adjust prices in response to changing demand. Increasingly, retailers and e-commerce platforms are adopting similar strategies in everyday consumer markets. As this practice expands, regulators are evaluating how existing consumer protection, antitrust, and data privacy laws apply to algorithm-driven pricing models.

The Government’s Block of Anthropic and the Future of AI Procurement

Governments around the world have increasingly turned to artificial intelligence (AI) as a tool for defense and national security. In the United States, that shift has come with its share of conflict. In early 2026, a dispute between the federal government and AI company Anthropic came to a head after the Trump administration moved to bar the Pentagon from using Anthropic’s Claude software. At its core, the standoff exposed a tension that is only going to grow more common: tech companies that want to set limits on how their products are used, versus a government that sees those limits as a threat to its own capabilities. The Department of Defense had previously brought Claude into certain internal tools and workflows. But Anthropic’s restrictions on military use created friction with agencies that wanted broader access to the software. When those disagreements proved unresolvable, the administration granted agencies six months to stop using Anthropic products entirely, turning what had been a contract dispute into one of the more public clashes between Washington and a tech company in recent memory.

Will AI Replace Compliance Professionals?

The rapid development of automation, artificial intelligence (AI), and Regulatory Technology (RegTech) has begun transforming the regulatory compliance landscape. Financial institutions and corporations face an increasingly complex web of regulations, rising compliance costs, and growing expectations from regulators. In response, organizations are turning to automated systems to streamline monitoring, reporting, and risk management. These advancements have sparked an important question within the industry: will automation eventually replace compliance professionals? While technology is reshaping the compliance function by automating routine tasks, human oversight, interpretation, and strategic decision-making are likely to remain essential.

AI Data Centers and Rising Electric Bills

Electric bills are rising in many places, and the rapid expansion of AI data centers is adding new pressure to the power system. The big issue is how the electric grid pays for the infrastructure needed to serve rapidly growing electricity demand tied to AI. Serving that demand can require costly upgrades to the electric grid as well as securing additional electricity supply. When those costs are recovered through broadly applied rates instead of being assigned to the large new loads that triggered them, residential customers can see higher bills. State commissions and federal regulators influence these outcomes through tariffs, cost-allocation rules, and market design. As AI electricity use accelerates, questions of fairness and reliability have moved to the forefront of energy regulation.

Trump’s Executive Order Signals Federal Disruption for New State AI Laws

In the absence of comprehensive federal artificial intelligence (AI) legislation, states have moved aggressively to regulate AI. Beginning January 1, 2026, several major state AI laws imposed new safety and accountability obligations on AI developers. Just weeks before those laws took effect, President Trump’s Administration issued an executive order signaling a shift toward federal deregulation and preemption. The result is a looming conflict between state enforcement and federal resistance that is likely to continue defining United States AI regulation in 2026.

Synthetic Media, Real Harm: Regulating AI-Generated Deepfakes

Carolyn Nsimpasi
Associate Editor
Loyola University Chicago School of Law, JD 2026

The rapid advancement of artificial intelligence (AI) has enabled the creation of highly realistic synthetic media, commonly known as deepfakes. These AI-generated images, videos, and audio recordings can convincingly replicate real people, making it increasingly difficult to distinguish truth from fabrication. While deepfake technology has legitimate uses in entertainment, education, and accessibility, its growing misuse presents significant social, political, and ethical risks. As a result, the regulation of AI-generated deepfakes has become an urgent necessity.

To Compete Or Not to Compete: A Legal Question

Today, federal and state antitrust laws are as important as ever. However, modern courts struggle to apply traditional interpretations of antitrust law to modern technology and related anti-competitive practices. This is particularly true in the realm of emerging technologies, where algorithms, automation, and artificial intelligence increasingly dominate. As a result, regulators face a host of unique challenges in an increasingly interconnected, data-driven, and automated era. From business to finance, healthcare to housing, the importance of anti-competition law cannot be overstated.

Work Related: AI Governance and Regulation in the Employment Law Context

Today, an explosion in Artificial Intelligence (AI) development is taking the U.S. and global economies by storm. Companies like Nvidia (the first company to reach an approximately $5 trillion valuation), Microsoft, Alphabet (Google), and OpenAI (formerly a non-profit that still cites the common good as a core tenet of its charter) have kicked off what is widely understood to be an AI “Arms Race.” Investors, from venture capitalists to private equity behemoths, continue to pour billions of dollars into AI technology companies and associated ventures. As AI companies move from beta testing to widespread adoption and integration, debates over AI transparency, accountability, and regulation have risen to the forefront. Given this monumental shift and the ongoing uncertainty, properly understanding (and regulating) AI and automation technology is more pressing than ever before. The need for strong regulatory oversight, a broad regulatory consensus with clear guidance, a baseline code of ethics (at minimum), and robust federal and state regulation has become one of the most important issues of our time.

America’s Fractured Approach to AI Regulation

Federal efforts to promote artificial intelligence (“AI”) innovation by avoiding comprehensive regulation have prompted state legislatures to fill the regulatory void, creating a fractured regulatory landscape. This fragmentation threatens the very innovation the federal approach was meant to foster in a global race toward general AI. Today’s AI systems are examples of Artificial Narrow Intelligence: they are trained to perform specific tasks but cannot operate outside their defined parameters. In contrast, Artificial General Intelligence, or Strong AI, is a theoretical form of AI capable of applying prior knowledge and skills to new contexts, enabling it to learn and perform any intellectual task a human can without additional human training of the underlying models. This pursuit has driven unprecedented investment: technology corporations have poured billions of dollars into AI capital expenditures, a figure that only continues to rise. Meanwhile, compliance teams are left scrambling to manage an increasingly complex regulatory environment that is evolving faster than legal departments and regulators can effectively respond.

Will AI Make Trade Secrets No Longer Secret?

Most companies own valuable trade secrets, such as the recipe for Coca-Cola or Google’s algorithm. But can a company that develops AI have trade secrets? The Uniform Trade Secrets Act defines a trade secret as “information, including a formula, pattern, compilation, program, device, method, technique, or process,” that derives economic value and is the subject of efforts to maintain its secrecy. The protection of trade secrets is essential for companies to maintain their competitive edge and drive economic growth. As such, they are instrumental in both corporate governance and compliance. Companies already deal with the risks of employees using generative artificial intelligence (AI) and exposing trade secrets; however, recent AI regulations in Europe and the United States have further increased risks relating to trade secrets.