Evolving Technology in Law Enforcement: Concerns and Solutions
Facial recognition technology has become widespread in consumer and commercial environments, and particularly in law enforcement. Despite its numerous benefits, these systems raise serious concerns about privacy and data protection, and current legal frameworks are not strong enough to manage those risks effectively. No federal law currently regulates the use of facial recognition technology; instead, regulation is left to the states. Without aggressive state initiatives, law enforcement's use of facial recognition technology will continue unabated, producing data collection mired in algorithmic bias and a disregard for civil liberties.
The Government’s Block of Anthropic and the Future of AI Procurement
Governments around the world have increasingly turned to artificial intelligence (AI) as a tool for defense and national security. In the United States, that shift has come with its share of conflict. In early 2026, a dispute between the federal government and AI company Anthropic came to a head after the Trump administration moved to bar the Pentagon from using Anthropic’s Claude software. At its core, the standoff exposed a tension that is only going to grow more common: tech companies that want to set limits on how their products are used, versus a government that sees those limits as a threat to its own capabilities. The Department of Defense had previously brought Claude into certain internal tools and workflows. But Anthropic’s restrictions on military use created friction with agencies that wanted broader access to the software. When those disagreements proved unresolvable, the administration granted agencies six months to stop using Anthropic products entirely, turning what had been a contract dispute into one of the more public clashes between Washington and a tech company in recent memory.
Synthetic Media, Real Harm: Regulating AI-Generated Deepfakes
Carolyn Nsimpasi
Associate Editor
Loyola University Chicago School of Law, JD 2026
The rapid advancement of artificial intelligence (AI) has enabled the creation of highly realistic synthetic media, commonly known as deepfakes. These AI-generated images, videos, and audio recordings can convincingly replicate real people, making it increasingly difficult to distinguish truth from fabrication. While deepfake technology has legitimate uses in entertainment, education, and accessibility, its growing misuse presents significant social, political, and ethical risks. As a result, the regulation of AI-generated deepfakes has become an urgent necessity.
To Compete Or Not to Compete: A Legal Question
Today, federal and state antitrust laws are as important as ever. However, modern courts struggle to apply the traditional interpretation and application of antitrust law to modern technology and related anti-competitive practices. This is particularly true in the realm of emerging technologies, where algorithms, automation, and artificial intelligence increasingly dominate. As a result, regulators face a host of unique challenges in an increasingly interconnected, data-driven, and automated era. From business to finance, healthcare to housing, the importance of anti-competition law cannot be overstated.
From Spreadsheets to Statutes: KPMG Enters into Law
The Arizona Supreme Court has approved the accounting firm Klynveld Peat Marwick Goerdeler (KPMG) to enter the practice of law. KPMG will be the first Big Four accounting firm to open its own law firm. This approval has created a stir in the legal community due to concerns about conflicts of interest and ethical compliance. Although KPMG has only received approval in Arizona, the move could raise issues regarding conflicts, ethical challenges, and fair competition.
Generative AI: The Next Frontier in Fighting Financial Crime
Artificial intelligence (AI) is the latest tool in a financial institution’s arsenal to restrict the flow of money being channeled to fund illegal activities worldwide. As criminals become more innovative and sophisticated in using the latest technology to evade detection, financial institutions must follow suit and use similar technology to root out these crimes or risk facing regulatory sanctions. Money laundering generally refers to financial transactions in which criminals, including terrorist organizations, attempt to disguise the proceeds of their illicit activities by making the funds appear to have come from a legitimate source. This is not a new phenomenon: Congress passed the Bank Secrecy Act (BSA) in 1970 to ensure that financial institutions follow a set of guidelines, now commonly associated with Know Your Customer (KYC) programs, to detect and prevent money laundering through their systems.
AI Copyright Conundrum: An Evolving Legal Landscape
The objective of copyright law is to protect certain rights of a human author. But what happens when a nonhuman author creates something that is original, fixed, and possesses a minimal degree of creativity? Well, in the wild case of Naruto v. Slater, the court held that animals cannot claim copyright protection in a “monkey selfie.” As the technological world advances, the latest dispute that has everyone going bananas is AI and copyright protection. The Copyright Office will not register works “produced by a machine or mere mechanical process” without creative input from a human author, because extending protection to such works would run counter to the objective of copyright law.
AI Nancy Drew, Is That You?
The United States spends more money per person on health care than any other country, approximately $4.2 trillion in 2021. Unfortunately, our complex health care system and its large budget make fraud a significant concern for the U.S. Government, payers, and patients. The National Health Care Anti-Fraud Association estimates that as much as 10% of annual healthcare spending is lost to scams, resulting in billions in losses yearly. To combat healthcare fraud, the Department of Health and Human Services Office of Inspector General, in collaboration with state law enforcement and other governmental agencies, has created special Strike Forces. These efforts have led to substantial recoveries of federal funds and to criminal and civil prosecution of individuals and entities involved in Medicare and Medicaid fraud. Beyond avoiding payment of unnecessary or fraudulent claims, individual healthcare payers are motivated to prevent fraud by the severe penalties associated with the False Claims Act, the Anti-Kickback Statute, the Physician Self-Referral Law (Stark Law), and the Civil Monetary Penalties Law. How can individual payers detect and try to prevent fraud? The answer is AI.
The I.R.S. is using AI to Crack Down on Tax Evasion
The Internal Revenue Service (I.R.S.) issued a press release on September 8, 2023, detailing how the agency plans to use at least part of the $80 billion allocation it received under the Inflation Reduction Act last year. I.R.S. Commissioner Danny Werfel plans to use the funds to make compliance enforcement efforts and tax evasion identification more effective and efficient. How does he plan to do this? The overwhelmed and perhaps overworked agency will be using artificial intelligence (AI) programs and features to expedite and assist with repetitive processes, as well as to audit parties that are too complicated or large for the I.R.S.’s current capabilities.
Healthcare’s Red and Blue Pill: AI
Artificial Intelligence (AI) has gained widespread attention, often perceived as a buzzword. Recently, concerns about its potential dangers and issues with plagiarism have surfaced. However, AI holds immense promise for transforming industries reliant on data analysis and predictive algorithms, especially in healthcare. AI can significantly improve healthcare by aiding in diagnosis, optimizing patient outcomes, reducing costs, and saving time.