Synthetic Media, Real Harm: Regulating AI-Generated Deepfakes
Carolyn Nsimpasi
Associate Editor
Loyola University Chicago School of Law, JD 2026
The rapid advancement of artificial intelligence (AI) has enabled the creation of highly realistic synthetic media, commonly known as deepfakes. These AI-generated images, videos, and audio recordings can convincingly replicate real people, making it increasingly difficult to distinguish truth from fabrication. While deepfake technology has legitimate uses in entertainment, education, and accessibility, its growing misuse presents significant social, political, and ethical risks. As a result, the regulation of AI-generated deepfakes has become an urgent necessity.
To Compete Or Not to Compete: A Legal Question
Today, federal and state antitrust laws are as important as ever. However, modern courts struggle to apply the traditional interpretation and application of antitrust law to modern technology and related anti-competitive practices. This is particularly true in the realm of emerging technologies, where algorithms, automation, and artificial intelligence increasingly dominate. As a result, regulators face a host of unique challenges in an increasingly interconnected, data-driven, and automated era. From business to finance, healthcare to housing, the importance of competition law cannot be overstated.
From Spreadsheets to Statutes: KPMG Enters into Law
The Arizona Supreme Court has approved the accounting firm Klynveld Peat Marwick Goerdeler (KPMG) to enter the practice of law. KPMG will be the first Big Four accounting firm to open its own law firm. This approval has created a stir in the legal community due to conflict-of-interest and ethical compliance concerns. Although KPMG has received approval only in Arizona, there could be broader issues regarding conflicts, ethical challenges, and fair competition.
Generative AI: The Next Frontier in Fighting Financial Crime
Artificial intelligence (AI) is the latest tool in a financial institution’s arsenal to restrict the flow of money channeled to fund illegal activities worldwide. As criminals become more innovative and sophisticated in using the latest technology to evade detection of their financial crimes, financial institutions must follow suit and deploy similar technology to root out these crimes or risk facing regulatory sanctions. Money laundering generally refers to financial transactions in which criminals, including terrorist organizations, attempt to disguise the proceeds of their illicit activities by making the funds appear to have come from a legitimate source. This is not a new phenomenon. Congress passed the Bank Secrecy Act (BSA) in 1970 to ensure financial institutions follow a set of guidelines known as KYC (Know Your Customer/Client) to detect and prevent money laundering through their systems.
AI Copyright Conundrum: An Evolving Legal Landscape
The objective of copyright law is to protect certain rights of a human author. But what happens when a nonhuman author creates something that is original, fixed, and possesses a minimal degree of creativity? Well, in the wild case of Naruto v. Slater, the court held that animals cannot claim copyright protection in a “monkey selfie.” As the technological world advances, the latest dispute that has everyone going bananas is AI and copyright protection. The Copyright Office will not register works “produced by a machine or mere mechanical process” without creative input from a human author, because that kind of protection runs against the objective of copyright law.
AI Nancy Drew, Is That You?
The United States spends more money per person on health care than any other country, approximately $4.2 trillion in 2021. Unfortunately, our complex health care system and its large budget make fraud a significant concern for the U.S. Government, payers, and patients. The National Health Care Anti-Fraud Association estimates that as much as 10% of annual healthcare spending is lost to scams, resulting in billions in losses yearly. To combat healthcare fraud, the Department of Health and Human Services Office of Inspector General, in collaboration with state law enforcement and other governmental agencies, has created special Strike Forces. These efforts have led to substantial recoveries of federal funds and criminal and civil prosecution of individuals and entities involved in Medicare and Medicaid fraud. Beyond avoiding unnecessary or fraudulent claims, individual healthcare payers are motivated to prevent fraud by the severe penalties associated with the False Claims Act, Anti-Kickback Statute, Physician Self-Referral Law (Stark Law), and Civil Monetary Penalties Law. How can individual payers detect and try to prevent fraud? The answer is AI.
The I.R.S. is using AI to Crack Down on Tax Evasion
The Internal Revenue Service (I.R.S.) issued a press release on September 8, 2023, detailing how the agency plans to use at least part of the $80 billion allocation it received from the Inflation Reduction Act last year. I.R.S. Commissioner Danny Werfel plans to use the funds to make compliance enforcement efforts and tax evasion identification more effective and efficient. How does he plan to do this? The overwhelmed and perhaps overworked agency will be using artificial intelligence (AI) programs and features to expedite and assist with redundant processes, as well as to audit parties that are too complicated or large for the I.R.S.’s current capabilities.
Healthcare’s Red and Blue Pill: AI
Artificial Intelligence (AI) has gained widespread attention, often perceived as a buzzword. Recently, concerns about its potential dangers and issues with plagiarism have surfaced. However, AI holds immense promise for transforming industries reliant on data analysis and predictive algorithms, especially in healthcare. AI can significantly improve healthcare by aiding in diagnosis, optimizing patient outcomes, reducing costs, and saving time.
Legal Risks to Employers when Employees use ChatGPT
Since ChatGPT became public in November 2022, it has created questions for employers about how to incorporate the tool into workplace policies and best maintain compliance with government regulations. This artificial intelligence language platform, which is trained to interact conversationally and perform tasks, raises issues regarding intellectual property risks, inherent bias, data protection, and misleading content.
The Rise of AI: Why Congress Must Regulate Artificial Intelligence Before it is too Late
In November of last year, OpenAI launched ChatGPT, an AI chatbot that engages users in dialogue to answer questions, write responses to prompts, and interact with the user. Google quickly responded to the technological advancement by creating its own chatbot, called Bard, which Google claims will draw “on information from the web to provide fresh, high-quality responses.” AI has quickly embedded itself into many everyday activities. Additionally, in light of recent mass layoffs, experts predict that AI could displace tens of millions of jobs in the United States in the coming years. With new AI-powered chatbots, AI has gone from simply assisting people with tasks to replacing them. As with all emerging technology, the general public may worry about regulating something so intrusive and yet so powerful and helpful to society. Given the immense amount of knowledge AI can provide in seconds, it is necessary that Congress catch up to the emerging technology and create regulations for AI that respect intellectual property and copyright laws and address how AI adds to racial and gender disparities in the United States.