The objective of copyright law is to protect certain rights of a human author. But what happens when a nonhuman author creates something that is original, fixed, and possesses a minimal degree of creativity? In the wild case of Naruto v. Slater, the court held that animals cannot claim copyright protection in a “monkey selfie.” As the technological world advances, the latest dispute that has everyone going bananas is AI and copyright protection. The Copyright Office will not register works “produced by a machine or mere mechanical process” absent creative input from a human author, because such protection would run counter to the objective of copyright law.
The United States spends more money per person on health care than any other country, approximately $4.2 trillion in 2021. Unfortunately, our complex health care system and its large budget make fraud a significant concern for the U.S. Government, payers, and patients. The National Health Care Anti-Fraud Association estimates that as much as 10% of annual healthcare spending is lost to fraud, resulting in billions in losses yearly. To combat healthcare fraud, the Department of Health and Human Services Office of Inspector General, in collaboration with state law enforcement and other governmental agencies, has created special Strike Forces. These efforts have led to substantial recoveries of federal funds and criminal and civil prosecution of individuals and entities involved in Medicare and Medicaid fraud. Beyond avoiding unnecessary or fraudulent claims, individual healthcare payers are motivated to prevent fraud by the severe penalties associated with the False Claims Act, Anti-Kickback Statute, Physician Self-Referral Law (Stark Law), and Civil Monetary Penalties Law. How can individual payers detect and try to prevent fraud? The answer is AI.
The Internal Revenue Service (I.R.S.) issued a press release on September 8, 2023, detailing how the agency plans to use at least part of the $80 billion allocation it received from the Inflation Reduction Act last year. I.R.S. Commissioner Danny Werfel plans to use the funds to make compliance enforcement efforts and tax evasion identification more effective and efficient. How does he plan to do this? The overwhelmed and perhaps overworked agency will be using artificial intelligence (AI) programs and features to expedite and assist with repetitive processes, as well as to audit parties that are too complicated or large for the I.R.S.’s current capabilities.
Artificial Intelligence (AI) has gained widespread attention and is often dismissed as a buzzword. Recently, concerns about its potential dangers and issues with plagiarism have surfaced. However, AI holds immense promise for transforming industries reliant on data analysis and predictive algorithms, especially healthcare. AI can significantly improve healthcare by aiding in diagnosis, optimizing patient outcomes, reducing costs, and saving time.
Since ChatGPT became public in November 2022, it has created questions for employers about how to incorporate the tool into workplace policies and best maintain compliance with government regulations. This artificial intelligence language platform, which is trained to interact conversationally and perform tasks, raises issues regarding intellectual property risks, inherent bias, data protection, and misleading content.
In November of last year, OpenAI launched ChatGPT, an AI chatbot that engages users in dialogue, answers questions, and writes responses to prompts. Google quickly responded to the technological advancement by creating its own chatbot, Bard, which Google claims will draw “on information from the web to provide fresh, high-quality responses.” AI has quickly embedded itself into everyday activities. Additionally, in light of recent mass layoffs, experts predict that AI could displace tens of millions of jobs in the United States in the coming years. Yet with new AI-powered chatbots, AI has gone from simply replacing people to assisting them with tasks. As with all emerging technology, the general public may worry about regulating something so intrusive and yet so powerful and helpful to society. Given the vast amount of knowledge AI can provide in seconds, it is necessary that Congress catch up to the emerging technology and create regulations for AI that respect intellectual property and copyright laws and address how AI exacerbates racial and gender disparities in the United States.
Until recently, Artificial Intelligence (AI) was the domain of science fiction connoisseurs and Silicon Valley tech savants. Now, AI is ubiquitous in our daily lives, with a seemingly endless number of possible applications. As with any new and emerging technology, there are many novel questions and concerns that need to be addressed. Whether related to copyright ownership, ethics, cybersecurity obstacles, or discrimination and bias, concerns surrounding AI usage are mounting. AI regulation has been rapidly increasing worldwide, while the U.S. regulatory landscape has remained relatively sparse. But it won’t stay that way for long.
The use of facial recognition technology in the commercial context generates numerous consumer privacy concerns. As technology becomes increasingly present in many aspects of our lives, regulations at the state and federal levels are struggling to catch up. Currently, only three states (Illinois, Washington, and Texas) have implemented biometric privacy laws, and only Illinois grants individuals a private right of action.
Artificial intelligence is all around us. Whether it exists in your iPhone as “Siri” or in complex machines detecting diabetic retinopathy, it is constantly growing and becoming a regular part of modern life. As with any new technology, regulating artificial intelligence is becoming an increasingly pressing challenge. The question facing us now is how to encourage further development without inadvertently hindering its growth. Recently, the Food and Drug Administration has attempted to take steps toward further regulation of artificial intelligence by introducing a review process for medical artificial intelligence. This is just one instance of how regulation may affect the evolution of artificial intelligence.
From Siri to Alexa, to deep learning algorithms, artificial intelligence (AI) has now become commonplace in most people’s lives. In a business context, AI has become an indispensable tool for businesses to utilize in accomplishing their goals. Due to the complexity of the algorithms required to make quick and complex decisions, a “black box problem” has emerged for those who utilize these increasingly elaborate forms of AI. The “black box” simply refers to the level of opacity that shrouds the AI decision-making process. While no current regulation explicitly bans or restricts the use of AI in decision-making processes, many tech experts argue that the black box of AI needs to be opened in order to deconstruct not only the technically intricate decision-making capabilities of AI, but also the possible compliance-related problems this type of technology may cause.