Tag: Privacy & Security
Breaching the Last Bastion of the Human Psyche: Neural Data as Biometrics
Earlier this year, the New York Times reported on a proposed expansion of the Colorado Privacy Act to cover "neural data" and its impact on neurotechnology, a field that already has noteworthy support within programming communities. What the Colorado Privacy Act aims to address is not the labs and medical studies conducted within clinics, but how neural data may be used in a consumer context. In doing so, it goes further than Illinois' pioneering Biometric Information Privacy Act (BIPA).
Generative AI: The Next Frontier in Fighting Financial Crime
Artificial intelligence (AI) is the latest tool in a financial institution's arsenal for restricting the flow of money channeled to fund illegal activities worldwide. As criminals become more innovative and sophisticated in using the latest technology to evade detection, financial institutions must follow suit and use similar technology to root out these crimes or risk facing regulatory sanctions. Money laundering generally refers to financial transactions in which criminals, including terrorist organizations, attempt to disguise the proceeds of their illicit activities by making the funds appear to have come from a legitimate source. It is not a new phenomenon: Congress passed the Bank Secrecy Act (BSA) in 1970 to ensure that financial institutions follow a set of guidelines, known as KYC (Know Your Customer/Client), to detect and prevent money laundering through their systems.
Reproductive Health Data Privacy – A Right To Life
Since the Supreme Court overturned Roe v. Wade on June 24, 2022, the Dobbs v. Jackson Women's Health Organization ruling that gutted the long-established right to an abortion has been a constant focus, both inside and outside the legal and healthcare communities. Notably, the ruling has remained a central concern within government, both federal and state, and across the tech sector. These Dobbs-related conversations share a theme: health data privacy, and more specifically, data privacy surrounding reproductive healthcare.
Legal Risks to Employers when Employees use ChatGPT
Since ChatGPT became publicly available in November 2022, it has raised questions for employers about how to incorporate the tool into workplace policies and best maintain compliance with government regulations. This artificial intelligence language platform, which is trained to interact conversationally and perform tasks, raises issues regarding intellectual property risks, inherent bias, data protection, and misleading content.
The Committee on Foreign Investment in the United States Cracks Down on TikTok: Is a Potential Ban on the Horizon?
Since 2019, officials from TikTok and its parent company, ByteDance, have been negotiating with the Committee on Foreign Investment in the United States (CFIUS) over the technical safeguards they must adopt to address U.S. national security concerns. The popular social media app, which gained traction in early 2020 amidst the Covid-19 pandemic, has drawn scrutiny from many officials over user privacy. The Biden administration has been working to push TikTok's Chinese owners to sell their stake in the app or face a potential national ban in the U.S. However, TikTok representatives argue that a sale will not alleviate concerns about user data privacy.
AI-ming for Better Healthcare: Legal Issues in Healthcare AI Usage
Artificial intelligence (AI) is the simulation of human intelligence by machines. It has revolutionized the healthcare space by improving patient outcomes in a variety of ways, and it has begun to make a positive impact on health systems and hospitals as healthcare worker burnout continues to rise. However, significant legal challenges accompany its groundbreaking nature. Hospitals and health systems have a duty to mitigate these legal challenges and to understand that AI should be used as a supplement to, not a replacement for, human intelligence.
Kraken Settles with the SEC in a $30 Million Deal
Kraken will pay $30 million to settle Securities and Exchange Commission (SEC) allegations that it broke the agency's rules with its crypto-asset staking products, and it will discontinue those products in the United States as part of its agreement with the regulator.
ChatGPT Artificial Intelligence: Cybersecurity Risks and Ethical Concerns
From "fake news" to misinformation to bots, it has become overwhelmingly challenging to authenticate information on the internet. This has not stopped the evolution of technology, as innovators compete to be on the cutting edge of the latest software. OpenAI is an artificial intelligence research and deployment company responsible for the launch of ChatGPT in November 2022. The newly released artificial intelligence chatbot is trained to generate realistic and convincing text. The software was trained on human literature and internet language, enabling it to create a body of text within the parameters of the prompt presented. With more than 1 million users, it has gained traction across the masses. However, the natural language processor has sparked controversy over cybersecurity threats and ethical concerns surrounding its usage.
Kidnapped Data: Healthcare Ransomware Attacks
Ransomware attacks are among the largest threats to the healthcare industry and a tough cybersecurity problem to address. From 2016 to 2021, there were almost 400 ransomware attacks on healthcare organizations in the US, and such attacks are estimated to have exposed the personal healthcare data of over 40 million patients. Since these attacks typically cannot be resolved without paying the ransom, it is important to invest in preventative measures to protect healthcare data from potential breaches.
The Case for Expanding Privacy Protections in a Post-Roe World
In Dobbs v. Jackson Women's Health Organization (Dobbs), the US Supreme Court ruled that abortion is not a fundamental right protected by the Constitution. The decision prompted additional abortion protections in California, Michigan, and Vermont, and led many patients, providers, regulators, and tech companies to rethink data privacy. However, with most abortions now banned in at least 13 states, this patchwork of state abortion laws, combined with the lack of any sufficient national privacy law, puts patient privacy at risk.