A US Data Privacy Law That Bites, Hopefully
Despite numerous efforts by industry groups and tech companies over the past few months to water down and ultimately halt the California Consumer Privacy Act of 2018 (“CCPA” or “the Act”), the first comprehensive U.S. state data privacy law, the CCPA’s final language was set on September 13, 2019, the end of California’s legislative calendar, and the Act will go into effect on January 1, 2020. Its goal is to give California residents control over the personal information that companies collect and process.
Reflecting on an Excellent 2018-2019 Academic Year
We are coming up on the end of the 2018-2019 academic year and, for myself and my fellow 3Ls, graduation! I want to take this opportunity to thank the members of our journal, our authors, our faculty advisers, and our readers for their continued support.
Removing “Incident to” Billing: Recommendations from Policy Experts
Earlier this year, the Medicare Payment Advisory Commission (MedPAC) unanimously voted to recommend eliminating “incident to” Medicare billing for advanced practice registered nurses (APRNs) and physician assistants (PAs). MedPAC is an independent congressional agency that advises Congress on Medicare-related issues by analyzing access to and quality of care. If this recommendation is adopted, APRNs and PAs would be able to bill Medicare only directly, reducing Medicare’s payment from 100% of the fee schedule under services billed “incident to” to 85% under direct billing. This recommendation could save the Medicare program up to $250 million annually and would allow for better data collection on the volume of services performed by APRNs and PAs, whose work is often masked in “incident to” billing reports. Though there is still some debate over whether the financial loss from removing this option is too great for primary care physicians who hire APRNs and PAs for their practices, the benefits of direct billing likely outweigh the losses.
Regulating Artificial Intelligence – Is It Possible?
Artificial intelligence is all around us. Whether it exists in your iPhone as “Siri” or in complex machines that detect diabetic retinopathy, it is constantly growing and becoming a regular part of modern life. As with any new technology, regulating artificial intelligence is becoming increasingly difficult. The question facing us now is how to regulate further development without inadvertently hindering its growth. Recently, the Food and Drug Administration took a step toward further regulation by introducing a review process for medical artificial intelligence. This is just one instance of how regulation may affect the evolution of artificial intelligence.
The FDCA and Cosmetics Enforcement: Better Late than Never
In March 2019, the FDA issued a statement explaining that asbestos had been found in certain cosmetic products sold at the retail stores Claire’s and Justice. The Food, Drug, and Cosmetic Act (FDCA) has always granted the FDA authority to monitor cosmetic products for adulteration or misbranding similar to its authority over food. However, enforcement in this area has been notably quiet. The FDA’s change in position on its authority is long overdue.
Exploring COPPA through the FTC’s Complaint against TikTok
The Children’s Online Privacy Protection Act (“COPPA”) prohibits unfair or deceptive collection, use, and disclosure of children’s personal information on the internet. COPPA covers both website operators and app developers and prohibits the collection of children’s personal information without verifiable parental consent. On February 27, 2019, the Federal Trade Commission (“FTC”) filed a complaint in U.S. District Court against TikTok, previously known as Musical.ly. The complaint alleged that Musical.ly knowingly violated COPPA when it collected data from children without parental consent. Musical.ly settled for $5,700,000.00, the largest civil penalty the FTC has obtained for violations of COPPA.
FCPA Establishes Corporate Regulation of Text Messaging Apps
On March 12, 2019, the Department of Justice (“DOJ”) announced revisions to the FCPA Corporate Enforcement Policy under the Foreign Corrupt Practices Act. The changes now require company oversight of ephemeral messaging apps used by any employee, stockholder, or agent who discusses business records on the platform. Publicly traded companies must now establish internal compliance policies to review use of ephemeral messaging services and provide ongoing oversight of them, and they may want to prohibit the use of such apps for business purposes altogether.
Regulating the Un-Explainable: The Difficulties in Regulating Artificial Intelligence
From Siri to Alexa to deep learning algorithms, artificial intelligence (AI) has become commonplace in most people’s lives. In a business context, AI has become an indispensable tool for accomplishing corporate goals. Due to the complexity of the algorithms required to make quick and complex decisions, a “black box problem” has emerged for those who deploy these increasingly elaborate forms of AI. The “black box” refers to the opacity that shrouds the AI decision-making process. While no current regulation explicitly bans or restricts the use of AI in decision-making, many tech experts argue that the black box needs to be opened in order to deconstruct not only the technically intricate decision-making capabilities of AI, but also the compliance-related problems this technology may cause.
“Grounding”: Federal Regulation in the Context of Aircraft Suspensions
On March 10, 2019, Ethiopian Airlines Flight 302, en route to Nairobi, Kenya, crashed shortly after take-off, leaving no survivors. It became the carrier’s deadliest crash and its first fatal crash since January 2010. Most notably, however, it was the second fatal crash involving Boeing’s new 737 MAX jet in less than five months, following the Lion Air Flight 610 accident in October 2018. The day after the tragedy, Ethiopian Airlines grounded its entire Boeing 737 MAX 8 fleet until further notice. Many other airlines suspended operations of the aircraft as well, and numerous countries banned the 737 MAX from their airspace.
Protecting PHI as the Health Care Industry Promotes a Shift to Telemedicine
At a time when much of the healthcare industry has shifted to incorporate telehealth and telemedicine, health care organizations and providers face both the upkeep of a growing influx of patient data and the challenges associated with their obligation to maintain patient privacy. These challenges become increasingly burdensome as providers strive to keep up with advancing technology. Healthcare organizations must maintain patient privacy through close monitoring of cloud services, employee use of mobile devices, patient access to medical information and scheduling, and access to provider networks through non-organizational devices. Maintaining these multiple platforms is costly, and the industry remains at risk amid rising volumes of cybersecurity attacks and breaches. UConn Health recently experienced a data breach that required notifying 326,000 people of potential exposure of their protected health information (PHI), including names, dates of birth, addresses, billing information, and even Social Security numbers, due to potential access by an unauthorized person.