Until recently, Artificial Intelligence (AI) was the domain of science fiction connoisseurs and Silicon Valley tech savants. Now, AI is ubiquitous in daily life, with a seemingly endless range of possible applications. As with any emerging technology, many novel questions and concerns need to be addressed. Whether related to copyright ownership, ethics, cybersecurity obstacles, or discrimination and bias, concerns surrounding AI usage are mounting. Regulation of AI systems has been increasing rapidly worldwide, while the U.S. regulatory landscape has remained relatively sparse. But it is unlikely to stay that way for long.
Last Friday, Facebook’s Oversight Board (“the Board”) issued its latest verdict, overturning the company’s decision to remove a post that moderators alleged violated Facebook’s Violence and Incitement Community Standard. This judgment brings the Board’s total number of decisions to seven, with the Board overturning Facebook’s own decision in five of the six substantive rulings it has issued. The Board’s cases have covered several topics so far, including nudity and hate speech. Because Facebook’s Oversight Board has no modern equivalent, it is worth exploring what went into this experiment’s formation.
On November 3, 2020, new rules from the Department of Health and Human Services concerning information blocking in healthcare will come into effect. The rules implement the 21st Century Cures Act (“Act”), the latest in the government’s efforts to lower costs and allow for greater patient access to electronic health information (“EHI”). The Act aims to prevent covered healthcare providers from inappropriately restricting the flow of EHI. Violations of the new rules may result in considerable civil fines.
Earlier in 2019, a lawsuit was filed against University of Chicago Medicine, University of Chicago Medical Center, and Google. The suit claims that patient information was shared with Google as part of a study aimed at advancing the use of Artificial Intelligence; however, patient authorization was not obtained, and the data used was not properly de-identified. In 2017, University of Chicago (UChicago) Medicine began sending patient data to Google as part of a project to see whether historical health record data could be used to predict future medical events.
While the legal community has spent much of the last year exhaustively dissecting the European Union’s new General Data Protection Regulation (GDPR), nearly half of businesses in the United States are still not compliant with the standards governing the collection, storage, and disposal of payment (credit/debit) card data. Businesses of all sizes should work to ensure that they understand and comply with these standards, or risk significant exposure in the event of a payment card data breach traced back to their organization.
Financial institutions often rely on outside vendors to provide information technology services. While doing so often yields economic efficiency and quicker technological innovation, the risks associated with outsourcing information technology services are significant. Institutions must develop strong vendor management programs to ensure the safety of their customers’ personal information. Several large financial institutions have come together to create a new consortium to perform vendor and partner due diligence.