Mass Law Blog

Artificial Intelligence May Result In Human Extinction, But In the Meantime There’s a Lot of Lawyering To Be Done

Jun 6, 2023

Any sufficiently advanced technology is indistinguishable from magic. – Arthur C. Clarke

Do you recall when Netscape Navigator was released in December 1994? I suspect not. You may not have been born yet, or you may have been too young to take notice. This software marked the public’s first widespread introduction to a user-friendly web browser, and it was a big deal at the time. I remember buying a copy on disk from Egghead Software in Boston and trying (with limited success) to access the World Wide Web using a dialup modem.

Of course, few people foresaw that Navigator would set off the “dotcom boom,” which ended with the “dotcom crash” in 2000, only to be followed by the ubiquitous internet technologies we live with today.

Will OpenAI’s release of ChatGPT in November 2022 be remembered as a similar event for artificial intelligence? It may prove to be as significant as the invention of the printing press, as Henry Kissinger and his co-authors suggest. (link, WSJ paywall). Or, it may signal the demise of the human race, as Stephen Hawking warned. Or, once the novelty wears off, it may simply become another forgotten episode in the decades-long AI over-hype cycle.

However, for now one thing is clear: there will be a lot of lawyering to be done. 

When the internet took off in the late ’90s, a vast number of legal issues emerged, keeping lawyers busy for years afterward. But this wasn’t immediately obvious at the time; the issues surfaced gradually. By contrast, in 2023 generative AI – as represented by ChatGPT and other “large language models” (LLMs) – has landed like a bomb. Seemingly hundreds of new products are released daily, and countless new companies are being formed to join the “AI gold rush” of 2023.

So, what are the legal issues (so far)? A warning: this topic is already vast, so what follows is a selective summary. I hope to write about these issues in more detail in the future.

AI and Copyright Law

At least for the moment, when it comes to generative AI, copyright law is the primary battlefield. The copyright issues can be divided into legal questions that deal with the “output” side versus the “input” or “training” side.

The Output Issue – Who Owns An AI Model’s Output? Who owns the copyright in works created by AI image generators, whether generated autonomously or as a human-AI blend?

I covered this topic from the perspective of the Copyright Office in a recent post, Generative AI Images Struggle for Copyright Protection. Shortly after I published that post the Copyright Office issued a policy statement, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, reaffirming its policy that copyrightable works must be the product of human authorship, but noting that there is a substantial grey area that will have to be decided on a case-by-case basis:

[works] containing AI-generated material [may] also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.

The Copyright Office has also created a web page tracking its activities at the intersection of copyright and AI, including an initiative to examine the copyright law and policy issues raised by AI technology. (link)

9th Cir.: Monkey selfie not copyright-protected

However, the Copyright Office is not the last word when it comes to AI and copyrightability – the federal courts are, and the issue is pending in a lawsuit before the U.S. District Court for the District of Columbia (Thaler v. Perlmutter). Suffice it to say that there are no court decisions as yet and, all monkey-business aside, who (if anyone) owns AI-generated output is an open (and rapidly evolving) question.

The Input Issue – Can You Use Copyrighted Content To Train An AI LLM? The training stage of AI tools requires the scraping and extraction of relevant information from underlying datasets (typically the internet), which contain copyright-protected works. Can AI companies simply hoover up copyright-protected works without the consent of their owners and use that material to “train” LLMs?

AI companies think so, but content creators think otherwise. Getty has filed suit against Stability AI in the U.S. (link) and in a parallel case in London, claiming that Stability AI illegally copied over 12 million photographs from Getty’s website to train Stable Diffusion. Other cases have been filed on the same issue (Andersen v. Stability AI, pending N.D. Cal.). The central legal issue in these cases is whether the unauthorized use of copyrighted materials for training purposes is infringement, or whether (as the AI companies are certain to assert) it is “transformative” and therefore protected under the fair use doctrine. The Supreme Court’s recent decision in Warhol v. Goldsmith is likely to have some bearing on this issue. (sarcasm …)

While the legal question under U.S. law may be whether these activities qualify as fair use, in the EU the copyright aspects of training are likely to fall under the text and data mining (TDM) exceptions in the Copyright in the Digital Single Market (CDSM) Directive. (To go in depth on the TDM exceptions see Quintais, Generative AI, Copyright and the AI Act.)

However, the issues surrounding output ownership and the use of copyrighted materials to train LLMs merely scratch the surface. It’s easy to foresee that copyright law and AI will intersect in other ways. One example is music that copies an artist’s “style” or “voice.” At present copyright law does not protect artistic style. But for the first time, music AI systems make it easy to copy a style or an artist’s voice, so there will be pressure on the law to address this form of copying. It remains to be seen whether copyright law will expand to encompass artistic style, or whether artists will have to rely on doctrines such as the right of publicity or “passing off.”

Liability Shield Under CDA/Section 230

While the copyright issues in AI are complex, the legal issues around AI are not limited to copyright. An AI could generate defamatory content. Even worse, it could produce harmful or dangerously wrong information. If a user prompts an AI for cocktail instructions and it offers a poisonous concoction, is the AI operator liable? What if the AI instructs someone on how to build a dangerous weapon, or how to commit suicide? What if an AI “hallucination” (false information generated by an AI) causes harm or injury?

Controversy over online companies’ liability for harmful content has already led to countless lawsuits, and generative AI tools are likely to be pulled into the fray. If ChatGPT and other LLMs are deemed to be “information content providers,” they will not be immunized by Section 230 as it exists today.

Supreme Court Justice Neil Gorsuch alluded to this during oral argument in Gonzalez v. Google, suggesting that generative AI would not be covered by Section 230’s liability shield. (link, p. 49) Two of Section 230’s 1996 congressional co-authors have publicly stated that it is not – “If you are partly complicit in content creation, you don’t get the shield.” (link)

Section 230 was enacted to protect websites from liability for content provided by their users. However, given the unpopularity of Section 230 on both sides of the aisle, it seems unlikely that Congress will amend Section 230, or pass new laws, to limit the liability of AI companies whose products generate illegal content based on user inputs. The opposing view, as expressed by internet law scholar Prof. Eric Goldman, is that “we need some kind of immunity for people who make [AI] tools . . . without it, we’re never going to see the full potential of A.I.” (link, behind NYT paywall).

Contracts and Open Source

Because copyright-protected works can also be the subject of contracts, there will be issues of contract law that intersect with copyright. We can already see examples. In the Getty/Stability AI case mentioned above, Getty’s website terms prohibit the very use Stability AI made of Getty’s photos. And a class action suit has been filed over AI scraping of code on GitHub without the attribution required by the applicable open source (OSS) license agreements, raising questions about the risks of using AI in software development. That suit also asserts claims under DMCA Section 1202 for removal of copyright management information (CMI).

These cases point to the fact that companies need to monitor and regulate their use of AI code generators to avoid tainting their code bases. Transactional lawyers (licensing, M&A) will have to be alert to these issues. Suffice it to say that there are significant questions over whether the use of AI-generated code derived from open source code requires compliance with the restrictions in the underlying open source licenses.

Patent Law

What if an AI creates a patentable invention? As with copyright, the USPTO will recognize only “natural persons,” and not machines, as inventors. This has already been the subject of litigation (Thaler v. Vidal, CAFC 2022; yes, the same Thaler that is the plaintiff in the copyright case cited above) and an unsuccessful Supreme Court appeal (cert. denied). The USPTO is conducting hearings concerning AI technologies and inventorship issues; however, at present the law, as stated by the CAFC in Thaler, is that “only a natural person can be an inventor, so AI cannot be.” For the foreseeable future there will be significant questions about the ability to patent inventions conceived with the assistance of AI.

Government Regulation

Governments worldwide are waking up to the issues created by AI. The U.S. has issued policy statements and executive orders, and the U.S. Senate is holding hearings on an AI regulatory framework. While there is no comprehensive U.S. federal regulatory scheme for AI technologies today, that may be just a matter of time. When AI scientists warn that the risk of extinction from AI is comparable to pandemics and nuclear war, Congress is likely to pay attention.

Pending federal umbrella legislation, individual agencies are not sitting on their hands, and every day seems to bring some new agency action. The FDA has weighed in on the regulation of machine learning-enabled medical devices. The CFPB has focused on the use of AI in credit approval and lending decisions. The FTC is pondering how to use its authority to promote fair competition and guard against unfair or deceptive AI practices. Although not a regulatory agency, NIST, at the direction of Congress, published a voluntary AI Risk Management Framework in early January that is likely to be the first of many industry standards. The CFPB, DOJ, EEOC and FTC have issued a joint statement expressing their concerns about potential bias in AI systems. (link) And Homeland Security has announced the formation of a task force to study the role of AI in international trade, drug smuggling, online child exploitation and the security of critical infrastructure.

Not wanting to be left behind, California, Connecticut, Illinois and Texas are starting to take action to protect the public from what they perceive to be the potential harms of AI technology. (For a deeper dive see How California and other states are tackling AI legislation)

However, it would be a mistake to focus solely on U.S. law. International law – and in particular EU law – will play a significant role in the evolution of AI. The EU’s Artificial Intelligence Act (still a work in progress) puts the EU far ahead of the U.S. in creating a legal framework to regulate AI. While the AI Act would regulate the use of AI technologies only in Europe, it could set a global standard, much as the EU General Data Protection Regulation (GDPR) has done for privacy regulation. (To go in depth on this topic see Perkins Coie, The Latest on the EU’s Proposed Artificial Intelligence Act.)

The Future of AI and Law

I expect that the topics I’ve touched on above will prove to be only the tip of the AI iceberg. Regulators are already looking at issues involving privacy, facial recognition, deep fakes and disinformation, substantive criminal law (criminal behavior by an AI), anti-discrimination law and racial or gender bias.

Just as it was nearly impossible to foresee the massive volume of litigation that would follow the growth of the internet after the passage of Section 230 of the CDA in 1996 and the DMCA in 1998, the legal issues around AI are only beginning to be understood.

I’ve said it many times before on this blog, but I’ll say it again – stay tuned. And lawyers – get ready!

******************

Update, June 26, 2023: It didn’t take long for the first defamation suit to be filed. See Australian mayor readies world’s first defamation lawsuit over ChatGPT content.