
Federal Judge In SF Rules That AI Company Anthropic Did Not Violate Copyright Law In Training Its Chatbot

In what is being seen as an important early judicial ruling for the AI industry, a federal judge in San Francisco has ruled that Anthropic did not break the law when it used copyrighted material to train its AI chatbot Claude. The company will have to go to trial, however, over its use of pirated copies of books.

US District Judge William Alsup issued a pretrial ruling late Monday that absolves San Francisco-based Anthropic, for now, on the question of using books and other copyrighted material to train its AI model. As the Associated Press reports, Alsup was convinced by Anthropic’s attorneys that reading the material into its large language models qualifies as “fair use” under copyright law, because the resulting product, the chatbot, was “quintessentially transformative.”

“Like any reader aspiring to be a writer, Anthropic’s [AI large language models] trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Alsup wrote in his ruling.

But, Alsup said that a trial could proceed on the question of how Anthropic collected the books it first fed into Claude, namely from pirated copies found on the internet. Internal communications at the company allegedly reveal that employees knew this could spell trouble, and only later did they pay for digital copies of the books.

Alsup wrote that “Anthropic had no entitlement to use pirated copies for its central library,” adding, “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages.”

This decision may set some precedent in the ongoing battles over chatbots and the fast-and-loose way in which companies including Anthropic and OpenAI have scraped the internet, copyrights be damned, to train the robots how to write and respond to human prompts.

A case with a somewhat different angle is headed to trial in New York, in which the New York Times and other publishers are suing OpenAI for the way in which it fed massive amounts of articles into ChatGPT and its other models. In that case, which a judge in March ruled could head to trial, attorneys for the Times argue both that OpenAI scoured its archive without payment, and that its model reproduces Times reporting in ways that are not “transformative,” as the “fair use” doctrine requires.

The Harvard Law Review noted in April that the Times is arguing the exact opposite of the case it made 24 years ago in a dispute involving freelance writers, New York Times Co. v. Tasini. The Times is now arguing for the “creative, deeply human work of journalists,” whereas in the earlier case it fought to protect its own financial interests against the copyright interests of freelancers. The Supreme Court, in an opinion written by Ruth Bader Ginsburg, ruled in favor of the freelancers, who said their copyrights had been violated when the Times and other publications fed their work into databases devoid of the context in which it was originally written, and without compensation.

Previously: Meta’s AI Efforts Include Huge Privacy Flub; Sam Altman Says Meta’s Been Trying to Poach OpenAI Staff

Top image: In this photo illustration, a person holds a smartphone displaying the logo of “Claude,” an AI language model by Anthropic, with the company’s logo visible in the background, illustrating the rapid development and adoption of generative AI technologies, on December 29, 2024 in Chongqing, China. Artificial Intelligence (AI) has become a cornerstone of China’s strategic ambitions, with the government aiming to establish the country as a global leader in AI by 2030. (Photo illustration by Cheng Xin/Getty Images)



