Post by: Bianca Suleiman
Photo: Reuters
In an important case for the artificial intelligence (AI) world, a U.S. federal judge has decided that the AI company Anthropic did not break copyright laws when it trained its chatbot, Claude, using millions of books. However, the company will still face trial later this year for how it got those books — by downloading them from illegal websites known as “shadow libraries.”
Judge William Alsup, a federal judge in San Francisco, issued his ruling on Monday. He said that the way Anthropic used the books was allowed under U.S. copyright law. This is because the AI system didn’t copy the books word-for-word, but instead used the ideas to create new and different content. He said this kind of use is “transformative,” meaning it changes the original in a big way.
“Like a person who reads books to learn how to write better, Anthropic’s AI model used books not to copy them but to create something new,” Judge Alsup wrote.
Still, the judge didn’t agree with everything Anthropic did. He said the company had no right to use pirated copies of books. A trial is now scheduled for December to decide if Anthropic is guilty of stealing those books.
Three authors — Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson — filed a lawsuit against Anthropic last year. They said the company stole their hard work and tried to make money from it. They accused Anthropic of “large-scale theft” and said it used human creativity to build its product without paying for it.
Books are very important for training AI chatbots like Claude. These books contain billions of words that help the AI learn how to write and talk like a human. To get this training data, some tech companies have used illegal websites to download free copies of books, instead of buying them.
Court documents revealed that some employees at Anthropic were worried about using these pirate sites. After these concerns, the company changed its method. Anthropic brought in Tom Turvey, who used to work at Google Books. With his help, the company started buying books in bulk. They removed the covers, scanned the pages, and then used the scanned text to train their AI.
But Judge Alsup said that buying the books later didn’t erase the fact that Anthropic had already used pirated copies. “Buying a copy of a book after stealing it first doesn’t erase the theft,” he wrote. However, the later purchases could affect how much money Anthropic might have to pay if it loses the case.
This ruling might be important for other lawsuits that are happening right now. Companies like OpenAI (maker of ChatGPT) and Meta (owner of Facebook and Instagram) are also being sued for using books and other writings without permission.
Anthropic was started in 2021 by former employees of OpenAI. It has often claimed to be a more responsible and safety-focused AI company. But the authors who filed the lawsuit said the company’s actions don’t match its promises. They believe building an AI on pirated books goes against the company’s claims of being ethical.
On Tuesday, Anthropic said it was happy the judge agreed that training AI is “transformative” and supports creativity. However, the company did not address the piracy issue in its statement.