OpenAI vs. The New York Times: The Essence of the Dispute

While The New York Times has left part of the story untold, OpenAI has chosen to publicly address its clash with the publication's journalists. In a recent blog post, the AI market leader sheds light on its legal battle with the respected American daily.
In late December 2023, the NYT filed a lawsuit against OpenAI and its primary investor, Microsoft. The publication accused the AI developers of copyright infringement that, it says, caused substantial financial losses. The New York Times alleged that OpenAI and Microsoft were exploiting the labor of its journalists by training AI on the content of their articles for free.

Notably, discussing the details of an ongoing legal dispute in a corporate blog is an unusual move. Companies typically avoid public commentary in such matters, preferring to let professional lawyers handle the proceedings.

Yet, OpenAI opted for an alternative route, leveraging the situation to “clarify our business, our intent, and how we build our technology.”

It has come to light that the company strategically collaborates with news providers, including industry bodies like the News Media Alliance, which represents about 2,000 American media entities. For example, OpenAI proposed to the media that content in ChatGPT could display authorship, offering publishers new ways to connect with their audience. The developers stress that “training AI models using publicly available internet materials is fair use.” Fair use refers to conditions under U.S. law that allow copyrighted material to be used without the owner's prior consent when such use advances science, art, and the like. However, OpenAI points out that any content owner can block the chatbot from accessing their site (it claims that The New York Times did this in August 2023).

OpenAI has expressed surprise at the initiation of legal proceedings by the NYT. The company believed its negotiations with the newspaper's editorial team were constructive; the final meeting with the newspaper's journalists was held just a week before the lawsuit was filed. OpenAI was under the impression that The New York Times had agreed to allow its articles to be displayed in ChatGPT with source citations, potentially driving direct traffic to its website.

Nevertheless, OpenAI admits that the editorial team was dissatisfied with instances where the chatbot exactly quoted the newspaper's publications under certain conditions. “Along the way, they had mentioned seeing some regurgitation of their content but repeatedly refused to share any examples, despite our commitment to investigate and fix any issues,” claims OpenAI.
The developers suspect that the dispute involves older NYT publications that have been widely republished by other outlets. They also suggest that “it seems they intentionally manipulated prompts, often including lengthy excerpts of articles,” which then led to chat responses closely mirroring the original source, The New York Times website.

OpenAI emphasizes that improving the chatbot's resistance to data extraction attacks is a top priority, though the issue remains pressing. For instance, it was revealed at the end of 2023 that such information could be extracted from ChatGPT using basic prompts.

“We regard The New York Times’ lawsuit to be without merit. Still, we are hopeful for a constructive partnership with The New York Times and respect its long history, which includes reporting the first working neural network over 60 years ago and championing First Amendment freedoms,” concludes OpenAI.

The closing remarks are somewhat conciliatory. Yet it remains uncertain what exactly sparked OpenAI's strong public reaction, and the reason behind The New York Times' abrupt withdrawal from negotiations with the ChatGPT developer is equally unclear. Whatever the outcome, the upcoming court decision is expected to significantly influence the future of the AI market.

Previously, we reported that World Economic Forum experts identified artificial intelligence technology as a primary threat to global stability.