NYT Sues OpenAI Over Use of Articles to Train ChatGPT, Alleging Harm to Content Value and Reputation
- The NYT lawsuit against OpenAI raises novel arguments about the value of its content for training AI systems and the potential reputational damage caused by AI hallucinations.
- The NYT claims its trusted, accurate content has enhanced value for training AI compared to other data sources, challenging the typical "fair use" defense.
- As a paywalled source, the NYT argues that ChatGPT's recreations of its articles cause commercial harm by denying it site visits and revenue.
- The lawsuit asserts that AI hallucinations falsely attributed to the NYT may cause reputational damage as users rely on invented summaries.
- By focusing on the special value of its data and the risks of hallucinations, the case brings new angles beyond typical copyright claims that could affect how AI models are trained broadly.