Authors Paul Tremblay and Mona Awad have filed a lawsuit against OpenAI, accusing the company of using pirated copies of their books to train the models behind ChatGPT.
“Indeed, when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works,” the complaint reads.
Or, hear me out for a minute: what if critiques, summaries, or discussions of the works were in the training data? Unless the authors want to claim that nobody ever talks about their books on the internet…
That’s the thing with AI: unless the model creator publishes a complete breakdown of the training material (as Llama, RedPajama, or Stable Diffusion do, for example), it is basically impossible to prove what exactly is or isn’t in the training dataset.
The plaintiffs’ proof for this claim is seemingly simple: the authors never gave OpenAI permission to use their works, yet ChatGPT can provide accurate summaries of their writings, so that information must have come from somewhere.
This is exceedingly weak evidence. A model can produce a summary of a book it has never “read” by training on… other summaries.