OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: a new research paper laid out ways in which AI developers should try to avoid revealing that LLMs have been trained on copyrighted material.
Terrible analogy.
Which one? And why exactly?
The analogy compares mixing music samples together to make new music, but that is essentially what is happening here.
The computers learn human language from the source material, but they are not referencing that material when generating responses. They create new, original responses that do not appear anywhere in the source material.
“Learn” is debatable in this usage. The model is trained on data, and training produces a set of numeric values (weights) that, when applied to an input, yield output resembling human speech. It’s just doing math, though. It doesn’t learn the way a human does; it doesn’t care about context or meaning or anything else.
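A toy illustration of that “just math” point (the weights below are made up, not from any real model): once training is done, producing an output is nothing but arithmetic on stored numbers, with no lookup into the training text.

```python
import math

# Hypothetical weights a training run might have produced:
# they map a 3-number input to 2 "token" scores. Pure data, no text.
weights = [[0.5, -1.0], [2.0, 0.25], [-0.5, 1.5]]
bias = [0.1, -0.2]

def forward(x):
    # Matrix-vector multiply plus bias: just multiplication and addition.
    logits = [sum(x[i] * weights[i][j] for i in range(3)) + bias[j]
              for j in range(2)]
    # Softmax turns the scores into probabilities over the two "tokens".
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(forward([1.0, 0.0, 2.0]))  # two probabilities that sum to 1
```

Nothing in that computation references the source material; whether that settles the copyright question is, of course, the argument in this thread.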
Okay, but in the context of this conversation about copyright, I don’t think the learning part matters as much as the reproduction part.