I’ve recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
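As a rough back-of-envelope for that size argument (every number here is my own illustrative assumption, not a measurement of any real model):

```python
# Back-of-envelope: could a model store its training data verbatim?
# All figures are illustrative assumptions, not measurements.
params = 7e9            # assumed model size: 7 billion parameters
bits_per_param = 16     # assumed 16-bit weights
model_bytes = params * bits_per_param / 8   # ~14 GB of weights

train_tokens = 2e12     # assumed ~2 trillion training tokens
bytes_per_token = 4     # rough average for English text
train_bytes = train_tokens * bytes_per_token  # ~8 TB of text

ratio = model_bytes / train_bytes
print(f"model is ~{ratio:.2%} the size of its training data")
```

Under these assumptions the weights amount to well under 1% of the training text, so verbatim storage of the whole corpus is impossible even with perfect compression; only unusually repeated passages risk being memorized.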
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone’s data to reproduce their “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all the hate for using public data to build a “statistical” model that “learns” general patterns is coming from.
I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.
Agree for these reasons:
Legally: It’s always been legal (in the US, at least) to relay the ideas in a copyrighted work. AI might need to get better at providing a bibliography, but that’s likely a courtesy more than a legal requirement.
Culturally: Access to knowledge should be free. It’s one of the reasons public libraries exist. If AI can help people gain knowledge more quickly and completely, it’s just the next evolution of the same idea.
Also Culturally: Think about what’s out on the internet. Millions of recipes, no doubt copied from someone else, with pages of bullshit about how the author “grew up on a farm that produced Mojitos”. For decades now, “content creators” have gotten paid for millions of low-quality clickbait articles. There’s that. Most of the real “knowledge” on the internet is freely accessible technical / product documentation, forum posts like StackOverflow, and scientific studies. All of it is stuff the authors would probably love to have out there and freely accessible. Sure, some accidental copyright infringement might happen here and there, but I think it’s a tiny problem in relation to the value that AI might bring society.