I get the individual/corporation distinction, but how is a machine another tier here? It's a tool, it can't have any rights at all. The wielder has rights, and curtailing their rights depending on what tool they're using to exercise them seems strange. Potentially justifiable, but it's a different axis from the nature of the actor.
reply
Our positions are completely compatible. People are anthropomorphizing LLMs, arguing that because humans train on protected works, it is fine for LLMs to do the same.

If they have only the rights that their human creators have, then access to them cannot be sold, in exactly the same way that I cannot sell you a database I have filled with copyrighted material. The "humans do training too" argument only holds if you imbue LLMs with rights similar to those of humans.

I am allowed to sell myself (in a very limited capacity) to others so that they can exploit my training, even if that training was on protected material. That is a privilege humans should have but machines should not.

reply
Thing is, the degree to which LLMs compress their training set means that, under the same rules that say you cannot sell that database filled with copyrighted material, the LLM is effectively fine to sell, because you would have to be able to meaningfully trace each claimed work to the final output (the weights). For example, for an older Stable Diffusion model it was calculated that adding or removing any individual work changed the weights by about 1-2 bits, which under those same rules would not qualify the model as a derivative work.
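
A rough sanity check of that compression argument, using made-up round numbers that are only in the ballpark of Stable Diffusion 1.x and LAION-2B (the exact figures here are assumptions for illustration, not measurements):

    # Back-of-envelope upper bound on how much information a single training
    # image could possibly contribute to the weights. Round numbers only.
    params = 1.0e9            # ~1B parameters, roughly Stable Diffusion 1.x scale
    bits_per_param = 16       # fp16 storage
    training_images = 2.0e9   # roughly LAION-2B scale

    total_weight_bits = params * bits_per_param
    bits_per_image = total_weight_bits / training_images

    print(f"weights: {total_weight_bits / 8 / 1e9:.1f} GB")        # ~2.0 GB
    print(f"upper bound per image: {bits_per_image:.1f} bits")     # ~8 bits
    # Even if the weights stored nothing but training data, each image could
    # account for only a handful of bits; the measured effect of adding or
    # removing one work is smaller still.

Even as an upper bound, that is nowhere near enough to reconstruct any individual work from the weights, which is the intuition behind the 1-2 bit figure above.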

However, because this sits uneasily with the (at least historical) goals of copyright law, the pattern that is emerging is that AI is not granted copyright over any work it generates, which makes it a bit of a poison pill for some of the more egregious ideas of corporate abuse. Not sure whether the weights themselves will be considered copyrightable either.

reply