I was doing this for fun, sharing it in the hope that someone would find it useful, but sorry. The well is poisoned now, and I don't want my outputs to be part of that well, because anything put out with good intentions is turned into more poison for future generations.
I'm tearing the banners down and closing the doors. Mine is a private workshop from now on. Maybe people will get some binaries in the future, but no sauce for anyone, anymore.
And my internet comments are now ... curated in such a way that I wouldn't mind models training on them.
https://maggieappleton.com/ai-dark-forest
tl;dr: If anything that lives in the open gets attacked, communities go private.
Not quite. Since it has no copyright, being machine-created, there are no rights to transfer; anyone can use it, it's public domain.
However, since it came from an LLM, yes, there's a decent chance it might be plagiarized, and you could be sued for that.
The problem isn't that it can't transfer rights; it's that it can't offer any legal protection.
Any human contributor can also plagiarize closed-source code they have access to, and they cannot "transfer" said code to an open-source project because they do not own it. So it's not clear what "elephant in the room" you are highlighting that is unique to A.I. Copyrightability isn't the issue: an open-source project can never obtain copyright of plagiarized code, regardless of whether the contributor is a human or an A.I.
As per the US Copyright Office, LLMs can never create copyrightable code.
Humans can create copyrightable code from LLM output if they use their human creativity to significantly modify the output.
https://resources.github.com/learn/pathways/copilot/essentia...
How much use do you think these indemnification clauses will be if training ends up being ruled not to be fair use?
poetic justice for a company founded on the idea of not stealing software
> If any suggestion made by GitHub Copilot is challenged as infringing on third-party intellectual property (IP) rights, our contractual terms are designed to shield you.
I'm not actually aware of a situation where this was needed, but I assume that MS might have some tools to check whether a given suggestion was, or is likely to have been, generated by Copilot, rather than some other AI.
If they wanted to, they could take that output and put you out of business, because the output is not your IP; it can be used by anybody.
Those who lived through the SCO saga should be able to visualize how this could go.
So it is said, but that'd be obvious legal insanity (i.e., hitting accept on a random PR would make you legally liable for damages). I'm not a lawyer, but short of a criminal conspiracy to exfiltrate private code under cover of the LLM, it seems obvious to me that the only person liable in a situation like that is the one responsible for publishing the AI PR. The "agent" isn't a thing; it's just someone's code.