Obviously these issues existed before AI, but they required active deception before. Regurgitating other people's code just becomes the norm now.
It obviously depends on how powerful AI is going to become. These scenarios are mutually exclusive because some assume that AI will not actually be very powerful and others assume that it will be. I think one of these things happening is not at all unlikely.
In essence, we get the output without the matching mental structures being developed in humans.
This is great if you have nothing left to learn; it's not that great if you are a newbie or have low confidence in your skill.
> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
> https://arxiv.org/abs/2506.08872
> https://www.media.mit.edu/publications/your-brain-on-chatgpt...
But in the present case the authorship is just removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are to some degree lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {") the weaker the claim for copyright infringement. But higher-level constructs past a couple of lines, ones that are unique in the training set and are reproduced in the output modulo some name changes and/or language changes, should count as automatic transformation (and hence as infringing or creating a derivative work).
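The "modulo some name changes" point can be sketched concretely: if you normalize identifiers and whitespace, a renamed copy of the same construct becomes literally identical to the original. This is a toy illustration (the keyword list and renaming scheme are made-up simplifications, not how real infringement analysis works):

```python
import re

def normalize(code: str) -> str:
    """Replace identifiers with canonical placeholders so that
    renamed copies of the same code normalize identically."""
    mapping = {}

    def rename(match):
        name = match.group(0)
        # Keep language keywords as-is (toy keyword list for the example).
        if name in {"for", "if", "while", "return", "int"}:
            return name
        if name not in mapping:
            mapping[name] = f"id{len(mapping)}"
        return mapping[name]

    # Collapse whitespace differences, then canonicalize identifiers.
    code = re.sub(r"\s+", " ", code.strip())
    return re.sub(r"[A-Za-z_]\w*", rename, code)

a = "for (i = 0; i < MAXVAL; i++) { total += buf[i]; }"
b = "for (k = 0; k < LIMIT;  k++) { sum   += arr[k]; }"
print(normalize(a) == normalize(b))  # → True: the renamed loop is the same construct
```

Both loops normalize to the same canonical string, which is why name changes alone do little to hide reproduction of a unique construct.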
The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).
Why does everyone who hears "AI code" automatically think "vibe-coded"?
People are generally against change that forces them to change the way they used to do things. I'm sure most will have their reasons why they are against this particular change, but I don't think it will affect anything. The genie is out of the bottle, AI is here to stay. You either adapt or you will slowly wither away.
You missed the whole arab spring thing?
It needs to be modified by a human. No amount of prompting counts, and you can only copyright the modified parts.
Any license on "100% vibecoded" projects can be safely ignored.
I expect litigation in a few years where people argue about how much they can steal and relicense "since it was vibecoded anyway".
> In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.
[0] https://www.federalregister.gov/documents/2023/03/16/2023-05...
There are really two ways to argue this:
- Either AI exists and then it's something new and the laws protecting human creativity and work clearly could not have taken it into account and need to be updated.
- Or AI doesn't exist, LLMs are nothing more than lossily compressed models violating the licenses of the training data, their probabilistically decompressed output is violating the licenses as well and the LLM companies and anyone using them will be punished.
Ultimately LLMs (the first L stands for large and for a good reason) are only possible to create by taking unimaginable amounts of work performed by humans who have not consented to their work being used that way, most of whom require at least being credited in derivative works and many of whom have further conditions.
Now, consent in law is a fairly new concept and for now only applied to sexual matters but I think it should apply to every human interaction. Consent can only be established when it's informed and between parties with similar bargaining power (that's one reason relationships with large age gaps are looked down upon) and can be revoked at any time. None of the authors knew this kind of mass scraping and compression would be possible, it makes sense they should reevaluate whether they want their work used that way.
There are 3 levels to this argument:
1) The letter of the law - if you understand how LLMs work, it's hard to see them as anything more than mechanical transformers of existing work so the letter should be sufficient.
2) The intent of the law - it's clear it was meant to protect human authors from exploitation by those who are in positions where they can take existing work and benefit from it without compensating the authors.
3) The ethics and morality of the matter - here it's blatantly obvious that using somebody's work against their wishes and without compensating them is wrong.
In an ideal world, these 3 levels would be identical but they're not. That means we should strive to make laws (in both intent and letter) more fair and just by changing them.
You could even say it would very strongly incentivize the LLM companies to be on their best behavior; otherwise people would start revoking consent en masse and they'd have to keep training new models all the time.
If you want something more realistic, there would probably be time limits on how long they have to comply and on how much they have to compensate the authors for the time it took them to comply.
There absolutely are ways to make it work in mutually beneficial ways, there's just no political will because of the current hype and because companies have learned they can get away with anything (including murder BTW).
(Much of the apparent gain of the automatic search-copy-paste is wasted by skipping the review phase that would have been done when this was done manually, and which must then be done in a slower manner when you have to review the harder-to-understand entire program generated by the AI assistant.)
Despite the fact that AI coding assistants are copyright breaking tricks, the fact that this has become somehow allowed is an overall positive development.
The concept of copyright for programs has been completely flawed from its very beginning. The reason is that it is absolutely impossible to write any kind of program that is not a derivative of earlier programs.
Any program is made by combining various standard patterns and program structures. You can construct a derivation sequence between almost any two programs, where you decompose the first into some typical blocks, then compose the second program from those blocks while renaming all identifiers.
It is quite subjective to decide when a derivation sequence becomes complex enough that the second program should not be considered as a derivative of the first from the point of view of copyright.
The only way to avoid the copyright restrictions is to exploit loopholes in the law, e.g. if translating an algorithm to a different programming language does not count as being derivative or when doing other superficial automatic transformations of a source program changes its appearance sufficiently that it is not recognized as derivative, even if it actually is. Or when combining a great number of fragments from different programs is again not recognized as derivative, though it still kind of is.
The only reason it became possible for software companies like Microsoft or Adobe to copyright their s*t is that the software industry based on copyrighted programs was jumpstarted by a few decades of programming during which programs were not copyrighted, which could then be used as a base by the first copyrighted programs.
So AI coding agents allow you to create programs that you could not have written when respecting the copyright laws. They also may prevent you from proving that a program written by someone else infringes upon the copyright that you claim for a program written with assistance.
I believe that both these developments are likely to have more positive consequences than negative consequences. The methods used first in USA and then also in most other countries (due to blackmailing by USA) for abusing the copyright laws and the patent laws have been the most significant blockers of technical progress during the last few decades.
The most ridiculous claim about the copyright of programs is that it is somehow beneficial for "creators". Artistic copyrights sometimes are beneficial for creators, but copyrights on non-open-source programs are almost never owned by creators, but by their employers, and even those have only seldom any direct benefit from the copyright, but they use it with the hope that it might prevent competition.
And that's why copyright has exceptions for humans.
You're right copyright was the wrong tool for code but for the wrong reasons.
It shouldn't be binary. And the law should protect all work, not just creative work. Either workers would come to a mutual agreement on how much each contributed or the courts would decide based on estimates. Then there'd be rules about how much derivation is OK, how much requires progressively more compensation, and at what point the original author can plainly tell you what you may and may not do with the derivative.
It's impossible to satisfy everyone but every person has a concept of fairness (it has been demonstrated even in toddlers). Many people probably even have an internally consistent theory of fairness. We should base laws on those.
> abusing the copyright laws and the patent laws have been the most significant blockers of technical progress during the last few decades
Can you give examples?
> copyrights on non-open-source programs are almost never owned by creators, but by their employers
Yes and that's another thing that's wrong with the system, employment is a form of abusive relationship because the parties are not equal. We should fix that instead of throwing out the whole system. Copyright which belongs to creators absolutely does give creators more leverage and negotiating power.
Look, if you think I am wrong, you can surely put it into words. OTOH, if you don't think I am wrong but feel that way, then it explains why I see no coherent criticism of my statements.
The signal you’re sending is that you are not open to discussing the issue.
The playing field is level now, and corpo moats no longer exist. I happily take that trade.
They can wash the copyright by AI training, but the AIs don't get trained on closed source.
"corpo" also has a ton of patents, which still can't be AI-washed.
What will become unenforceable are Open Source Licenses exclusively, how does that make it a "level field"?
It's going to be very interesting to see 'cleanroom'-style development in the AI age, but I suspect it's not going to be such a walk in the park as some seem to think. There are just too many vested interests. But: it would be nice to see someone do a release of, say, the Oracle source code as rewritten by AI through this process, just to see how fast the IP hammer comes down on this kind of trick.
If the argument is just "They won't catch me", then yes you are correct.
But some of us are still forced to follow the law, whatever it might be.
Also: They still have patents on it.
Not to mention companies will try to mandate hardware decryption keys so the binary is encrypted and your AI never even gets to analyze the code which actually runs.
It's not sci-fi, it's a natural extension of DRM.
1) The financial aspect: As you say, more and more advanced DRM requires more and more advanced tools. Even assuming advanced AI can guide any human to do the physical part, that still means you have to pay for the hardware. And the hardware has to be available (companies have been known to harass people into giving up perfectly moral and legal projects).
2) The legal aspect: Possession of burglary tools is illegal in some places. How about possession of hacking tools? Right now it's not a priority for company lobbying, what about when that's the only way to decompile? Even today, reverse engineering is a legal minefield. Did you know in some countries you can technically legally reverse engineer but under some conditions such as having disabilities necessitating it and only using the result for personal use?[0]
3) The TOS aspect: What makes you think AI will help you? If the company owning the AI says so, you're on your own.
---
You need to understand 2 things:
- Just because something is possible doesn't mean somebody is gonna do it. Effort, cost and risk play huge roles. And that assumes no active hostile interference.
- History is a constant struggle between groups with various goals and incentives. Some people just want to live a happy life, have fun and build things in their free time. Other people want to become billionaires, dream about private islands, desire to control other people's lives and so on. People are good at what they focus on. There's perhaps more of the first group but the second group is really good at using their money and connections to create more money and connections which they in turn use to progress towards their primary objectives, usually at the expense of other people. People died[1] over their right to unionize. This can happen again.
Somebody might believe historical people were dumb or uncivilized and it can't happen today because we've advanced so much. That's bullshit. People have had largely the same wetware for hundreds of thousands of years. The tools have evolved but their users have not.
[0]: https://pluralistic.net/2026/03/16/whittle-a-webserver/ - "... aren't tools exemptions, they're use exemptions ... You have that right. Your mechanic does not have that right."
[1]: https://en.wikipedia.org/wiki/Pinkerton_(detective_agency)
AI proponents completely ignore the disparity of resources available to an individual and a corporation. If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time. Or maybe at least 1000:1 if you're an optimist.
They have access to more money for advertising, they have an already established network of existing customers, they have legal and marketing experts on payroll. Or just look at Microsoft, they don't even need advertising, they just install their product by default and nobody will even hear about mine.
Not to mention, as you said, the training advantage only goes from open source to closed source, not the other way around.
AI proponents who talk about "democratization" are nuts, it would be laughable if it wasn't so sad.
As a person who works for a company with 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people will want much faster than a company with 1k, 5k, or 25k people.
Bigger companies are constrained by internal processes, piles of existing stuff, an inability to hire at the scale they need, and larger required context. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.
Besides that, your whole argument hinges on large companies being inflexible, inefficient, and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course at that point the only employees will be manual workers, because actual AI will be much better and cheaper than every human at everything except those things where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done with AI instead of humans; the human will only sign off on it for legal requirements, just like today you technically enter a contract with a representative of the company who is not even there when you talk to a negotiator.
If/when superhuman AI is achieved, those limitations will all go away. An owner will just give it money and control and tell it to optimize for more money or political power or whatever he wants.
That's a much scarier future than a paperclip maximizer because it's much closer and it doesn't require a complete takeover first; it'll be just business as usual, except somehow more sociopathic.
Nitpicking on the license here, but please don't use MIT, it has no patent grant protections.
And those are never covered in any AI-washing anyway.
There are equivalent licenses with patent grant protection, like 'Apache2+LLVM exception' or 'Mozilla Public License 2' and others...
You cannot keep a purely legally-enforced moat in the face of advancing technology.
In the USA the DMCA can make it illegal to even own and use tools meant to bypass even the weakest of protection.
This law has already been used to ruin lives.
"They might catch the individual but not us all" is nice and fine until it is your turn, so check your legislation.
IP law means nothing once tens of millions of people are openly violating it.
The software industry is about to learn this lesson too.
Uhm... yes? The cost of downloading pirated music is essentially zero. The only reason why people use services like Spotify is because it's extremely cheap while being a bit more convenient. But jack up the price and the masses will move to sail the sea again.
That is not necessarily true, depending on the level of enforcement and the availability of opportunities to steal.
> Same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.
Netflix is still pretty cheap for the convenience it provides. Again, jack up the price and see the masses move to torrent movies/shows again.
Yet.
A whole bunch of people I watch on youtube (politics, analysts, a weatherman) are already seeing AI impersonation videos, sometimes misrepresenting their positions and identities. This will grow.
So, you can't create art because that's extruded at scale in such a way that it's just turning on the tap to fill a specified need, and you can't be a person because that can also be extruded at scale pretty soon, either to co-opt whatever you do that's distinct, or to contradict whatever you're trying to say, as you.
As far as being a person able to exist and function through exchanging anything you are or anything you do for recompense, to survive, I'm not sure that's in the cards. Which seems weird for a technology in the guise of aiding people.
As far as I know that has only been decided in the US so far, which is far from the whole world.
Everything else is various shades of "No, unless a human modified it"
edit: https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
How am I gonna prove I did?
They can just generate the same code with an AI assistant, and then you cannot claim that their code infringes the copyright you claim for the code you wrote with assistance.
So neither of the two parties that have used an AI assistant is able to prevent the other party from using the generated code.
I consider this a rather good outcome and not a disadvantage of using AI assistants. However, this may be construed as a problem by the stupid corporate lawyers who insist that any product of the company must use only software IP that is the property of the company.
Lawyers of this kind are encountered in many companies, and they are the main reason for the low software productivity that was typical in many places before the use of AI assistants.
I wonder how many of those lawyers have already understood that this new fashion of using AI is incompatible with their mandated policies, which have always been the main blocker against efficient software reuse.
Who can prove that I didn't write the code myself? And if I did, how am I to prove it?
That goes in both directions.
It's not like there is a watermark in the code telling the whole wide world that this was AI generated or human made.
So I write code (with or without an AI assistant) and claim copyright... they generate the same code. I sue them.
How does any of us prove that we wrote the code by hand?
It’s weird how people on HN state legal opinion as fact… e.g. if someone in the Philippines vibecodes an app and a person in Ecuador vibecodes a 100% copy of the source, what now?
Model outputs are not copyrightable at all, only human work. That means the prompt, and whatever modifications done to output by human, are copyrighted, but nothing else.
HOWEVER, that does not mean the output cannot violate copyright. Output of the model falls under the same "derivative work" rules as anything else; AI just can't add its own "authorship". So if you, accidentally or not, recover the script for a movie with the serial numbers filed off, then it's a derivative work, etc. Same with code.
Everywhere else in the world is in various shades of "No, unless a human modified it"
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
There's a threshold where, if you modify it enough, it is no longer recognizable as a modification of the original and you might get away with it, unless you confess what process you used to create it.
This is different to learning from the original and then building something equivalent from scratch using only your memory without constantly looking back and forth between your copy and the original.
This is how some companies do "clean room reimplementations": one team looks at the original and writes a spec; another team, which has never seen the original code, implements an entirely standalone version.
And of course there are people who claim this can be automated now[0]. This one is satire (read the blog) but it is possible if the law is interpreted the way LLM companies work and there are reports the website works as advertised by people who were willing to spend money to test it.
[0]: https://malus.sh/
These sorts of things are almost never tested legally and it seems even less likely now.
Plenty see {{some_woodworker}} as a traitor for this policy and will never contribute again if any clearly labeled table saw cuts is actually allowed to be used in furniture making.
A table saw isn't a probabilistic device.
Also I, a programmer, can immediately see whether the "probabilistic device" generated code that looks like it should.
Both just let me get to the same result faster with good enough quality for the situation.
I can grab a tape measure or calipers and examine the piece of wood I cut on the table saw and check if it has the correct measurements. I can also use automated tests and checks to see that the code produced looks as it should and acts as it should.
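The tape-measure analogy maps directly onto automated tests: the same behavioral check applies whether a human or a model wrote the function. A minimal sketch (the `clamp` function and its spec are made up for illustration):

```python
def clamp(value, lo, hi):
    """Clamp value into [lo, hi]. Could be hand-written or
    AI-generated; the checks below don't care which."""
    return max(lo, min(value, hi))

# The 'calipers': assertions that hold regardless of authorship.
assert clamp(5, 0, 10) == 5    # in range: unchanged
assert clamp(-3, 0, 10) == 0   # below range: clipped to lo
assert clamp(42, 0, 10) == 10  # above range: clipped to hi
print("all checks passed")
```

The point of the analogy is that the spec ("fits in [lo, hi]") is checked on the artifact itself, just like measuring the cut piece of wood rather than auditing the saw.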
If it looks like a duck and quacks like a duck... Do we really need to care if the duck was generated by an AI?
I highly doubt that.
Empirical studies show that humans have very little effect on error rates when reviewing code. That effect disappears quickly the more code you read.
Most programmers are bad at detecting UB and memory ownership and lifetime errors.
A piece of wood that comes off the table saw is either cut right or it's not.
Code is far more complex.
And this is why we have languages and tooling that takes care of it.
There's only a handful of people who can one-shot perfect code in a language that doesn't guard against memory ownership or lifetime errors every time.
But even the crappiest programmer has to actively work against the tooling in a language like Rust to introduce ownership issues. Add linters, formatters, and unit tests on top of that and it becomes nigh-impossible.
Now put an LLM in the same position: it's also unable to produce shitty code when the tooling prevents it from doing so.
These tools are nothing alike and the reductionism of this metaphor isn’t helpful.
Maybe someone bumped the fence while you were on a break, or its vibration caused the jig to get a bit out of alignment.
The basic point is that whether a human or some kind of automated process, probabilistic or not, is producing something you still need to check the result. And for code specifically, we've had deterministic ways of doing that for 20 years or so.
As with LLMs, where careless use results in you dropping prod db or exposing user data.
The worst part about all reactionary scares is that, because the behaviors are driven by emotion and feeling as opposed to any intentional course of action, the outcomes are usually counter productive. The current AI scare is exactly what you would want if you are OpenAI. Convince OSS, not to mention "free" software people, to run around dooming and ant milling each other about "AI bad" and pretty soon OSS is a poisonous minefield for any actual open AI, so OSS as a whole just sabotages itself and is mostly out of the fight.
I'm currently in the middle of trying to blow straight past this gatekeepy outer layer of the online discourse. What is a bit frustrating is knowing that while the seed will find the niches and begin spreading through invisible channels, in the visible channels, there's going to be all kinds of knee-jerk pushback from these anti-AI hardliners who can't distinguish between local AI and paying Anthropic for a license to use a computer. Worse, they don't care. The social psychosis of being empowered against some "others" is more important. Either that or they are bots.
And all of this is on top of what I've been saying for over a year. VRAM efficiency will kill the datacenter overspend. Local, online training will make it so that skilled users get better models over time, on their own data. Consultative AI is the future.
I have to remind myself that this entire misstep is a result of a broken information space, late-stage traditional social, filled with people (and "people") who have been programmed for years on performative clap-backs and middling ideas.
So fortunate to have some life-before-internet perspective to lean back on. My instinct and old-world common sense can see a way out, but it is nonetheless frustrating to watch the online discourse essentially blinding itself while doubling down on all this hand-wringing to no end, accomplishing nothing more than burning a few witches and salting their own lands. You couldn't want it any better if you were busy entrenching.
The Linux Foundation itself is just one big, woke, leftist mess, with CV-stuffers from corporations in every significant position.
The rest of the world looks on in wonder at both sides of this.