Obviously these issues existed before AI, but they used to require active deception. Regurgitating other people's code just becomes the norm now.
It obviously depends on how powerful AI is going to become. These scenarios are mutually exclusive, because some assume that AI is actually not very powerful and others assume that it is very powerful. But I think it is not at all unlikely that at least one of them happens.
In essence, we get the output without the matching mental structures being developed in humans.
This is great if you have nothing left to learn; it's not so great if you are a newbie or have low confidence in your skills.
> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
> https://arxiv.org/abs/2506.08872
> https://www.media.mit.edu/publications/your-brain-on-chatgpt...
But in the present case the authorship is simply removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are, to some degree, lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {"), the weaker the claim of copyright infringement. But when higher-level constructs longer than a couple of lines, unique in the training set, are reproduced in the output modulo some name changes and/or language changes, that should count as automated transformation (and hence as infringing, or as creating a derivative work).
The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).
Why does everyone who hears "AI code" automatically think "vibe-coded"?