The way you should think of RL (both RLVR and RLHF) is the "elicitation hypothesis"[1]. In pretraining, models learn their capabilities by consuming large amounts of web text. Those capabilities include producing both low- and high-quality outputs (as both are present in their pretraining corpora). In post-training, RL doesn't teach them new skills (see, e.g., the "Limits of RLVR"[2] paper). Instead, it "teaches" the models to produce the more desirable, higher-quality outputs, while suppressing the undesirable, low-quality ones.
I'm pretty sure you could design an RL task that specifically teaches models to use modern idioms, either as an explicit dataset of chosen/rejected completions (where the chosen is the new way and the rejected is the old), or as a verifiable task where the reward goes down as the number of linter errors goes up.
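A minimal sketch of what the verifiable version might look like, assuming a hypothetical run_linter helper that returns a list of findings for a completion (the helper, the names, and the penalty scaling are all illustrative, not from any lab's actual setup):

    # Hypothetical reward for a "modern idioms" RL task: score a completion
    # by how few deprecated-idiom findings a linter reports on it.
    def idiom_reward(completion: str, run_linter) -> float:
        """Reward in [0, 1] that decays with each linter finding."""
        findings = run_linter(completion)  # assumed helper: list of violations
        # Fixed penalty per finding, floored at zero.
        return max(0.0, 1.0 - 0.1 * len(findings))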
I wouldn't be surprised if frontier labs have datasets for this for some of the major languages and packages.
[1] https://www.interconnects.ai/p/elicitation-theory-of-post-tr...
On Stack Overflow, data is trivial to edit, and the org (previously, at least) was open to requests from maintainers to update accepted answers with more correct information. Editing a database is trivial and cheap; editing a model is possible (harder, but doable), expensive, and a potential risk to the model owner.
And then you point out issues in a review, so the author feeds it back into an LLM, and code that looks like it handles that case gets added... while also introducing a subtle data race and a rare deadlock.
Very nearly every single time. On all models.
That's a language problem that humans face as well, which golang could stop having (see C++'s Thread Safety annotations).
Meanwhile, I copy-pasted a Python async TaskGroup example from the docs and still found that, despite using a TaskGroup (which is specifically designed to await every task and only return once all are done), it returned the instant the loop finished creating tasks, and the program exited without having done any of the work.
Concurrency woo~
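For what it's worth, the usual way that exact symptom shows up (a guess; the original snippet isn't shown) is mixing asyncio.create_task with a TaskGroup: the group only awaits tasks created through tg.create_task, so untracked tasks get cancelled when asyncio.run tears down. A minimal sketch, with illustrative work/main names:

    import asyncio

    async def work(n: int) -> None:
        await asyncio.sleep(1)
        print(f"done {n}")

    async def main() -> None:
        async with asyncio.TaskGroup() as tg:  # Python 3.11+
            for n in range(3):
                # Bug variant: asyncio.create_task(work(n)) here would NOT be
                # tracked by the TaskGroup, so the block would exit immediately
                # and asyncio.run would cancel the still-pending tasks on exit.
                tg.create_task(work(n))
        # Reaching this line means every tg-created task has completed.

    asyncio.run(main())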
Claude 4.6 has been excellent with Go, and truly incompetent with Elixir, to the point where I would have serious concerns about choosing Elixir for a new project.
For a long time my motto around software development has been "optimize for maintainability", and I'm quite concerned that in a few years this neglect of maintainability is going to hit us like a truck, the same way the off-shoring craze did: a bunch of companies will start slowly dying off as their feature velocity slows to a crawl, and a lot of products that were useful will be lost. It's not my problem, I know, but it's quite concerning.
Maybe the best way is to do the scaffolding yourself and use LLMs to fill in the blanks. That may lead to better-structured code, but it doesn't resolve the problem described above, where the model generates suboptimal or outdated code. Code is a form of communication, and I think good code requires an understanding of how to communicate ideas clearly. LLMs have no concept of that; they're just gluing tokens together. They litter code with useless comments while leaving the parts that need them most uncommented.