edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.
> We carve out a space for "experimentation" to inform future revisions to this policy.
Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.
> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
This section is an extremely useful reference
https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...
It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyway:
> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...
Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?
The Linux policy on this is far superior and more sensible.
Explicit permission can be useful to preemptively cut off some questions from well-meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly / "obvious" the question might seem), or get agitated by misconstruing an all-banned list as an overly verbose "no LLMs ever" overreach.
> It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]
Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts.
What?
I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.
What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.
Imagine if they just said "LLMs are banned": there would be a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").
I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.
This is not very different from the Linux kernel's policy so it's an odd comparison. It's actually almost identical in practical terms.
edit: lol proof that this doc needs to be stupidly explicit is in the pudding with the HN comments going out of their way to radically misread it
> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.
If they get swamped with 100 bug reports that turn out, after investigation, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.
An LLM-generated bug report that pretends to be human-created would be abusing that presumption of validity, and is therefore considered a dick move.
That's in the "allowed with caveats" section. It's just saying to not open bug reports without first reading them yourself or your bug may be closed. No one is saying "by policy we will have to add the bug back in" jesus christ
The policy is insanely straightforward, idk how you can be misinterpreting it this badly. It's just "Disclose that you use a model, you are on the hook for reviewing model output as a human" and then some clear cut examples.
Will it fix a related but different problem? Likely.
Before you knee jerk hate on the team for being luddites, consider:
1. For a language like Rust, there are too few eyes and too many mouths. Reviewing is a job, and is extremely taxing.
2. The codebase needs to be highly hermetic because it’s load-bearing across the global economy.
3. Most changes are only relevant if they’ve followed extensive process, including community feedback.
> People must be vouched for before interacting with certain parts of a project (the exact parts are configurable to the project to enforce).
https://github.com/mitchellh/vouch
I think many projects will adopt this instead of allowing everyone / blocking everyone
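To make the vouching idea concrete, here is a minimal sketch of that kind of gate as a CI check. This is not mitchellh/vouch's actual config or API; the file name, fields, and paths below are hypothetical, purely to illustrate "protected paths require a vouched author":

```python
# Hypothetical vouching gate: exit non-zero if an unvouched author touches a protected path.
# The vouched.json format below is made up for illustration, not mitchellh/vouch's real format.
import json
import sys

def is_allowed(author: str, changed_paths: list[str], vouch_file: str = "vouched.json") -> bool:
    """vouched.json (hypothetical):
      {"protected": ["compiler/", "library/core/"], "vouched": ["alice", "bob"]}
    """
    with open(vouch_file) as f:
        cfg = json.load(f)
    touches_protected = any(
        path.startswith(prefix) for path in changed_paths for prefix in cfg["protected"]
    )
    # Anyone may touch unprotected areas; protected areas require an explicit vouch.
    return not touches_protected or author in cfg["vouched"]

if __name__ == "__main__":
    author, *paths = sys.argv[1:]
    sys.exit(0 if is_allowed(author, paths) else 1)
```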
Many projects have "ai slop" check in place to directly close and ban user if it is "ai slop". Else, it will be hard to handle the velocity of PRs
I don't know if keeping your name/face secret is still acceptable? Maybe tiers of devs (anon vs. other) for that one?
This policy does not seem to forbid vibe coding?
> Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.
Vibe coding (in its original meaning) would have a hard time arguing it's of high quality.
At this point they will probably just fork yet again and maintain some vibe compiler.
Given what you’ve said above, it would be an easy task ‘accelerating quality and features exponentially’, so you’ll soon be able to show them (perhaps within days!) the error of their ways.
Please go do it now, we’ll wait.
But I believe that is not the reason Rust adopted this policy; I think they just have a more basal and subjective dislike of AI, irrespective of whatever truth you may have just cited.
Rust is already well past 1.0. At best an LLM could discover a vulnerability (and the human using it can file a patch) or can help a human improve ergonomics.
The main problem is that the problem space is vast and highly interconnected: the LLM needs to reason about the entire language every time it suggests an architectural change, but it can't, so it suggests local changes that make sense to me - a language hobbyist - then runs into much more difficult problems down the road.
Maybe Mythos with a lot of (competent) human hand-holding and pre-design can do it.
I sure hope so. I expect the end result will disprove the following:
> The Rust team will never be able to catch up to them
The AI jackasses have been braying in this key for going on a few years now, and there hasn't been one single time any of this breathless noise has resulted in something meaningfully superior. It's time to put up or shut up. Enough bullshit talk. If you can vibeslop a better Rust (or whatever), JFDI and leave everyone behind.
What do you think is stopping anyone from starting a fork right now? Is it a licensing issue?
Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a man - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...