Bun's fork of Zig was just an unsound hack that, at best, would have produced a strictly inferior speedup compared to our current work on incremental compilation, which is already plenty usable:
- June 2025: the core team starts using it with the Zig compiler itself https://ziglang.org/devlog/2025/#2025-06-14
- April 2026: https://ziglang.org/devlog/2026/#2026-04-08
> Zig's AI stance is ridiculous & politically-motivated
Messing with our contributor pipeline is literally an issue with our business model; it doesn't get more concrete than that.
Well, presumably they want to contribute to the compiler. I know that you did not like those contributions, and that view seems entirely valid, but obviously "no AI" rules out their development model (by design, and you likely think that's good, and maybe it is!).
Not intending to defend the Bun move, but obviously a project using Zig and also using AI might feel motivated to avoid Zig, since they're ruled out as contributors.
Not sure why you're inventing a stance for me to be arguing against, when the Zig compiler stance is publicly articulated as exactly what I'm describing.
The Zig team is not that big. They don't have 200 core contributors to filter through the noise and mine PRs for "gems".
I think an outright rejection of AI contributions makes sense regardless, and it has nothing to do with politics. A Zig developer was forced into writing a long-form post to justify rejecting Bun's awful contribution (lest their PR be sullied, and then it was anyway), and the act of writing that post probably took 10 or 20x more human time and effort than Bun's contribution did. Now multiply that by 100 for every random fucking moron with an LLM submitting a contribution. That is not sustainable.

If open source maintainers of popular projects took AI PRs seriously and reviewed each one at length to conclusively determine whether it is good or bad, rejecting them would become their full-time job, and development of the project itself would stop altogether. Given that 99.99% of AI PRs are bad, it's simply not worth it. You cannot possibly expect humans to spend more time reviewing code than drive-by contributors spent generating it, especially when many of those humans are unpaid volunteers. It's an absolutely ridiculous expectation.
> An example of this is the changes to type resolution which happened in the 0.16.0 release cycle—these didn’t affect users too much, but had big implications for the compiler implementation. Before those changes, the compiler’s behavior was often highly dependent on the order in which types and declarations were semantically analyzed by the compiler. Some orders might result in successful compilation, while others give compile errors. Single-threaded semantic analysis prevented these bugs from causing user-facing non-determinism. The rewritten type resolution semantics were designed to avoid these issues, but Bun’s Zig fork does not incorporate the changes (and has not otherwise solved the design problems), which means their parallelized semantic analysis implementation will exhibit non-deterministic behavior. That’s pretty much a non-starter for most serious developers: you don’t want your compilation to randomly fail with a nonsense error 30% of the time.
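The order-dependence described in that quote can be illustrated with a toy model. This is a minimal sketch in Python, not actual Zig compiler code: the `decls` table and `analyze` function are hypothetical, standing in for declarations whose analysis only succeeds if their dependencies were analyzed first.

```python
# Toy "declarations": each entry optionally references another declaration.
# Analysis of a declaration succeeds only if its dependency was already
# analyzed, mimicking order-dependent type resolution.
decls = {
    "A": None,   # no dependency
    "B": "A",    # B depends on A
    "C": "B",    # C depends on B
}

def analyze(order):
    """Analyze declarations in the given order; return any errors."""
    done = set()
    errors = []
    for name in order:
        dep = decls[name]
        if dep is not None and dep not in done:
            errors.append(f"error: {name} resolved before its dependency {dep}")
        else:
            done.add(name)
    return errors

# Single-threaded analysis always visits declarations in one fixed order,
# so the outcome is deterministic: this order compiles cleanly.
print(analyze(["A", "B", "C"]))   # no errors

# A parallel scheduler can effectively visit them in any order, and some
# orders produce spurious errors, i.e. non-deterministic compile failures.
print(analyze(["C", "B", "A"]))   # errors
```

The point of the toy: if results depend on analysis order, single-threading hides the problem by fixing the order, while parallelism exposes it as builds that randomly fail. The rewritten type resolution semantics make the outcome order-independent instead.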
There is a reason for this: Zig is upholding quality, and they hate it.