Anyone can hack up a quick PoC, even without LLMs; the hard part is writing code that is correct and maintainable.
Submitting patches is joining forces and helping out.
Bold of you to assume they have the expertise.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
I love Rust, but you couldn't pick a language with slower compile times... XD
Linking is also slow, and the extreme amount of metadata produced for LLVM almost serves as a benchmark of LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.
On godbolt.org, a Rust Hello World compiles and runs in about 250 ms; Zig's Hello World compiles and runs in 600 ms. Of course, Zig is still an unfinished language, so compile-speed optimisations are probably not a high priority yet, but when it comes to lines of code compiled per second, the difference isn't as big as people make it out to be.
What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.
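For a sense of what that restriction can look like in practice, here's a rough sketch of my own (not from PORTING.md) of handling connections with plain `std` threads instead of pulling in `tokio` and `hyper`:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    // Plain blocking listener from the standard library; no async runtime crate.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One OS thread per connection instead of tokio tasks.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            if stream.read(&mut buf).is_ok() {
                let _ = stream.write_all(b"HTTP/1.1 200 OK\r\n\r\nok");
            }
        });
    }
    Ok(())
}
```

Fewer heavyweight dependencies means fewer crates for the compiler to churn through on a clean build.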
I guess it's all relative.
I find Rust's compile times abhorrent, and it's objectively slower than many, many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with the amount of code.
I'd put it at a bit better than Haskell, but honestly not by much.
I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to avoid sitting and waiting for an eternity.
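As a rough illustration of that workaround (crate names are made up), the split usually ends up as a workspace so the members can build as separate, parallelisable units:

```toml
# Cargo.toml at the workspace root; each member is its own compilation unit,
# so cargo can build them in parallel where the dependency graph allows.
[workspace]
resolver = "2"
members = ["core", "parser", "runtime", "cli"]
```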
As a comparison, my Rust CI job takes 14 minutes on a 16-vCPU machine, while my much larger TypeScript project compiles in 1 minute on a 2-vCPU machine. I know people who have to spend quite a lot of effort keeping Rust compile times manageable (nix, smaller crates, aggressive caching, etc.).
Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)
The patch would have been rejected either way because it was out of date and conflicted with other work going on.
LLMs promote a decoupling of mental models and the actual codebase.
As much as some may want to believe otherwise, just reviewing what the LLM outputs is not equivalent to thinking through the implementation details and motivations, exactly how and why things work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make much difference, because it's hard to optimize code in those languages for how the CPU and memory work.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch, so they have enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
- the scale of how much and how fast you can generate code with AI vs. how fast you can write code for the compiler by hand
- the mental model of what is being generated and how much the contributor understands and owns the generated code
High-level languages can certainly yield inefficient code when compiled, or different code across different compilers, but they're always meant to let their users know exactly what to expect from what they put together in their programs. I've always considered this a hard fact; I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
So it is not, by your own admission, "exactly, literally the same".
Vibe-coders often don't read, let alone understand, the code they submit in PRs.
(Though I don't know if this particular patch series would get accepted on its own merits.)
split into a bunch of much smaller changes?
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
Zig, as a programming language, has a multiplier effect: a bug in it may affect a significantly larger portion of users than a bug in most libraries or binaries would, because it's a fundamental building block of everything that uses Zig. That alone could be worth the extra scrutiny on every individual commit.
There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.
Couldn't you say exactly the same about bun?
I guess there are two philosophies in software development: move fast and break things, or move at a pace that guarantees everything is rock solid.
Most commercial software, Anthropic included, is taking the former path, while most infrastructure teams are taking the latter.
I guess the Linux and FreeBSD kernels are also not accepting LLM-based contributions yet.
PostgreSQL, a famously slow-moving and rock-solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
Both appear to be[1][2]. FreeBSD doesn't have a formal policy yet, but they appear to be leaning towards admitting some degree of LLM contribution.
[1]: https://docs.kernel.org/process/coding-assistants.html
[2]: https://forums.freebsd.org/threads/will-freebsd-adopt-a-no-a...
You can be against a particular technology without being "anti-technology".
See DRM/surveillance/bad self driving implementations.
Just because a thing exists doesn't mean you have to use it for everything. You don't use an asbestos blanket? Why are you so against asbestos?
No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.
So the next step will be that Bun is rewritten from scratch at every iteration; the repository will only contain the specs for the LLMs.
Caching the generated code locally will be authorized for some transition period, but since it's obviously very dangerous to let people tweak exactly what computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, so the enactment of the companion law is an obvious go-to.
The emitted AST has a lower defect rate since it incorporates strong types and built-in error handling. Other pros include native code and portability, but the downside is compile time.
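As a sketch of what that buys you (my own example, not from the thread): the compiler refuses code that ignores a possible failure, so a generated snippet either handles the error path or doesn't build.

```rust
use std::num::ParseIntError;

// Hypothetical helper, just to illustrate how the type system forces the
// error path to be spelled out instead of silently ignored.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    // Generated code can't pretend this is infallible: the Result must be
    // matched (or propagated with `?`) before the value can be used.
    match parse_port("8080") {
        Ok(port) => println!("listening on port {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```

An LLM gets the same immediate feedback a human does: the build fails until the failure case exists in the code.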
People say the same about Go as well: that its type system and limited feature set make it the most AI-friendly language. But there too, it just seems like a hunch rather than a proven fact.
Let me elaborate further: it's like the proficiency of LLMs in writing English vs. writing Swahili or Kurdish.
The types of a program are like Swahili or Kurdish, or even worse, because those languages at least have a sizeable chunk of material on the Internet and in digital archives, while the types of a program are specific to that one program.
Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.
The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.
I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.
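A minimal, hypothetical example of encoding such a constraint in a type, so that misuse fails at compile time rather than at runtime:

```rust
// The constraint "this list is never empty" lives in the type itself, so
// both a human and an LLM get a compile/construction error instead of a
// runtime surprise later on.
struct NonEmpty<T>(Vec<T>);

impl<T> NonEmpty<T> {
    // The only way to construct the type goes through this check, so every
    // later use can rely on the invariant.
    fn new(items: Vec<T>) -> Option<Self> {
        if items.is_empty() { None } else { Some(NonEmpty(items)) }
    }

    fn first(&self) -> &T {
        // Safe: the constructor guarantees at least one element.
        &self.0[0]
    }
}

fn main() {
    let xs = NonEmpty::new(vec![1, 2, 3]).expect("non-empty");
    println!("first = {}", xs.first());
}
```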
The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.
The compile-time downside is also somewhat offset once you're using agents (and especially parallel agents) anyway: since every edit costs a round-trip API call to a third-party server, you can tolerate a slightly slower compile step.
Lock the syntax/API together for a couple of years. Allow AI code in Zag.
Review after a few years, see which is better.
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
And will the Rust team accept their vibe-coded patches?
They didn't.