I don't see why they think it would work when their patch set was rejected because it was not correct, did not go in a direction the Zig authors were interested in, and touched an area where they are already working hard on improvements. It would have been much better if the bun team joined forces and helped out instead of vibe coding a broken PoC patch that never can get merged. Compilation speed is one of Zig's current main focuses, and changing the type system to make that possible was a big part of 0.16.

Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.

reply
Side note, but I think using LLMs like this to write PoCs in existing projects is actually a good idea, to prove that whatever you had in mind is feasible and worth pouring time into. Obviously you shouldn't vibecode the entire thing once you're past that point, though...
reply
> It would have been much better if the bun team joined forces and helped out

Submitting patches is joining forces and helping out.

reply
> It would have been much better if the bun team joined forces and helped out instead of vibe coding a broken PoC patch that never can get merged

Bold of you to assume they have the expertise.

reply
Bun folks routinely contribute to WebKit, and bun itself is an incredibly impressive project, so I don't think they're lacking expertise
reply
I think they do. Building bun is a complex task, and engineers who can do that should also be able to figure out how to help out with a compiler. It is just a matter of immersing yourself in the code and being willing to put in the hours and hard work. Sure, they may not be able to help out with designing the type resolution, but there is other work that needs to be done that any skilled engineer can do.
reply
Not only because of the AI part; here's a discussion [0] about it:

[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

reply
In the context of this post, it's absolutely hilarious that they're vibe-porting their Zig codebase to Rust.

I love Rust, but you couldn't pick a language with slower compile times... XD

reply
Compiling Rust is actually quite fast in my experience. The problem with many Rust projects is that they pull in dependencies left, right, and center. Pulling in Tokio makes your project compile an entire thread management system even if you're just compiling Hello World, and simple oneliners containing macros can easily spread out into dozens of lines of code each.

Linking is also slow, and the extreme amount of metadata produced for LLVM almost serves as a benchmark for LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.

On godbolt.org, Hello World compiles and runs in about 250ms. Zig's Hello World compiles and runs in 600ms. Of course, Zig is still an unfinished language, so optimisations like these are probably not a priority yet, but when it comes to lines of code per second, the difference isn't as big as people make it out to be.

What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.

reply
>Compiling Rust is actually quite fast in my experience

I guess it's all relative.

I find Rust's compile times abhorrent, and it's objectively slower than many, many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with the amount of code.

I'd put it at a bit better than Haskell, but honestly not by much.

I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to not sit and wait for an eternity.

As a comparison my CI job for Rust takes 14m running on a 16vCPU machine while my much larger TypeScript project compiles in 1m on a 2vCPU machine. I know people that have to spend quite a lot of work on keeping compile times manageable for Rust (nix, smaller crates, aggressive caching, etc etc).

Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)

reply
deleted
reply
deleted
reply
> but were prevented from doing so because zig has a hard and fast "no AI code" rule

The patch would have been rejected either way because it was out of date and conflicted with other work going on.

reply
Makes me wonder why zig announced the strict LLM rule recently. I'm afraid one reason could be that zig doesn't want to accept code from the bun fork in the first place (because of LLM usage, deviation and other reasons)
reply
One non-obvious reason is that an important aspect of their community is to shepherd new contributors [1]. LLMs crushing everything would reduce that. More obvious is all the toil for maintainers dealing with LLM PRs (broadly, it's an issue). The Zig maintainers prefer to put their energy into improving people and fostering those relationships.

[1] https://kristoff.it/blog/contributor-poker-and-ai/

reply
It's important that developers have an accurate mental model of how things work, are structured and why.

LLMs promote a decoupling of mental models and the actual codebase.

As much as some may want to believe, just reviewing what the LLM outputs is not equivalent to thinking about implementation details, motivations, exactly how and why things are, and how and why they work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.

reply
Well said! I don't think either party is really at fault here, but if Anthropic wanted to contribute non-negligible amounts of code over time then it's an absolute dealbreaker.

Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!

reply
Yeah, I remember when the lazy bastards started writing programs using compilers instead of learning assembly language. Now I don’t have a single colleague who can write assembly. There’s whole generations now who can’t code assembly. Most don’t even know what a register is. Hope Zig holds against this latest attempt to make everyone stupid.
reply
To add to the other commenters, loads of people don’t know assembly, which speaks to the quality of the average developer. The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.
reply
I'd be very surprised if the "average" developer across the board were anything more than a JavaScript/TypeScript-only developer. I have no expectation, or really even hope, that the average developer I work with has ever written a line of assembly.
reply
>The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.

That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make much difference, because it's hard to optimize code in those languages for how the CPU and memory work.

reply
Knowing assembly in this day and age is the result of being curious and wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.

The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch, and so has enough judgement to review generated code.

Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.

reply
Knowing assembly doesn't mean you would spend your time writing assembly (i.e. being familiar with opcodes and architecture optimizations). But in the process, you get familiar with the workings of the computer hardware and the OS that sits on top of it. That is always useful knowledge, especially when dealing with binary formats and protocols, or FFI.
reply
Generating AI code/PRs is not the same as using a compiler, for at least two reasons:

- the scale of how much and how fast you can generate code with AI vs. how fast you can write code for a compiler

- the mental model of what is being generated, and how much the contributor understands and owns the generated code

reply
Using an LLM isn't analogous to using a higher level language.
reply
That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.
reply
I assume you're some sort of programmer, and I genuinely wonder how in the world someone can, in good faith, downplay non-determinism and ambiguity when talking about a programming language.

High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.

reply
So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?
reply
They are programmers when they write a prompt and get runnable code as a result, yes… but not if they ask a human to write the code, because with an intermediate, manual step between the text and the running code you don't have an automated process, and hence it's not even an application, let alone a "compiler".
reply
The main difference is that the input to an LLM is in an ambiguous language.
reply
A programming language is allowed to be ambiguous, I don’t know of a definition that excludes that!
reply
All programming languages I know of provide at least some guarantees about the program’s behavior.
reply
The language spec may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that's when you look at your compiler/interpreter docs.
reply
So is JavaScript haha.
reply
> That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic.

So it is not, by your own admission, "exactly, literally the same".

reply
Take it gently, the poor thing doesn't understand the difference between code and talking about code.
reply
Your analogy falls apart because the "lazy bastards" still knew how to program and understood the code they were working on.

Vibe-coders often don't read, let alone understand, the code they send for PRs.

reply
I don't think most JavaScript devs know how to read C code, let alone assembly, so I think the comparison is apt. Is it not?
reply
The JavaScript developers are checking in JavaScript code that they ostensibly understand. That is not the same as prompting an LLM to generate Zig that they don't understand, and expecting someone to merge it.
reply
There’s a big difference between (mostly) deterministic compiler and non-deterministic LLMs.
reply
That's a solid reason to keep LLMs away from the kind of tasks that help with onboarding. But a patch series from a competent team that changes 3000 lines should probably be evaluated on its own merits. Or at least, the collaboration-based reasons to reject AI don't apply and the real reason would be something else.

(Though I don't know if this particular patch series would get accepted on its own merits.)

reply
The recent article explained that the bun patch would have been refused on technical merits, as it's intrinsically incorrect: to work properly it would require some language changes.
reply
> patch series from a competent team that changes 3000 lines should probably be

split into a bunch of much smaller changes?

reply
I don't understand your suggestion. If you take an ugly patch series that changes 3000 lines and organize it into small quality changes, it's still a patch series that changes 3000 lines.

There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.

reply
perhaps not all of these 3000 line changes make sense?
reply
I mean, in an authoritarian system you wouldn't make a one-off exception like that.
reply
There are other reasons why a project like Zig might not want to accept LLM generated contributions.

Zig, as a programming language, has a multiplier codebase. A bug may affect a significantly larger portion of users than most libraries or binaries will, as it's a fundamental building block of everything that uses Zig. Just that could be worth the extra scrutiny on every individual commit.

There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.

reply
> has a multiplier codebase. A bug may affect a significantly larger portion of users than most libraries or binaries will

Couldn't you say exactly the same about bun?

reply
Sure, but Bun is now owned by a company whose entire shtick is creating AI models. That shifts priorities.
reply
It might be one of the reasons they want to migrate to Rust, i.e. to have the compiler handle many of these memory-related issues. Personally I've only used bun in a few personal projects, but if you check the issue reports, you'll see memory bugs being reported more often than for deno, say.
reply
The LLM rule has been a thing for a very long time at this point.
reply
>Makes me wonder why zig announced the strict LLM rule recently.

I guess there are two philosophies in software development: move fast and break things, or move at a pace that guarantees everything is rock solid.

Most commercial software vendors, Anthropic included, are taking the former path, while most infrastructure teams are taking the latter.

I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.

reply
> I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.

PostgreSQL, a famously slow and rock solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.

reply
> move fast and break things and move at a pace that guarantees everything is rock solid.

Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!

reply
Exactly, and Zig 0.16 is explicitly a release with known issues, just count the number of TODOs in the std.Io namespace.
reply
> I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.

Both appear to be[1][2]. FreeBSD doesn't have a formal policy yet, but they appear to be leaning towards admitting some degree of LLM contribution.

[1]: https://docs.kernel.org/process/coding-assistants.html

[2]: https://forums.freebsd.org/threads/will-freebsd-adopt-a-no-a...

reply
Possibly, but the Zig creator is active on Lobste.rs, where he's been vocally anti-LLM for a year now, so the timing could just be a coincidence.
reply
It's a combination of pragmatism (not wanting to wade through slop, not wanting to shove out newbie developers) and politics (usual contemporary techie progressive stuff that's now oddly anti-technology).
reply
> usual contemporary techie progressive stuff that's now oddly anti-technology

You can be against a particular technology without being "anti-technology".

See DRM/surveillance/bad self driving implementations.

reply
> usual contemporary techie progressive stuff that's now oddly anti-technology

Just because a thing exists doesn't mean you have to use it for everything. You don't use an asbestos blanket? Why are you so against asbestos?

reply
Against blankets would be even more like that argument.
reply
deleted
reply
deleted
reply
I like your username.
reply
> but were prevented from doing so because zig has a hard and fast "no AI code" rule

No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.

reply
The Zig maintainers did a pretty in-depth review of the PR, and laid out multiple technical reasons for why it would not get merged. They did not reject it simply for being vibe-coded (though that is likely the cause of it sucking).
reply
So if tomorrow the Rust team denied the "improvement" to upstream Rust, what's the next language they plan to vibe code it in?
reply
Rust is a significantly more mature language. Adoption of zig has to be done on the assumption that the language will significantly improve as your project evolves, and if those improvements don't agree with your project's goals you're in something of a lurch. Rust is basically finished, and adopting it has to be done on the assumption it won't change very much. I don't know what their initial logic for adopting zig was, but I think porting to a more mature language was inevitable, unless by some miracle zig happened to rapidly mature in exactly the direction they wanted.
reply
Javascript
reply
C obviously.
reply
I was hoping for bash, because why not. It's AI that has to write and maintain it anyway, and Anthropic employees aren't limited by the 5 hour / 7 days limits anyway, I suppose.
reply
You missed the part where everyone is going to run their own vibe-coded assembly tools [1].

So the next step will be that bun is directly re-written from scratch at every iteration; the repository will only contain the specs for the LLMs.

Locally caching the generated code will be authorized for some transition period, but as it's obviously very dangerous to let people tweak exactly what computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, thus the enactment of the companion law is an obvious go-to.

[1] https://news.ycombinator.com/item?id=47997947

reply
Democratizing knowledge btw
reply
Rust is legit one of the best languages to "vibe code" in.

The emitted code has a lower defect rate since the language incorporates strong types and built-in error handling. Other pros include native code and portability, but the downside is the compile time.
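A minimal sketch of the point above (my example, not from the thread; names are invented): fallible operations in Rust return a `Result`, so generated code cannot silently ignore the failure path the way it can in languages with unchecked errors.

```rust
use std::num::ParseIntError;

// Parsing can fail, so the signature forces every caller to confront
// the error case; the compiler rejects code that forgets to.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    // Non-numeric or out-of-range input ("abc", "70000") becomes Err.
    s.trim().parse::<u16>()
}

fn main() {
    // match must handle both arms; dropping the Err arm is a compile error.
    match parse_port("8080") {
        Ok(p) => println!("listening on port {p}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```

Whether this measurably lowers an LLM's defect rate is, as noted, hard to prove, but the compiler feedback loop is real.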

reply
Downside: CC and Codex will write, compile, and fix in a loop until they have a monstrosity, rather than designing something smarter.
reply
This could be a subjective feeling with no real data to back it up.

People say the same about Go, that its type system and limited feature set make it the most AI-friendly language, but there too it seems like a hunch rather than a proven fact.

reply
The thing is that this argument doesn't work for Go, because its type system (and the whole language, really) is much less expressive and the compiler gives a lot less feedback to the LLM. So it tends to have to write more unit tests and do more cycles of testing (and spend more tokens) to get things right.
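To illustrate the expressiveness point (a hedged sketch of mine, with invented names): a Rust enum can make invalid states unrepresentable, so the compiler, rather than a unit test, tells you when a state is unhandled.

```rust
// Each state carries only the data that is valid for it, so e.g. a
// session_id cannot exist while disconnected.
enum Connection {
    Disconnected,
    Connecting { retries: u32 },
    Connected { session_id: u64 },
}

fn describe(c: &Connection) -> String {
    // The match is checked for exhaustiveness: adding a new variant
    // later turns every unhandled use site into a compile error.
    match c {
        Connection::Disconnected => "down".to_string(),
        Connection::Connecting { retries } => format!("retrying ({retries})"),
        Connection::Connected { session_id } => format!("up, session {session_id}"),
    }
}

fn main() {
    println!("{}", describe(&Connection::Connecting { retries: 2 }));
}
```

In Go the equivalent is typically a struct with optional fields plus a switch that the compiler cannot check for completeness, which is exactly the kind of gap an LLM has to cover with tests instead.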
reply
The argument about the type system is absurd anyway. The types in a program aren't a universal vocabulary that the LLM would already know, like the words of the English language. They are unique to each program and domain, so an LLM can't be better at them.

Let me elaborate further: it's like the proficiency of LLMs in writing English vs. writing Swahili or Kurdish.

The types of a program are like Swahili or Kurdish, or even worse, because those languages at least have a sizeable chunk of the Internet and digital archives, while the types of a program are very specific to it.

reply
Studies have shown that natural human languages are all more or less equally expressive in terms of bits per second while speaking. There's lots of different ways they can be structured but they tend to follow common rules that have been well-characterized by linguists. They can be used to describe formal mathematical statements, but are not rigorously formal languages themselves.

Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.

The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.

I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.

reply
If we are gonna go down that rabbit hole, then the natural conclusion is Haskell.
reply
How good are LLMs at understanding Haskell errors and then dealing with them?

The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.

reply
Which seems pretty reasonable tbh. Claude Code is amazing with Elm in my experience.
reply
Well those people are simply wrong. Go and Rust type systems don't even remotely compare. Go types suck.
reply
Excellent comment.

As for the downside, the compile time is somewhat offset once you're using agents (and especially parallel agents) anyway: since every edit costs a round-trip API call to a third-party server, you can accept a slightly slower compile step.

reply
Even if AI had not been used, the changes would not have been upstreamed; see https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio... (tl;dr: the supposed improvements are not sound, and the zig compiler has already gotten a whole lot faster)
reply
This should be the top comment in the whole thread. AI is not the point; the PR is just not of good quality.
reply
What a sober, detailed forum post.
reply
Thanks, that is the answer.
reply
That is a devastating comment. I will now be extremely skeptical of bun.
reply
Anthropic just needs to buy Zig! Problem solved.
reply
Perfect A/B experiment opportunity. Fork Zig, call the fork Zag.

Lock the syntax/api together for a couple of years. Allow AI code in Zag.

Review after a few years, see which is better.

reply
Interesting experiment, would it actually function if Zag was syntax/api locked to Zig? I guess Zag could still have api extensions.
reply
Take off every Zig
reply
You know what you doing!
reply
Wow. That xkcd was written in 2007, and part of the dialog is "didn't that [meme] die like five years ago?" Which means All Your Base, as a meme, was already getting somewhat stale by around 2002. It's hard to believe it's been that long.
reply
...and rewrite it in rs.
reply
Yeah, now that I think about it, having a major project written in a language that doesn't accept AI contributions now owned by a major AI company was a recipe for dis... er, conflict.

I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...

reply
I see that as a win for Zig.
reply
Read the previous discussions on the topic. Your summary is a sensationalist lie, since their change was apparently a smoking pile of hot garbage, and Zig already had similar performance gains in a newer release.
reply
deleted
reply
>They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule.

And will Rust team accept their vibe coded patches?

reply
No. The Rust project developers are more lenient when it comes to developing patches with AI assistance, but the amount of leniency one receives is proportional to the amount of pre-existing trust a contributor has with the project, and every PR still has to be reviewed by an independent human. A stranger dumping a zillion lines of slop in a PR is a one-way ticket to having your PR politely closed.
reply
Very likely not, if they are of similarly low quality.
reply
> They recently tried to upstream an improvement to zig

They didn't.

reply
deleted
reply
seems easier to fork zig
reply
Then that becomes an ongoing effort. The rewrite is once. (Good idea or not)
reply
good, more reason to stay away from zig
reply
Stay away. Everyone wins.
reply
More likely it's about going with a native language that is reliable and battle-tested. Rust runs in Firefox and in production systems across major orgs, so this is not surprising.
reply