- The docs.rs docs are still building, but the docs from the recent RC are available [0]
- The Slint project has an example of embedding Servo into Slint [1], which is a good example of how to use the embedding API and should be relatively easy to adapt to any other GUI framework that renders using wgpu.
- Stylo [2] and WebRender [3] have both also been published to crates.io and can be useful standalone (Stylo has actually been getting monthly releases for about a year, but we never really publicised that).
- Ongoing releases on a monthly cadence are planned.
[0]: https://docs.rs/servo/0.1.0-rc2/servo
[1]: https://github.com/slint-ui/slint/tree/master/examples/servo
[2]: https://crates.io/crates/stylo
[3]: https://crates.io/crates/webrender
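For anyone who wants to try the RC right away, a minimal Cargo.toml sketch (the version string comes from the docs.rs link above; which feature flags you need is up to your setup):

[dependencies]
servo = "0.1.0-rc2"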
git clone https://github.com/simonw/research
cd research/servo-crate-exploration/servo-shot
cargo build
./target/debug/servo-shot https://news.ycombinator.com/
Here's the image it generated: https://gist.github.com/simonw/c2cb4fcb15b0837bbc4540c3d398c...

This is the style of Rust I prefer to use. Coming from Python, TypeScript, and even Java, even this high-level Rust already yields an incredible improvement.
Yeah, that tracks, because the AI is dumb as a bag of bricks. It can apply patterns off Stack Overflow but can hardly understand the borrow checker.
Do you know if Servo is 100% Rust with no external system dependencies? (i.e., can it get away with rustls only?)
Can this do JavaScript? (Edit: Rendering SPAs / JavaScript-only UX would be useful.)
Edit 2: Can it do WebGL? Same rationale for ThreeJS-style apps and 3D renders. (This in particular is right up my use case's alley.)
It should be able to execute JavaScript, but I've seen it hit bugs on simple pages, no doubt because my vibe-coded thing is crap, not because Servo itself can't handle them.
In Rust, the chromiumoxide crate is a performant way to drive headless Chrome for screenshots: https://crates.io/crates/chromiumoxide
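A rough sketch of that flow, assuming chromiumoxide's Browser/new_page/save_screenshot API (builder details may differ between versions, so treat this as illustrative, not definitive):

use chromiumoxide::browser::{Browser, BrowserConfig};
use chromiumoxide::page::ScreenshotParams;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Launch a headless Chrome and drive its event loop in the background.
    let (mut browser, mut handler) = Browser::launch(BrowserConfig::builder().build()?).await?;
    let events = tokio::spawn(async move { while handler.next().await.is_some() {} });

    // Navigate, then write a full-page PNG to disk.
    let page = browser.new_page("https://news.ycombinator.com/").await?;
    page.save_screenshot(ScreenshotParams::builder().full_page(true).build(), "hn.png")
        .await?;

    browser.close().await?;
    events.await?;
    Ok(())
}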
Do you mind elaborating on what features are missing?
If Anthropic wants marketing for Mythos without publishing it, show us a Servo contribution log or something like that. It aligns nicely with their fundamental infrastructure-safety goals.
I'd trust that way more than x% increase on y bench.
Hire a core contributor on Servo or Rust, give them unlimited model access, and let's see how far we get with each release.
At some point security becomes: the program does the thing the human asked it to do, but the human didn't realize they didn't actually want it.
No amount of testing can fix logic bugs due to bad specification.
Each of the last 4 comments in your thread (including yours) is conflating what they mean by AI.
But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.
And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.
I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.
As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps my prompting didn't explain things well enough, but how was I to know I had failed without reading the code?
>We do not need vibe-coded critical infrastructure.
I think when you have virtually unlimited compute, it affords the ability to really lock down test writing and code review to a degree that isn't possible with normal vibe code setups and budgets.
That said for truly critical things, I could see a final human review step for a given piece of generated code, followed by a hard lock. That workflow is going to be popular if it already isn't.
Perhaps part of a complex review chain for said function that's a few hundred LLM invocations total.
So long as there's a human reviewing it at the end and it gets locked, I'd argue it ultimately doesn't matter how the code was initially created.
There's a lot of reasons it would matter before it gets to that point, just more to do with system design concerns. Of course, you could also argue safety is an ongoing process that partially derives from system design and you wouldn't be wrong.
It occurred to me there's some recent prior art here:
https://news.ycombinator.com/item?id=47721953
It's probably fair to say the Linux kernel is critical infra, or at least a component piece in a lot of it.
In the not so distant future you'll probably be one of the few who haven't had their actual coding skills atrophy, and that's a good thing.
Hiring a few core devs to work on it should be a rounding error to Anthropic and a huge flex if they are actually able to deliver.
So, should I trust an LLM as much as a C compiler?
That's not true for coding in general. The best you can do is having unreasonably good test coverage, but the vast majority of code doesn't have that.
Servo may not be the best project for this experiment, as it has a strict policy disallowing AI contributions.
It's the maintenance. The long term, slow burn, uninteresting work that must be done continually. Someone needs to be behind it for the long haul or it will never get adopted and used widely.
Right now, at least, LLMs are not great at that. They're great for quickly creating smaller projects. They get less good the older and larger those projects get.
https://x.com/mitchellh/status/2029348087538565612
Stuff like this, where these models are root-causing nontrivial large-scale bugs, is already there in SOTA.
I would not be surprised if next-generation models can both resolve those more reliably and implement the fixes better. At that point they would be sufficiently good maintainers.
They are suggesting that new models can chain multiple newly discovered vulnerabilities into RCE, privilege escalations, etc. You can't do this without larger-scope planning and understanding, not reliably.
Replicating Rust would also be a good one. There are many Rust-adjacent languages that ought to exist and would greatly benefit mankind if they were created.
I read the link twice and no AI or LLM mentioned. I don't know why people are so eager to chime in and try to steer the conversation towards AI.
It takes some time to get used to their DSL to write PDFs, but nowadays with AI that shouldn't take too long.
Not sure if it's quite as good as TeX at typesetting, but it seems good enough. When I did my thesis, TikZ was even more valuable. I don't know if there's any replacement for that.
Electron = Node.js + Chromium
Tauri = Rust + webview
Tauri has an experimental branch that uses Servo to provide a bundled webview. Currently it relies on a system-level webview: WebView2 (Edge) on Windows, WKWebView (Safari) on macOS, and webkit-gtk on Linux.
Wait, crate versions go up to 1.0?
EDIT: Sorry, while crate stability may be an interesting conversation, this isn't the place for it. But I can't delete this comment. Please downvote it. Mods feel free to delete or demote it.
If version 0.7 turned out to hit the right API and not require backward-incompatible changes, releasing a version 1.0 would still be as disruptive to your users as a major version change, and would communicate through version semantics that it is a breaking change when it isn't.
Semver declares that version 0.x is for initial development where there is no stability guarantee at all. This is the right semantics for a versioning system, but Cargo doesn't follow this part of semver. Providing stability guarantees throughout the 0.x cycle inevitably results in projects getting stuck in 0.x.
This is one of my biggest gripes with Cargo. But Rust people seem to universally consider it a non-issue so I don't think it'll ever be fixed.
That’s a feature of semver, not a bug :)
Long answer: You are right to notice that minor versions within a major release can introduce new APIs and changes, but generally they should not break existing APIs until the next major release.
However, this rule only applies to libraries after they reach 1.0.0. Before 1.0.0, one shouldn’t expect any APIs to be frozen really.
> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
Cargo is explicitly breaking with Semver by considering 0.3.5 compatible with 0.3.6.
In practice, there's no real issue with using the first non-zero component to define the group of API-compatible releases and most package managers agree on the semantics.
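Concretely, in Cargo's interpretation (hypothetical crate names; the version ranges are real Cargo semantics):

[dependencies]
foo = "1.2.3"   # caret requirement: matches >=1.2.3, <2.0.0
bar = "0.3.5"   # matches >=0.3.5, <0.4.0, so 0.3.6 is a compatible upgrade
baz = "0.0.3"   # matches only 0.0.3; with a zero minor, every release is breaking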
Eventually this will get cleared up. I'm closer than I've ever been to actually handling this, but it's been 9 years already, so what's another few months…
Nope, this is what the semver trick is for: https://github.com/dtolnay/semver-trick
TL;DR: You take the 0.7 library, release it as 1.0, then make a 0.7.1 release that does nothing other than depend on 1.0 and re-export all its items. Tada, a compatible 1.0 release that 0.7 users will get automatically when they upgrade.
Even more interesting is that you can use this to coordinate only partially-breaking changes, e.g. if you have 100 APIs in your library but only make a breaking change to one, you can re-export the 99 unbroken APIs and only end up making breaking changes in practice for users who actually use the one API with breaking changes.
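A sketch of that shim release, using a hypothetical crate name mylib (Cargo allows a crate to depend on a semver-incompatible version of itself under a renamed key, which is what makes the trick work):

# Cargo.toml of the mylib 0.7.1 shim release
[package]
name = "mylib"
version = "0.7.1"

[dependencies]
# Depend on our own 1.0 release under a renamed key.
mylib1 = { package = "mylib", version = "1.0" }

// src/lib.rs of the 0.7.1 shim: re-export everything from 1.0 so that
// types seen by 0.7 users and 1.0 users are literally the same types.
pub use mylib1::*;

For the partially-breaking case, you would re-export the 99 unchanged items by name instead of using the glob.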
Such a stupid state of affairs.
If you didn't want people to depend on your package (hence the word "dependency") then why release it? If your public interface changes, bump that major version number. What are you afraid of? People taking your project seriously?
1.x communicates (to me at least) you are pretty happy with the current state of the package and don't see any considerable breaking changes in the future. When 2.x comes around, this is often after 1.x has been in use for a long time and people have raised some pain points that can only be addressed by breaking the API.
Because of this comment: "The project is still in development; it might be stable enough for use in 'real projects(tm)', but it might also still significantly change." That describes every project. Every project is always in development. Every project is stable until it isn't. And when it isn't, you bump the major number.
Nobody cares that Chrome's major version is 147.
By releasing a library with version 0.x, I communicate: "I consider this project to be under initial development and would advise people not to depend on it unless you want to participate in its initial development."
I don't understand why people find this difficult or controversial.
For example, sometimes projects that have a 0.y version get depended on a lot, and so moving to 1.0.0 can be super painful. This is the case with the libc crate in Rust, for which the 0.1.0 -> 0.2.0 transition was super painful for the ecosystem. Even though it should be a 1.0.0 crate, it is not, because the pain of causing an ecosystem split isn't considered worth the version-number change.
The only time you run into a problem is if you try to use a value whose type comes from 0.1 with a function that takes the 0.2 version of that type as an argument, or whatever. Then you get a type error.
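A sketch of that failure, with a hypothetical crate geom pulled in at both versions via Cargo's dependency renaming:

# Cargo.toml (sketch)
[dependencies]
geom01 = { package = "geom", version = "0.1" }
geom02 = { package = "geom", version = "0.2" }

// main.rs: even if Point is textually identical in both releases, the
// compiler treats the two as distinct, unrelated types.
fn area(p: geom02::Point) -> f64 {
    p.x * p.y
}

fn main() {
    let p = geom01::Point { x: 1.0, y: 2.0 };
    // error[E0308]: mismatched types -- rustc even hints that two
    // different versions of crate `geom` may be in use.
    area(p);
}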
Easy, just add bloat code so it will use 5GB of RAM by default, that's instant adoption by MS.
In other words: an internet browser.
Most other parts of Servo were not mature enough to integrate at the time Mozilla decided to end support for the project and didn't look like they would be mature enough any time soon. The DOM engine for example was in the early stages of being completely rewritten at the time because the original version had an architecture that made supporting the entire breadth of web standards challenging.
Keep in mind that you can continue adding Rust to Firefox without replacing whole components. It's not like Mozilla abandoned the idea of using more Rust in Firefox just because they stopped trying to rewrite whole components from the ground up.
Mozilla laid off the full Servo team, but never publicly announced this afaik. Wikipedia includes it here: https://en.wikipedia.org/wiki/Firefox#cite_ref-120
Ladybird, by contrast, is a developer-led open-source project that has no such constraints. They also don't have a product yet, but I'm sure the picture will be radically different in a few years.
Conway's law in action.
Not once in my career have I come across a problem that wasn't cultural. There are no purely technical problems in software. Everything can be achieved, everything can be worked around. All one needs is consensus. Enter cultural problems.
> The managers want to keep their jobs more than they want Firefox to succeed.
Coincidentally, also throughout my career, not once have I met an engineer that didn't put the entire blame on managers. Introspection really isn't our forte, is it? :)
Only recently, after it moved over to the Linux Foundation, has Servo started being actively worked on again.
For an example of what I mean, see JetCrab: https://jetcrab.com
> Complete JavaScript execution pipeline from source code parsing to bytecode execution.
So it's a bytecode interpreter, not a JIT.
It might still be production ready for a bunch of use cases. I may use it as a scripting layer for some pluggable piece of software or a game. I wouldn't consider it appropriate for a "production ready web browser" which intends to compete with Firefox and Chrome.
EDIT: Also, for some reason all its components are called v8_something? That's pretty off-putting; you can't just take another project's name like that. And from the author's Reddit comments it seems to be mostly AI slop anyway. I'm guessing Claude wrote the "production ready" part on the website; I wouldn't trust it.
We are in a thread discussing a Rust library, logically, I was referring to the current approach in GUI rendering in the Rust space (such as Tauri and Dioxus).
> and plenty of reasons to not want to use Blink/WebKit.
Such as? Can you name a few objective reasons against Blink/WebKit (the technology) that does not involve just not liking Google/Apple?
Tauri itself doesn't render web views. It uses wry under the hood. Dioxus isn't a web view at all and serves a fundamentally different purpose.
> Can you name a few objective reasons against Blink/WebKit (the technology) that does not involve just not liking Google/Apple?
If you have a cross platform application, it sucks having to worry about which features work or don't work based on which engine is available and how old it is. You also don't know if there are user scripts being injected that are affecting the experience. It's impossible to debug and many users don't even know what browser engine is being used, they just know your app doesn't work.
If you build for Servo, it works exactly the same on every platform. You could use wry and test that Edge is good on Windows, that WebKit works on the past few versions of macOS, that WebKitGTK works, and so on, or you can just use Servo.
Not to mention, Servo is probably much lighter than whatever flavor of chromium the user has installed under the hood.
As a user of a desktop environment other than gnome-shell, I only have webkitgtk-6.0 installed because I chose to install Epiphany—it’s a good proxy for testing on Safari, which Apple makes ridiculously expensive.