> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.
He may have made it known to people not old enough to have lived through the old days, but this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing on game consoles.
> To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.
— https://caseymuratori.com/blog_0001
Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?
Win32 GUI common controls are a pretty thin layer over GDI and you can always take over WM_PAINT and do whatever you like.
If you make your own control you must handle WM_PAINT, which seems pretty immediate to me.
https://learn.microsoft.com/en-us/windows/win32/learnwin32/y...
The difference between a game engine and, say, GDI is just the window buffer invalidation: WM_PAINT is not called for every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of the screen refresh rate.
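For the Rust stacks that come up later in the thread, the same distinction shows up in winit. A rough sketch, assuming the winit 0.29-style closure event loop: handle RedrawRequested only when the window is invalidated (WM_PAINT-style), or request a redraw on every loop iteration to get a game-style per-frame loop.

```rust
use winit::{
    event::{Event, WindowEvent},
    event_loop::EventLoop,
    window::WindowBuilder,
};

fn main() {
    let event_loop = EventLoop::new().unwrap();
    let window = WindowBuilder::new().build(&event_loop).unwrap();
    event_loop
        .run(move |event, elwt| match event {
            // Analogous to WM_PAINT: fired when the OS (or our own request) invalidates the window.
            Event::WindowEvent { event: WindowEvent::RedrawRequested, .. } => {
                // draw the frame here
            }
            // This is the game-engine difference: requesting a redraw whenever the loop
            // goes idle turns it into a per-frame render loop. Drop this arm and you only
            // repaint when the window is actually invalidated.
            Event::AboutToWait => window.request_redraw(),
            Event::WindowEvent { event: WindowEvent::CloseRequested, .. } => elwt.exit(),
            _ => {}
        })
        .unwrap();
}
```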
I guess I think of retained vs immediate at the graphics library / driver level, because that allows the GPU to take over more and store the objects in VRAM and redraw them. At the GUI level that's just user-space abstractions over the rendering engine, but the line is blurry.
Handling WM_PAINT is no different from something like OnPaint() on a base class.
This was actually one of the mindset shifts when moving from MS-DOS to Windows graphics programming.
The canvas API in the browser is immediate mode, driven by events such as requestAnimationFrame.
If you do not draw in WM_PAINT, Windows will not redraw any state within your control on its own.
GDI is most certainly an immediate mode API, and if you have been around since the DOS days you would remember using WM_PAINT to write a game-loop renderer before Direct2D existed on Windows. Remember BitBlt for off-screen rendering with GDI in WM_PAINT?
https://learn.microsoft.com/en-us/windows/win32/direct2d/com...
Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.
When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
I don't understand this part of your comment, it seems like you're replying to some other comment or something not in my comment. How am I overcorrecting? A statement of fact, that game developers didn't invent these things even though that's a common belief, is not an overcorrection. It's just a correction.
My bad. I think we're aligned on the history; I was making a point about why they're prominent advocates today (and why people are attributing invention to them) even though they didn't invent the concepts.
It seems like much of the shade is tossed at web front-end, as if it's the only other domain of computing besides game dev.
You're right that HFT, large-scale backend, and real-time systems care deeply about performance, often with far more money at stake.
But those domains are rare. The vast majority of software development today can genuinely throw hardware or money at problems (even HFT and large backend systems). Backends are usually designed to scale horizontally, data science rents bigger GPUs, embedded gets more powerful SoCs every year. Most developers never have to think about cache lines because their users have fast machines and tolerant expectations.
Games are one of the few consumer-facing domains that can't do this. We can't mandate hardware (and attempts at doing so cost sales and attract community disgust), we can't hide latency behind async, and our users immediately notice a 5ms hitch. That creates different pressures: we're optimising for the worst case on hardware we don't control whilst most of the industry optimises for the common case on hardware they choose.
You're absolutely right that we're often ignorant of advances elsewhere. But the economic constraint is real, and it's increasingly unusual.
A browser like Chrome also rests on a rendering engine like Skia, which has been optimized to the gills, so performance can at least in theory be fast.
Then one tries to host static files on an Express web server, and is surprised to find that a powerful computer can only serve files at 40 MB/s with the CPU at 100%.
I would like to think that a 'Faustian deal' in terms of performance exists - you give up 10, 50, or 90% of your performance in exchange for convenience.
But unfortunately experience shows there's no such thing; arbitrarily powerful hardware can be arbitrarily slow.
And while you contrast gamedev with other domains that get to hide latency: I don't think it's OK that a simple three-column gallery page takes more than a second to load; people merely tolerate this, they don't enjoy it.
And ironically I find that a lot of folks end up optimizing their React layouts way more than what it'd have cost to render naively with a more efficient toolkit.
I am also not sure what advances game dev is missing out on. I guess devs are somewhat more reluctant to write awful code in the name of performance nowadays, but I'd love to hear what gamedev could learn from the broader software world.
The TL;DR of what I wanted to say is that I wish there were a linear performance-convenience scale, where we could pick a certain point, use techniques conforming to it, and trade two thirds of the max speed for dev experience, knowing our performance targets allow for that.
But unfortunately that's not how it works: if you choose convenience over performance, your code is going to be slow enough that users will complain, no matter what hardware you have.
But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.
In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.
I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.
[0] https://youtu.be/rX0ItVEVjHc?si=v8QJfAl9dPjeL6BI
Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...
That's exactly what they did :D
> Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.
https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph...
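For what it's worth, here's what the DoButton() style from that quote looks like in egui (one of the Rust immediate-mode libraries discussed further down). A minimal eframe sketch, not anyone's production code:

```rust
use eframe::egui;

// Minimal immediate-mode sketch: the button "exists" only because we draw it
// every frame, and the click is returned from the same call rather than being
// delivered via a callback on a separately created widget.
fn main() -> Result<(), eframe::Error> {
    let mut count = 0u32;
    eframe::run_simple_native("do-button demo", Default::default(), move |ctx, _frame| {
        egui::CentralPanel::default().show(ctx, |ui| {
            if ui.button(format!("Clicked {count} times")).clicked() {
                count += 1;
            }
        });
    })
}
```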
[Edit: I'll add an update to the post to note that Casey Muratori simply “coined the term” and that the technique itself predates his video.]
And you will see which information is more accurate.
Yes, he coined the term rather than invent the technique
It was a swinging pendulum. At first everything was immediate mode because video RAM was very scarce. Initially there was only enough VRAM for the frame buffer, and hardly any system RAM to spare. But once both categories of RAM started growing, there was a movement to switch to retained mode UI frameworks. It wasn’t until the early 00’s that GPUs and SIMD extensions tipped the scales in the other direction - it was faster to just re-render as needed rather than track all these cached UI buffers, and allowed for dynamic UI motifs “for free.”
My graying beard is showing though, as I did some game dev in the late ’90s on 3Dfx hardware, and learned UI programming on Win95 and System 7.6. Get off my lawn.
I also came to a similar endpoint when building out a fairly large GUI application using egui. While egui solves the "draw widgets" part of building out the application, inevitably I had to restructure my app entirely with a new architecture to make it maintainable. In many places the "immediate" nature of the GUI mutably editing the state was no longer an advantage. Not to mention that UI code I wrote 6 months ago became difficult to read, especially if there was advanced layout happening.
Ultimately I've boiled my choices down to:
- egui for practicality but you pay the price in architecture + styling
- iced for a nice architecture but you have to roll all your own widgets
- Slint, maybe one day once they make text rendering a higher priority, but even then the architecture side is not solved for you either
- tauri/dioxus/electron if you're not a purist like me
- Rewind 20 years and use Qt/WPF/etc.
Down the stack, low-level 3D acceleration is in a rough spot too unfortunately. The canonical Rust Vulkan wrapper (Ash) hasn't cut a release for nearly two years, and even git main is far behind the latest spec updates.
IIRC there is another raw Vulkan library that just generates bindings and stays up to date, but that comes with its own issues.
WGPU + Winit + egui + egui component libs is its own joy of compatibility, but anecdotally they have been updating in reasonable sync. Things can get out of hand if you wait too long between updates, though!
https://github.com/vulkano-rs/vulkano/blob/master/Cargo.toml...
Maybe that's so they can interop with other crates which use Ash's types?
The C++ equivalent, Vulkan-Hpp[2], follows extremely closely behind. Plus, ash isn't just an FFI wrapper; it does quite a bit of RAII-esque state and function pointer management that is generally required for Vulkan.
[1]: https://github.com/KhronosGroup/Vulkan-Docs/blob/main/xml/vk...
The thing you get by using an OS widget and putting a string in it is that the OS can interact with the string. It can read it out loud, translate it, fill it in with a password, look it up in a dictionary, edit it right to left, handle input method editors whose hot keys are in conflict with the app doing its own editing, etc…
There’s a reason why the most popular ImGUIs are targeted at game dev tools and in-game dev UIs, not end-user UIs.
You could potentially make an immediate mode GUI that wrapped a retained GUI; arguably that is what React is. From the programmer’s POV it’s supposed to look like ImGUI code all the way down. It runs into the issue of having to keep two representations in sync (the UI represented by React and the actual widgets, HTML or native), and that’s where all its complications come from.
[Note that Tritium at least is translated into a number of different languages. That part isn't that hard.]
and this: https://lord.io/text-editing-hates-you-too/
Those are both things most ImGUIs ignore. And even if you pick some library that somehow handles the first, you're left with all of the issues mentioned above.
To be clear, if I were writing a devtool (and I am, actually) I'd reach for an ImGUI (and I did). But I'd be unlikely to use one for a user-facing tool.
To be honest, I've been (slowly) working towards my own native GUI library, in C. It's a big undertaking, but one saving grace is that --- at least on my part --- I don't need the full featureset of Qt or similar.
My plan for the portability issue is to flip the script --- make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU). And on Android/iOS/etc, I can already do native anyway.
Though I should add that a native look is not a goal in my case (quite a few libraries already go for that, go use those! --- and some, like Windows, don't really have a native look), which also means that I don't have to use native widgets on e.g. Android. The main reason for using DOM on the web is to be able to provide for a more "web-like" experience, to get e.g. text selection working properly, as well as IME, easier debuggability, and accessibility (an explicit goal, though not a short-term one --- in part due to a lack of testers). Though it wouldn't be too much of a stretch to allow either canvas or DOM on the web at that point --- by treating the web the same as a native platform in terms of displaying the widgets.
It's more about native performance, low memory use, and easy integration without a scripting engine inbetween --- with a decent API.
I am a bit on the fence between an immediate-mode and a retained-mode API. I'll probably do a semi-hybrid, where it's immediate-y but with a way to explicitly provide "keys" (kind of like Flutter, I think?).
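For the explicit-keys idea, egui might be a useful reference point: widget identity is derived from the call site by default, and you push an explicit ID when that breaks down (e.g. the same code running for every item in a loop). A rough sketch of that pattern, not a proposal for your API:

```rust
use eframe::egui;

// "Immediate-mode plus explicit keys": each row pushes an explicit ID so that
// stateful widgets (collapsing headers, text fields, ...) keep their own state
// even though identical code runs for every item, every frame.
fn item_list(ui: &mut egui::Ui, items: &[String]) {
    for (i, item) in items.iter().enumerate() {
        ui.push_id(i, |ui| {
            ui.collapsing(item.as_str(), |ui| {
                ui.label("details go here");
            });
        });
    }
}
```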
A mature high-quality GUI with support for all the features of a modern desktop UI, accessibility, support for all the display variations you encounter in the wild, high quality rendering, high performance, low overhead, etc. is a development task on par with creating a mature game engine like Unity.
Nearly all open source GUI projects get 80% of the way there and stall, not realizing that they are only 20% of the way there.
Then you start to think about full Unicode support, right-to-left rendering, and so on. Then you start to think about properly implementing accessibility features. The necessary work increases by an order of magnitude. And it's not fun work. So you stall out with a bare-bones implementation.
Do you have a source for this?
I started writing a program that needed to have a table with 1 million rows. This means it needs to be virtualised. Pretty common in GUI libraries. The only Rust GUI library I found that could do this easily was gpui-component (https://github.com/longbridge/gpui-component). It also renders text crisply (rules out egui), looks nice with the default style (rules out GTK, FLTK, etc.), isn't web-based (rules out Dioxus), was pretty easy to use and the developers were very responsive.
Definitely the best option today (probably the first option that I haven't hated in some way). The only other reasonable choices, I would say, are:
* egui - doesn't render very nicely and some of the APIs are amateurish, but it's quick and it works. Good option for simple tools.
* Iced - looks nice and seemed to work fairly well. No virtualised lists though.
* Slint (though in some ways it is weird and it requires quite a lot of boilerplate setup).
All the others will cause you pain in some way. I think the "ones to watch" are:
* Makepad - from the demos I've seen this looks really cool, especially for arty GUI projects like synthesizers and car UIs. However it has basically no documentation so don't bother yet.
* Xilem - this is an attempt to make a 100% perfect Rust GUI library, which is cool and all, but I imagine it also will never be finished.
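Regarding the virtualised 1M-row table mentioned above: in the immediate-mode libraries this mostly amounts to laying out only the visible row range each frame. egui exposes it as ScrollArea::show_rows (egui was ruled out above for text rendering, but it's the same basic idea gpui-component's table implements). A rough sketch:

```rust
use eframe::egui;

// Virtualised table sketch: with a million rows, only the slice that is
// actually visible gets laid out each frame; show_rows translates the current
// scroll offset into a row range for us.
fn big_table(ui: &mut egui::Ui, rows: &[String]) {
    let row_height = ui.text_style_height(&egui::TextStyle::Body);
    egui::ScrollArea::vertical().show_rows(ui, row_height, rows.len(), |ui, range| {
        for i in range {
            ui.label(rows[i].as_str());
        }
    });
}
```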
Beyond egui/Iced/Slint, I'd say the "ones to watch" are:
* Freya
* Floem
* Vizia
I think all three of those offer virtualized lists.
Dioxus Native, the non-webview version of Dioxus, is also nearing readiness.
Besides the virtualised lists above, another case I hit was layered images (sprites, for example). Not very hard to write my own, sure, but it’d be nice to have that out of the box as in, e.g., egui.
Focus ebbs and flows at Zed, they'll be back on it before long.
Actually, this story is literally them changing their renderer on Linux, so they are maintaining it.
> except to the extent contributions align with its business mission
Isn't that every single open source project that is tied to a commercial entity?
Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.
I could see more components being shipped first-party if the community took over gpui, or if for some crazy reason a team was funded to develop gpui full time, but developing baseline components is an immense amount of work, both to create and maintain.
Buttons (any div can be a button), clipboard, scroll views (div, list, uniform_list) should all already be in gpui.
All of those are handled. Run the "story" app. It is very impressive IMO.
Components list: https://longbridge.github.io/gpui-component/docs/components/