[0] https://ocaml.github.io/ocamlunix/index.html
[1] https://matt.might.net/articles/best-programming-languages/#...
There's also this paper, "Memory Safety in Ada, SPARK, and Rust"
https://www.adacore.com/papers/memory-safety-in-ada-spark-an...
https://blog.adacore.com/should-i-choose-ada-spark-or-rust-o...
A programming language is always going to make some kind of compromise; better at some things, worse at others.
Simplicity/power and safety pull the design in different directions.
> Beautiful is better than ugly.
> Explicit is better than implicit.
> Simple is better than complex.
> Readability counts.
What really sets Zig apart from the usual suspects (C, C++, Rust) is that it has first class compile-time reflection capabilities built into the language (comptime, @field, @typeInfo, etc.) rather than bolted on as macros or templates.
But at the same time, I'm pretty sure that smaller/simpler is going to mean less safe.
I don't think "complicated" is part of the philosophy. Rather, the idea is that a "strong" type system should ensure safety. The general consequence of this, however, is that the language becomes quite restricting and limiting. Hence the need for more complex features that allow for greater expressiveness.
> Austral is a new systems programming language. It uses linear types to provide memory safety and capability-secure code, and is designed to be simple enough to be understood by a single person, with a focus on readability, maintainability, and modularity.
I absolutely adore Ada [1], but the Pascal-like syntax isn't for everyone.
I haven't used it yet, but have been hearing rumblings about Odin [2] a fair bit in these kinds of discussions. I tend to avoid overly symbol-heavy langs, but Odin is probably still less symbol-heavy than Rust (I use a screenreader half the time).
Not saying you in particular, but it's weird how many people won't tolerate running into some problems when they knowingly chose something still in beta. Maybe that's why JBlow refuses to release his beta to the public. Actually, it can be a good thing for the language when people file bug reports (so they can get fixed) or submit a PR (if they are at that technical level).
(I have no idea what the current state is)
I think that's a fair thing to say. But in all honesty, the people pointing that out were not exactly polite about it, to put it mildly.
I never really looked at V myself and have no opinion, but I do know unpleasant behaviour when I see it, and quite a few of the people "pointing out" some of the shortcomings of V were engaging in it.
As for concerns... the main developer is a concern. It's hard to trust them to support the language well, given some of the... well, tantrums. This isn't meant as a personal attack, but it is very hard to describe their responses in another manner.
This [0] thread on HN covers some of the above.
But, probably also important to point out that V and its drama have had dang threaten to ban the topic altogether [1]. There's a lot of drama.
If, of course, by "demons" you mean people who write nicely formatted, polite, well-written responses attempting to answer your question fully, including links to further reading.
The comment wasn't about anyone in particular.
See this article and the discussions, and if you're so inclined, follow the links in that thread from last time that show folks from the vlang team flaming all over HN.
> But I've had a few issues with the safety aspects of the language. Breaking through the checker isn't that difficult.
What do you mean? Can you "break through the checker" outside of unsafe blocks?
But there was a time where autofree was just a compiler flag and wasn't actually implemented, even though the author claimed that it was.
Curious if there are any D users here who try to stay within that intersection?
The bigger problem is guaranteeing concurrency safety, which you can only provide with nonblocking IO. Everybody loves blocking calls though, so that's far away.
That depends. Is it really a hard no for GC?
Crystal is low-level enough with a GC, and people have written a complete OS in it. I'm on mobile now so I can't provide a link, but it was featured on HN a while ago.
One project I've seen that I don't think is particularly active, but whose ideas I really like, is borgo. It compiles to Go (thus GC) but is decidedly more Rustacean.
Check it out. I hope someone makes something like this widespread.
PS. I have no affiliation with borgo, just an online fan.
What intrigues me quite a bit relative to the main topic of the HN thread is Low* [0]
> Low* is not only a language subset, but also a set of F* libraries that model a curated set of C features: the C memory model, stack- and heap-allocated arrays, machine integers, C string literals, and a few system-level functions from the C standard library. Writing in Low*, the programmer enjoys the full power of F* for proofs and specifications. At compile-time, proofs are erased, leaving only the low-level code to be compiled to C. In short: the code is low-level, but the verification is not.
[0] https://fstarlang.github.io/lowstar/html/Introduction.html#t...
For memory safety, compared with Rust, it does not use a borrow checker; the price is more checks and extra memory usage at runtime, while still avoiding GC or even reference counting.
I think this is a very interesting trade-off. Plus, there are plans to implement a few interesting ideas that even Rust's type system cannot express, while keeping the language simple.
Modula-2 (e.g. GCC).
Oberon (e.g. Oberon, Oberon-2, Oberon-07, Oberon+).
How about Cyclone?
Just remember that even if writing memory-safe programs is your goal, using a memory-safe language is just a means to that goal, and even Rust isn't really memory-safe. Many Rust programs try to achieve memory safety while using a non-memory-safe language (be it C or unsafe Rust), and there's an entire spectrum of combining language features that offer sound guarantees with careful analysis and testing of unsafe code to achieve a safe program. On that spectrum, Zig is much closer to Rust than to C, even though it offers fewer guarantees than Rust (but more than C).
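To make that concrete, here's a minimal Rust sketch (my own illustration, not from any particular codebase) of one point on that spectrum: a safe public API whose tiny unsafe core is justified by a local argument and can then be exercised with tests or tools like Miri:

    // Returns the first byte of `bytes`, if any.
    // The public function is safe; the unsafe core is kept small and
    // its justification is written down next to it, which is what
    // makes auditing and testing it tractable.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: we just checked that the slice is non-empty,
            // so index 0 is in bounds.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }

    fn main() {
        assert_eq!(first_byte(b"abc"), Some(b'a'));
        assert_eq!(first_byte(b""), None);
    }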
This is primarily an educational exercise to see how people find compromises that work for them, and languages in the same space as Rust using alternative strategies.
Is this something you experience when writing your code, or is this something you experience when reading other people's code?
If it's the former, I'd really love to hear more about those experiences.
If it's the latter, are there particular features that crop up that make code feel like too much?
(To be clear, Rust isn't perfect for everyone, despite our best efforts. And if you want to work with another language, you should! I'm not looking to defend it; your experiences are valid. We'd love to make Rust better, so I didn't want to miss the opportunity to ask, because we so rarely hear from people in the intersection of "I love Rust" and "Rust is too much".)
I feel like my biggest struggle is simply how hard (tedious?) it is to properly work with generics and some more complex applications of traits. Any time I am working with (especially writing, somehow reading is easier) these, I always take an ungodly amount of time to do anything productive.
I am almost certain this is a skill issue -- I am simply not "Rusting" like I am supposed to. Maybe I overuse generics, maybe I rely on traits too much, maybe I am trying to abstract stuff away too much. But this is one of the reasons I want to explore other languages in the space, especially ones which make it impossible for me to make things so complex.
Don't get me wrong -- all of this complexity is a joy to work with when I can use my brain. But sometimes, it's just too much effort to do stuff, and it feels like I could be getting away with less. Curiously, I never had a problem with the borrow checker, even though people often complain about how much it forces them to "stop and think".
Another thing is that C, for some weird reason, always feels "lower level" than Rust, and seeing it gives me some sort of weird satisfaction that Rust does not. Maybe it's just a greener grass syndrome, but wanted to mention it nonetheless.
All this said, I just want to emphasise that, despite this shortcoming (if it even is one), if I were forced to choose a language to spend the rest of my life with, I would not be the least bit sad to only ever use Rust again. I absolutely love it.
> Maybe I overuse generics, maybe I rely on traits too much, maybe I am trying to abstract stuff away too much.
Is this related to the problem where, if you want to put a generic in a data structure (e.g. an implementation of `Write`), you find yourself propagating a generic bound up an entire hierarchy of data structures and functions using those structures?
Asking because one thing that's common amongst Rust developers is a bit of an aversion to using `dyn Trait` types, and using a `Box<dyn Write>` (for instance) in the right place can wildly simplify your program.
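For example, here's a minimal sketch (the `Logger` type is hypothetical, purely for illustration) of the two approaches:

    use std::io::Write;

    // Generic version: the `W: Write` bound leaks into every struct
    // and function signature that touches the logger, all the way up
    // the call stack.
    #[allow(dead_code)]
    struct GenericLogger<W: Write> {
        out: W,
    }

    // Boxed version: the concrete writer type is erased, so nothing
    // upstream needs a type parameter.
    struct Logger {
        out: Box<dyn Write>,
    }

    impl Logger {
        fn new(out: impl Write + 'static) -> Self {
            Logger { out: Box::new(out) }
        }

        fn log(&mut self, msg: &str) {
            // Ignore write errors for the sake of the sketch.
            let _ = writeln!(self.out, "{msg}");
        }
    }

    fn main() {
        let mut logger = Logger::new(std::io::stderr());
        logger.log("hello");
    }

The cost is one heap allocation and dynamic dispatch per call, which is rarely the bottleneck for IO-bound code.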
> But sometimes, it's just too much effort to do stuff, and it feels like I could be getting away with less.
The next times you find this feeling arising, please feel free to reach out, if you'd like; I would genuinely love to hear about it. You can reach me by DM on the Rust Zulip, or by the email in my profile.
> Another thing is that C, for some weird reason, always feels "lower level" than Rust, and seeing it gives me some sort of weird satisfaction that Rust does not. Maybe it's just a greener grass syndrome, but wanted to mention it nonetheless.
I was originally a C developer, and I can sympathize. I don't tend to crave that "lower level" feeling often, but for me, there was something satisfying about C in the old DOS days, where you could make a pointer to video memory and scribble on it.
> you find yourself propagating a generic bound up an entire hierarchy of data structures and functions using those structures
And I did exactly that. I did eventually get around to using dyn Write, but that still gave me headaches because of how I cannot use impl Write as a result type in closures, which I need to do if I want to use tracing_subscriber::fmt::with_writer() and pass in these trait objects.
Despite being this close to the solution, I somehow again wound up propagating generics back through at least four functions.
I ended up not writing any generic-based stuff and resigned myself to manually writing out each type's implementation, but I am going to tinker with this again today. Hopefully, I can use your advice.
Thank you so much for taking the time to write this. Means a lot!
> there was something satisfying about C in the old DOS days, where you could make a pointer to video memory and scribble on it
Exactly.
I hope we manage to fix that! We're working on making `impl Trait` usable everywhere a type works. I'll check where "return type of a closure" is on the `impl Trait` radar.
https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lan...
> which I need to do if I want to use tracing_subscriber::fmt::with_writer() and pass in these trait objects.
Would https://docs.rs/tracing-subscriber/0.3.19/tracing_subscriber... help you?
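In case it helps other readers: assuming that link points at something like `BoxMakeWriter` (an assumption on my part; the URL is truncated here), the idea is to erase the writer's type once, instead of threading `impl Write` through closures:

    use tracing_subscriber::fmt::writer::BoxMakeWriter;

    // Pick a destination at runtime; both branches yield the same
    // type because the concrete writer is boxed away.
    fn make_writer(to_stderr: bool) -> BoxMakeWriter {
        if to_stderr {
            BoxMakeWriter::new(std::io::stderr)
        } else {
            BoxMakeWriter::new(std::io::stdout)
        }
    }

    fn main() {
        tracing_subscriber::fmt()
            .with_writer(make_writer(true))
            .init();
        tracing::info!("logging through a type-erased writer");
    }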
> Thank you so much for taking the time to write this. Means a lot!
Thank you as well for taking the time to fill in more details!
I think, even more so, I can understand the sense of Rust feeling too large not because the language itself is, but because there are SO many of these neat tricks and little utility functions that are necessary to express what you want, and it's quite impossible to remember each and every one, or how to go back and find them later.
> Would https://docs.rs/tracing-subscriber/0.3.19/tracing_subscriber... help you?
Indeed. This is brilliant.
I have never worked with low-level languages except in some university lessons, but I think I have a good general understanding of how a CPU and memory work (just for some context).
But even for educational purposes, using a language with a poor selection of libraries is likely to lead to a bad experience if what you want to produce is working, non-trivial software. Every project includes some "boring" aspects -- such as parsing configuration and data files -- that you won't necessarily enjoy writing from scratch. The overall programming experience is shaped by much more than the design of the language alone.
One thing that's important to notice, I think, is that low-level languages' combined market share has fallen sharply since the 1970s, and it doesn't seem that the trend is about to sharply reverse direction. To me that suggests that if a low-level language wants to be more popular than C++, it should focus on low-level programming and shouldn't try to also be a good applications programming language (as C++ has tried to do that, but the overall market share of C and C++ is lower now than it was in, say, 1990), but I could be wrong about that.
[Heavy citation needed]
Rust isn't memory safe if and only if:
- You messed up writing a safe-to-unsafe interface (and forgot to sanitize your binary afterwards).
- You tripped one of the few unsoundness bugs in the compiler, which either requires washing pointers via allocators or triggering some tangle of traits that runs into compiler bugs.
- Somewhere in the dependency tree, someone did one of the other things.
That is the definition of a language not being memory-safe. Memory-safety in the language means a guarantee that the resulting program is memory safe, and that you could not mess it up even if you tried.
Taking that to the extreme, it's like saying that a C program isn't memory safe only if you mess up and have UB in your program, something that C program must not have. But C is not a memory safe language precisely because the language doesn't guarantee that. My point is that there's a spectrum here, the goal isn't memory-safety in the language but in the resulting program, and that is usually achieved by some combination of sound guarantees in the language and some care in the code. Of course, languages differ in that balance.
By your definition, no language would ever be deemed safe. Even Java/C# has to interface with C. Or you have to write bindings for C libs/kernel calls.
> But C is not a memory safe language precisely because the language doesn't guarantee that.
C isn't memory safe because it has 212 different ways to cause memory unsafety, and it only offers external runtime tools to deal with them.
I mean, Rust will never be perfect due to Rice's theorem. It doesn't have to be, either. It's as close to ideal as you can get without mandating that programmers are perfect (no language errors) or that everything be written in safe Rust (no C bindings).
This is the well-known Nirvana fallacy. E.g., if a cure doesn't cure a fatal disease in 100% of cases, why not let the disease take its course?
That's correct. My point is that even if we talk about memory safety only, languages are on a spectrum (e.g. Java programs don't need to use unsafe code as much as Rust programs), and there's always some situations where we don't rely on sound guarantees. In practice, we call languages that easily demarcate their unsafe code "safe languages".
> I mean, Rust will never be perfect due to Rice's theorem.
That's nothing to do with Rice's theorem. A language that's completely, 100% memory-safe is not hard to do, but it will need to sacrifice some things that we don't want to sacrifice even for a 100% guarantee of memory safety.
> If a cure doesn't cure a fatal disease in 100% of cases, why not let the disease take its course?
That's not a good analogy for software correctness. A better one would be that every cure has some side-effects. There are languages that, at least in principle, "cure" far, far more than Rust or even Java do, such as ATS or Idris. Why don't we always use them? Because it's hard! What we've gradually learned since the 70s (when a prevailing thought was that we'll have to use proofs to scale software) is that the cost of soundness can be high, and unsound methods (such as tests) are surprisingly effective in practice, and so "more sound guarantees" is not always the best answer. We now believe that for the best correctness per unit of effort we don't want no guarantees at all as in C, and we also don't want to guarantee everything as in ATS, but we're looking for some sweet spots in between. We haven't yet figured out what "the best" sweet spot is or, indeed, if there even is only one.
So are cars: there are cars that are a deathtrap, that send the steering wheel through your guts, and there are cars with seatbelts and airbags that won't save you from others' or your own errors, but will minimize them. By your logic, I assume you're driving a Ford Pinto (Zig: has seatbelts but will explode at the slightest touch).
You still wouldn't remove the seatbelts because they "chafe", or the brakes because they're too much useless mass.
> In practice, we call languages that easily demarcate their unsafe code "safe languages".
That's not true. See Java: it doesn't demarcate unsafe code; you just use the Unsafe package. Granted, JNI looks about as weird.
What most people in practice call safe languages are languages that preserve, or have a high likelihood of preserving (looking at Go's memory model during data races), the spatial and temporal memory-safety invariants, in the absence of implementation errors.
> That's nothing to do with Rice's theorem
Wdym? The presumption is, you don't have a runtime, and you want safety. That means static checks. That means proving properties of your code, implying Rice's theorem.
> A language that's completely, 100% memory-safe is not hard to do.
> My point is that even if we talk about memory safety only, languages are on a spectrum (e.g. Java programs don't need to use unsafe code as much as Rust programs), and there's always some situations where we don't rely on sound guarantees.
Huh? These two claims are in opposition. If it's easy (doable) to go 100% memory safe, then there should be a trivial way to always rely on memory guarantees. No need for a spectrum.
The spectrum exists because, rather than writing OSes in Ada, programmers collectively decided to bury their heads in C.
> That's not a good analogy for software correctness
It's not supposed to be a good analogy for software correctness, it was an example of Nirvana fallacy.
> There are languages that, at least in principle, "cure" far, far more than Rust or even Java do, such as ATS or Idris. Why don't we always use them?
Another Nirvana fallacy. At face value, this is an argument to move more toward Rust, ATS, Ada, and Idris than to abandon them. Or for Rust to get linear types.
That said, in practice I understand these are proof languages with no ecosystem, which might reduce the likelihood of their being used in production. But Rust has no such issues. It has a minor proof requirement (make sure your lifetimes are in agreement with the borrow model).
> Because it's hard!
If you avoid things that are hard, you'll never grow. If a programming language you learned didn't cause headaches, it's not a programming language, it's a dialect.
The use of Unsafe in Java is demarcated, and besides, it is now being removed altogether, but we have more interesting things to talk about.
> That means static checks. That means proving properties of your code, implying Rice's theorem.
It doesn't. Rice's theorem talks about the undecidability of proving some arbitrary property of arbitrary programs, but given a property, one can construct a language in which it trivially holds for all programs in the language.
This is done by finding an inductive invariant -- one that is preserved by all program operations -- that implies the property. Let me give you an example. Due to Rice's theorem, it is undecidable whether or not the output of an arbitrary program that yields a number output is even. Nevertheless, it's easy to construct a language (even a Turing-complete one) where this property always holds. We find a stronger invariant -- all numbers in the program are even -- that can be easily made inductive. Indeed, even numbers are closed under addition, subtraction, and multiplication, while for every division operation we add a runtime check that panics if the result is odd (and, of course, the compiler rejects any literal that isn't even). And voila! Not every program that yields an even output can be written in this language, but every program in this language yields an even output.
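Here's a toy Rust encoding of that construction (illustrative only: a library-level stand-in for a whole language, with the literal check moved to a constructor):

    // `Even` can only be constructed from even values, and the
    // operations it exposes are closed over even numbers, so the
    // invariant holds by induction over every program that uses it.
    #[derive(Clone, Copy, Debug)]
    struct Even(i64);

    impl Even {
        // The only constructor: rejects odd values up front.
        fn new(n: i64) -> Option<Even> {
            if n % 2 == 0 { Some(Even(n)) } else { None }
        }

        fn add(self, other: Even) -> Even { Even(self.0 + other.0) }
        fn mul(self, other: Even) -> Even { Even(self.0 * other.0) }

        // Division can break the invariant, so it gets a runtime
        // check, mirroring the "panic if the result is odd" rule.
        fn div(self, other: Even) -> Even {
            let q = self.0 / other.0;
            assert!(q % 2 == 0, "division broke the even invariant");
            Even(q)
        }
    }

    fn main() {
        let a = Even::new(4).unwrap();
        let b = Even::new(2).unwrap();
        println!("{:?}", a.add(b).mul(b)); // Even(12)
        println!("{:?}", a.div(b));        // Even(2)
    }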
To see that this is what Rust does with memory safety, note that while every program in safe Rust is memory-safe, not every memory-safe program can be written in safe Rust.
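A standard example of that gap (my illustration, not from the comment above): the two borrows below are disjoint and therefore memory-safe, but safe Rust's inductive invariant is coarser than "no aliasing at runtime", so the standard library proves the disjointness once, internally, with unsafe code:

    fn main() {
        let mut data = [1, 2, 3, 4];

        // Rejected by the borrow checker, even though the two halves
        // never alias:
        // let (a, b) = (&mut data[..2], &mut data[2..]);

        // Accepted: `split_at_mut` is a safe API whose internal
        // unsafe code establishes the disjointness the checker
        // can't see on its own.
        let (a, b) = data.split_at_mut(2);
        a[0] += b[0];
        assert_eq!(data, [4, 2, 3, 4]);
    }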
> If it's easy (doable) to go 100% memory safe, then there should be a trivial way to always rely on memory guarantees. No need for a spectrum.
The need for the spectrum arises because we want to sacrifice 100% memory-safety for other things, such as Java's ability to call C code.
> The spectrum exists because, rather than writing OSes in Ada, programmers collectively decided to bury their heads in C.
As someone who wrote safety-critical avionics software in Ada in the nineties, I can tell you that we had good reasons to largely abandon Ada.
> an example of Nirvana fallacy.
But it isn't, because you assume that a language that makes more guarantees will always produce more correct programs per unit of effort, and we know that to be false.
From the vantage point of ATS and Idris, a language like Rust is almost indistinguishable from C. There are a few more properties it guarantees than C does, but ATS/Idris can be used to guarantee any program property. If what I said is a nirvana fallacy, then you must not use Rust and always use ATS, because it is much better by your definition of better. The reason we don't is that we know more sound guarantees are not always better, but we didn't always know that.
In the seventies, software correctness researchers (such as Tony Hoare) assumed that the only path to scaling software would be soundness (proofs, and even formal proofs). As time went on, we learnt that this belief was false from both sides: unsound methods proved surprisingly effective [1], and proofs proved costly to scale, to the point that they made achieving a required level of correctness more costly.
> If you avoid things that are hard, you'll never grow. If a programming language you learned didn't cause headaches, it's not a programming language, it's a dialect.
The problem is not in learning new languages. The problem is that more sound guarantees can sometimes reduce the cost of correctness and sometimes increase it.
Don't get me wrong: Soundness and proofs sometimes work very well, but we in the software correctness/formal methods world know that things are much more complicated than "more soundness more better".
You can read some of my writings on software correctness, after a couple of decades working with formal methods here: https://pron.github.io
If you call importing a package and using it demarcated, I have a bridge and some fog underneath to sell you.
> This is done by finding an inductive invariant -- one that is preserved by all program operations -- that implies the property.
This is a strawman fallacy; we're not talking pure theoreticals. Just because you can build a model of the Empire State Building from Legos doesn't mean it will work for the real thing. Your even-number theorem prover can't write "100 beers on the wall". Or even output "Hello <name>" for an arbitrary name.
Rust, however, can maintain invariants and be expressive enough because it relies on outside the computer assistance (aka unsafe blocks).
> As someone who wrote safety-critical avionics software in Ada in the nineties, I can tell you that we had good reasons to largely abandon Ada.
As an outsider looking at commercial and military avionics failures (I can't be certain, but my guess is it seems cost-cutting on both hardware and software didn't quite pan out), I'm not sure that the right call was made.
> But it isn't, because you assume that a language that makes more guarantees will always produce more correct programs per unit of effort, and we know that to be false.
Again, [citations needed]. Google mentioned they had little problem getting people to work with Rust. The onboarding time was around 3 months.
And statements that the borrow checker is hard are no different from saying that types are hard, generics are hard, covariance and contravariance are hard, indexes starting at 0 are hard, ones and zeroes are hard. To me, these are complaints from people who didn't run the gauntlet and didn't build their "muscle memory" in programming.
> But it isn't, because
The nirvana fallacy is the informal fallacy of comparing actual things with unrealistic, idealized alternatives.[1] It can also refer to the tendency to assume there is a perfect solution to a particular problem. A closely related concept is the "perfect solution fallacy".
You are using ATS and Idris as examples, and I agree, we should use them more. In fact, Rust extensions to generate proofs are amazing in my opinion, and both of those languages are remarkable. What they lack, however, is an ecosystem. Rust has that. What does Zig give you? No ecosystem, no stability guarantees, and worse memory safety.
No, this is the technique we use in practice in formal methods to prove program properties. When we write a program correctness proof in TLA+ or Lean, this is what we do. This is also how we build invariants into languages. When you learn how to write formal proofs and design type systems, this is what you learn.
> Rust, however, can maintain invariants and be expressive enough because it relies on outside the computer assistance (aka unsafe blocks).
Rust maintains its invariants by making a stronger invariant inductive in the language.
> Google mentioned they had little problem getting people to work with Rust.
I am not talking about Rust in particular. I'm talking about your general point that more soundness is always better, and your claim that my argument is an example of a nirvana fallacy because I compare an imperfect solution to a hypothetical perfect one. I am trying to explain that we know that more soundness is sometimes worse for correctness in practice. The problem is not that more soundness isn't perfect, but that there are currently better solutions than more soundness in some situations.
You seem to be unaware of the shift in software correctness since the seventies. The gap between the size of programs we can prove correct using deductive methods (currently ~10KLOC) and the size of programs produced in industry is only growing. That is why there's been a shift from deductive proofs to methods such as concolic testing (e.g. KLEE). You can see that by looking at the programs for any software verification / formal methods conference.
To be clear, I am not saying that more soundness is never better; I'm saying it is sometimes worse. You can say that we should use ATS more, but the effort of producing software that's proven correct with ATS is not justified by the additional correctness guarantees you gain. I.e., at some point you pay $1000 for improved confidence worth $100.
A program that can't express H (72 in decimal) in "Hello world" is about as divorced from practicality as you can get, without mixing String theory, Meth, and Math.
> Rust maintains its invariants by making a stronger invariant inductive in the language.
Yes, but it relies very little on human verification.
> I'm saying it is sometimes worse.
And I'm saying that, as a field, we programmers should monotonically approach the ideal by removing languages that don't support it. A language that has memory-safety issues is worse than one that doesn't. We have hundreds of examples in practice where people just can't write/compose safe code.
C has a 50 year history and most of the active projects in it are mired in memory safety issues.
Also, seatbelts are sometimes worse; that doesn't mean we should abolish them for seatbelt-less Pintos.
Huh? I was giving an example of how we make undecidable properties trivial in languages. It can be any property. It's easy to see that Rust does the same with memory safety, i.e. through an inductive invariant, because every statement in Rust maintains the invariant.
> Yes, but it relies very little on human verification.
Of course. It shouldn't rely on human verification at all. That's how we design type systems. But because an inductive invariant is inherently conservative and must reject correct programs, there is additional human effort required -- not to verify the property, but to stay within the confines of the inductive invariant. In Rust terms, the compiler takes care of memory safety, but you need to put effort into expressing memory-safe programs in Rust so they fit within Rust's constraints, which are necessarily more restrictive than memory safety (because they must use an inductive invariant).
> And I'm saying that, as a field, we programmers should monotonically approach the ideal by removing languages that don't support it.
And I'm saying that what we've learnt over the past five decades is that more soundness does not necessarily mean getting closer to the ideal. Sometimes more soundness gets us further away from the ideal.
> Also, seatbelts are sometimes worse; that doesn't mean we should abolish them for seatbelt-less Pintos.
No one is suggesting that, it's just that what you're referring to as seatbelts could sometimes do more harm than good even for safety. We have empirical evidence that real seatbelts increase safety; things are nowhere near as clear for language soundness.
Sure, C is so exceptionally weak that we can do better than that, and there's no doubt Rust is better on correctness than C, but that's not the same as saying Rust is better on correctness than any language that makes fewer guarantees than Rust. Why? Because, again, we've seen that the benefits of soundness are not linear -- they help until they start hurting. Knowing where that point is is not easy.
C++ would be great if people didn't use C-style dereferencing or arrays, used smart pointers, and avoided "Turing-complete" template bullshit. Ironically, it would be almost as memory safe as Rust.
On the contrary, you can also make C very memory safe by avoiding a few patterns.
Yeah, and this points at a deeper issue: the concept of a language being either (binary) memory safe or not does not really make sense. Memory safety is a spectrum and most GC'd languages are safer than Rust (but you won't see a lot of "memory safety" proponents acknowledge this).
Also, there are mitigations that can make "unsafe" languages as secure as something like Rust, but those are deliberately overlooked and hence languages like Zig are constantly bashed using security as the reason.
¹https://0xc0ffee.netlify.app/osdev
I, too, have been looking for a unicorn systems programming language, but it doesn't exist yet. Here's what I've looked at so far (only languages that don't rely on a GC):
- C: lots of UB, less safe (memory- and type-wise)
- C++: too complex, not a big fan of OO
- Rust: too complex, gets in the way when writing mostly unsafe kernel code
- Zig: Good, but syntax is too noisy; lacks interfaces/dynamic dispatch
- Swift: Doesn't have a good bare metal story
- D: Seems interesting, I may look into it (although I'd need to use betterC mode, i.e. not all language features)
- Odin: Game programming focused, small but growing community
- Ada: Strong candidate, but it has too much ceremony
- Pony: I like the capabilities model, but tiny community
- Hare: Also tiny community, lacks polymorphism
- Hylo (Val): Experimental (mutable value semantics), too immature
- Vale: Experimental (regional memory management), seems stalled
- V: Good candidate, but mixed reviews on its claims vs. reality
- Austral: Experimental (linear types), tiny community, not much progress
- Jakt: Built for Serenity OS, not sure if it's getting enough attention
- Jai: Focused on game programming, good reviews, but currently closed source (beta)
- Mojo: (Python-like) Seems very interesting and I'd give it a try, but too focused on GPU/parallel programming; also too early for adoption
Oh, maybe less preprocessor black magic :/
As for other stuff, I was working on an LLVM extension a while back to put a lot of Python-isms into C, like array slicing, print statements, f-strings, and decorators.
Generally though, I found that writing your own macros is actually easy; you just have to type stuff out a bit more. In the last project I worked on in C, I had an include file with all the macros and the functions defined to go with them. The print statement honestly takes up most of the file, because it's a combination of varargs in the macro with a _Generic selector and functions defined for how to print each type of variable. But the rest of the stuff, like array slicing, is just a macro that calls a function taking the slice arguments as a string. It doesn't look as nice as actual array slicing, but it works and is easy to write.
If you want the simplicity of C with more safety, maybe try tooling like Frama-C, a MISRA C conformance checker, or just aggressive use of static and dynamic analysis tools like ASan and UBSan. You can also disable certain optimizations (e.g., strict aliasing) to steer away from some of the major pitfalls of UB.
and the related discussion https://lobste.rs/s/c3dbkh
Serious total MSLs (memory-safe languages) that have a defined memory model allowing "low level" operations seem to be scarce, if they exist at all. Hence I do not think there is any single one that comes between C/Zig and Rust.
I would have said https://www.hylo-lang.org/ but, personal opinion, seems like there is too much going on as well. See also https://vale.dev/
P.S. Mind mentioning which FP-like features are in Rust? (I genuinely have not looked into Rust that much, but I am interested.)
Then you have FreePascal, a FOSS Object Pascal dialect.
On Apple's turf, Swift naturally, given its bindings for anything Metal-related.
https://github.com/nbrk/m2-raylib
Naturally, given the way it has been largely ignored over the last 30 years, the choice is limited; yet the GCC developers considered its community big enough to integrate GNU Modula-2 as a standard frontend in GCC 14.
I don't remember specifics, but there are some odd footguns to look out for.
I am not sure, however, that I like its verbosity; if you are using 'raw' editors without snippet support, it becomes a chore very soon.
All in all it is nice to use and play around with.
ASP [0] was an old way to do character animations for the unix finger command's .plan.
I recall an HN post using Python to do the same (it handled animation independently of serial terminal speed).
Related HN post: "John Carmack's .plan file" [3]
hascii-mation [1]; use standard graphics techniques and run through [2], or .plan with unscii [4]
"sheltered code" via mindcraft : https://youtu.be/7sNge0Ywz-M
------
ascii animation tutorial: https://www.youtube.com/watch?v=o5v-NS9o4yc
[0]: Ah, the ASP link has been merged with /dev/null. :(
ASP and related things for animated .plan: https://superuser.com/questions/253308/scrolling-plan-file
[1]: https://github.com/octobanana/asciimation
[2]: https://www.geeksforgeeks.org/converting-image-ascii-image-p...
[3]: "John Carmack's .plan file": https://news.ycombinator.com/item?id=3367230
[4]: unscii: http://viznut.fi/unscii/
That's quite the oversimplification...
C# is not a systems-programming language: it's an application-programming language that is heavily dependent on the CLR runtime environment. While those research kernels certainly do bring a kind of CLR into the kernel, it's far from the CLR in .NET. And they don't use C# - at least, not the same C# you use in Visual Studio. Those research kernels (Singularity, Midori, and Verve) used not only the Sing# and Spec# extensions to C#; they also had their own compiler (Bartok), which itself enabled other language extensions.
That said, those extensions are fascinating reads:
Sing# concerns message-passing ( https://www.microsoft.com/en-us/research/wp-content/uploads/... )
Spec# added Ada-style (i.e. compiler-enforced) invariants and pre/post-conditions (this was the basis for the Code Contracts feature which was annoyingly/tragically killed-off during the .NET Core reboot in 2016); see https://www.microsoft.com/en-us/research/project/spec/
Bartok: https://www.microsoft.com/en-us/research/wp-content/uploads/...
Incorrect; the main restriction is the set of targets supported by CoreCLR/NativeAOT/Mono (to an extent).
Or, at least, among memory-safe languages with a GC, C# offers the most when it comes to low-level access.