Carp is about 10 years old and has some cool demos (like SDL for gamedev).
> The key features of Carp are the following:
> * Automatic and deterministic memory management (no garbage collector or VM)
> * Inferred static types for great speed and reliability
> * Ownership tracking enables a functional programming style while still using mutation of cache-friendly data structures under the hood
> * No hidden performance penalties – allocation and copying are explicit
> * Straightforward integration with existing C code
> * Lisp macros, compile time scripting and a helpful REPL
- Cannot horizontally scroll the code snippets on the homepage when they overflow. The scroll bars appear, but swiping the snippet does nothing.
- Footer links are unresponsive (Loon, GitHub, MIT Licence links).
- On the changelog page, scrolling makes the hamburger menu hide release dates behind it.
- The hamburger close chevron looks misaligned (not sure if this was a deliberate choice).
That said, I wish that part of Loon were less coupled to the allocation model. What made you opt for mandatory manual memory management in an otherwise high-level language? And for effects?
There are two things common in language design that, honestly, strike me as unnecessary:
1. manual allocation and lifetime stacking, and
2. algebraic effects.
On 1: I think we often conflate the benefits of Rust-style mutability-xor-aliased reference discipline with the benefits of using literal malloc and free. You can achieve the former without necessitating the latter, and I think it leads to a nicer language experience.
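The point that the aliasing discipline is separable from manual allocation can be made concrete with a minimal sketch. Nothing here calls malloc or free; the borrow rules apply to stack arrays just as well as to heap data:

```rust
// Aliasing-xor-mutability: at any moment, either many shared
// borrows (&T) or exactly one exclusive borrow (&mut T) may exist.
// The discipline is orthogonal to how storage is allocated.

fn sum(xs: &[i64]) -> i64 {
    // Shared borrow: any number of readers may coexist.
    xs.iter().sum()
}

fn double_all(xs: &mut [i64]) {
    // Exclusive borrow: one writer, no readers alongside it.
    for x in xs.iter_mut() {
        *x *= 2;
    }
}

fn main() {
    let mut data = [1, 2, 3]; // plain stack array, no allocation
    let before = sum(&data);  // shared borrow ends here
    double_all(&mut data);    // exclusive borrow ends here
    assert_eq!(before, 6);
    assert_eq!(sum(&data), 12);
}
```

A language with a GC could enforce the same shared-xor-exclusive rule and keep the reliability benefits while dropping the allocation chores.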
It's just not true that GC "comes with latency spikes, higher memory usage, and unpredictable pauses" in any meaningful way with modern implementations of the concept. If anything, it leads to more consistent latency (no synchronous Drop of huge trees at unpredictable times) and better memory use (because good GCs use compressed pointers and compaction).
On 2: I get using effects for delimited continuations. But lately I've seen people using non-flow-magical effects for everything. If you need to talk to a database, pick a database interface and pass an object implementing the interface to the code that needs it. Effects do basically the same thing, but implicitly.
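The pass-an-interface alternative is just ordinary dependency injection. A minimal sketch, with `Database`, `MemDb`, and `fetch_user` all made up for illustration:

```rust
// Hypothetical example: the capability is an explicit value, not an
// ambient effect. The function's signature names what it needs, and
// the caller chooses which implementation backs it -- no search for
// an innermost handler to answer "who implements this?".

trait Database {
    fn get(&self, key: &str) -> Option<String>;
}

struct MemDb; // stand-in implementation for the sketch

impl Database for MemDb {
    fn get(&self, key: &str) -> Option<String> {
        if key == "user:1" {
            Some("alice".to_string())
        } else {
            None
        }
    }
}

fn fetch_user(db: &dyn Database, id: u32) -> Option<String> {
    db.get(&format!("user:{id}"))
}

fn main() {
    let db = MemDb;
    assert_eq!(fetch_user(&db, 1), Some("alice".to_string()));
    assert_eq!(fetch_user(&db, 2), None);
}
```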
"Oh, it's not a global. Globals are bad. Effects are typed and blend into the function signature. Totally different and non-bad."
No. Typing the effects doesn't help: oh, sure, in Koka I can say that my function's type signature includes the "database connection" effect. Okay, that's a type. Where does the value backing that type come from? Thin air? No, the value backing an effect comes from the innermost handler, the identity of which, in a large program, is going to be hard to figure out.
Like all global variables, the sorts of "effects" currently in vogue will lead to sadness at scale. Globals don't stop being bad when we call them something else: they're still bits of ambient authority that frustrate local reasoning. It's as if everyone started smoking again but called cigarettes "mist popsicles" and claimed that they didn't cause cancer.
There's no way around writing down names for the capabilities we give a program and propagating these names from one part of the program to another. Every scheme to somehow free us from this chore is just smuggling in ambient authority by another name. Ambient authority is seductive. At small scales, it's fine. Better than fine! Beautiful. Then, one day, as your program scales and its maintainership churns, you find you have no idea who implements what.
Software engineering develops antibodies against these seductions. The problem is that the antibodies are name-based, so when we dress up old, bad ideas with new names, we have to re-learn why they're bad.
P.S. You might object: "You're talking about dynamic-extent effects. What about lexically-scoped effect systems? Those fix the problems with dynamic-extent effects."
Sure. Lexical effects are better. That's why every decent language already has a "lexically-scoped effect system". It's called let-over-lambda, or if you squint, an "object". We've come full circle.
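The let-over-lambda point translates directly: a closure captures its "handler" at the binding site, so the provenance of the capability is visible in the source. A small sketch (the names are illustrative):

```rust
// "Let-over-lambda" as a lexically-scoped handler: `prefix` plays
// the role of the handler, and every call to the returned closure
// resolves it to this one lexical binding -- not to whichever
// handler happens to be innermost on the dynamic call stack.

fn make_logger(prefix: String) -> impl Fn(&str) -> String {
    move |msg| format!("{prefix}: {msg}")
}

fn main() {
    let log = make_logger("db".to_string());
    assert_eq!(log("connected"), "db: connected");
}
```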
That was basically my intent with this project, but I took the laziest way to get there lol
Fixing these, it runs mostly as advertised, but it seems to assume that one-letter types are always generic parameters, so it's impossible to (for example) generate this:
```rust
struct X;

enum A {
    P(X),
    Q
}
```
Trying this:

```
(struct X)
(enum A (P X) Q)
```
produces this:

```rust
struct X;
enum A<P, X> { Q }
```
while using a multi-letter type like `String`:

```
(enum A (P String) Q)
```

produces the expected:

```rust
enum A { P(String), Q }
```
One way to solve this would be to always require the generic annotation, and let it be empty when there are no generics, but when I tried that it did something weird:

```
(struct X)
(enum A () (P X) Q)
```
produces:

```rust
struct X;
enum A {
    _ /* List([], Some(Span { start: 54, end: 56 })) */,
    P(X),
    Q
}
```
I have no idea where the `_` and the comment came from.

[1] https://github.com/ThatXliner/rust-but-lisp/blob/70c51a107b2...
[2] https://github.com/ThatXliner/rust-but-lisp/blob/70c51a107b2...
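For what it's worth, the always-require-the-annotation idea can be sketched without any single-letter heuristic. The function below is hypothetical and not the project's actual internals; it just shows that once the generic list is explicit (even when empty), emitting the header is unambiguous:

```rust
// Hypothetical sketch: emit an enum header from an explicit generic
// list. With the list required (possibly empty), the translator never
// has to guess whether a one-letter name like `X` is a generic
// parameter or a concrete type.

fn emit_enum_header(name: &str, generics: &[&str]) -> String {
    if generics.is_empty() {
        format!("enum {name} {{")
    } else {
        format!("enum {name}<{}> {{", generics.join(", "))
    }
}

fn main() {
    // (enum A () (P X) Q)  -- empty generic list, X stays concrete
    assert_eq!(emit_enum_header("A", &[]), "enum A {");
    // (enum A (T) (P T) Q) -- T declared as a generic parameter
    assert_eq!(emit_enum_header("A", &["T"]), "enum A<T> {");
}
```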
> Everything Rust has … expressed as s-expressions. No semantic gap.
The full code is usually something like:

```rust
fn foo<F>(callback: F) where for<'a> F: ...
```
which declares a generic function `foo` taking an argument of type `F`, where `F` must be...
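For concreteness, here is one complete (if contrived) filling-in of that elided bound; the `Fn(&'a str) -> usize` choice is just one plausible example, not the original code:

```rust
// `for<'a>` is a higher-ranked trait bound: F must accept a borrow
// of *any* lifetime, not one particular caller-chosen lifetime.

fn foo<F>(callback: F) -> usize
where
    for<'a> F: Fn(&'a str) -> usize,
{
    // The callback is invoked with borrows of two different
    // lifetimes; `for<'a>` is what makes both calls legal.
    let local = String::from("local");
    callback("static") + callback(local.as_str())
}

fn main() {
    assert_eq!(foo(|s| s.len()), 11); // "static" (6) + "local" (5)
}
```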
I don't know what this is, but clearly not Lisp...
It's quite weird-looking for someone who's done any amount of lisp programming.
The first paragraph says literally that.
Can I use the amazing `rust-analyzer` LSP to get cool IDE features?
I suspect the answer is no, but these might be good further prompts to use.
Much better to give them something more M-expression styled; I think an LL(1) grammar is probably helpful in that regard.
Basically the more you can piggyback on the training data depth for algol-style and pythonic languages the better.
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Maybe one day we should add Golang or Rust to it.
It reads as "no X, no Y, just slop" to me every time.
some pre-processor that "compiles into Rust" from less awful syntax?
It's sort of, but not quite, like "El jefe"
"L rut piss"
I'm not sure I quite understand the point of your comment.
Are you implying that LLMs should be used for very hard to write code? I feel like the best use of LLMs is to automate the easy stuff so that I can focus on the hard to write stuff.
For everyone who is shaming the project for being "LLM slop": sure, but that's the reason why something like this can exist in the first place. The point isn't to be a finished, production-ready product. The point is to be an interesting work, and just a sly bit silly.
Can we please write our own READMEs before posting to HN?
I don't even feel bad saying this because clearly OP is just the front for Claude here.