upvote
Jonathan Blow wrote his own game engine, and for that he wrote his own programming language.

He went straight to a hand-written recursive descent parser and said the same thing.

I think compiler courses teach from yacc, bison, etc., and that's where this whole thing came from, but in practice people discovered that hand-written recursive descent parsers are all you need.
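
To make the idea concrete, here is a minimal sketch of what a hand-written recursive descent parser can look like (Python here; the tiny expression grammar and all names are illustrative, not taken from any compiler mentioned in the thread):

```python
# Recursive descent parser for:
#   expr   := term (('+' | '-') term)*
#   term   := factor (('*' | '/') factor)*
#   factor := NUMBER | '(' expr ')'

def tokenize(src):
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("NUMBER", src[i:j]))
            i = j
        else:
            tokens.append((c, c))  # single-char punctuation
            i += 1
    tokens.append(("EOF", ""))
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos][0]

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    # One function per grammar rule; loops replace left recursion.
    def expr(self):
        node = self.term()
        while self.peek() in ("+", "-"):
            op = self.next()[0]
            node = (op, node, self.term())
        return node

    def term(self):
        node = self.factor()
        while self.peek() in ("*", "/"):
            op = self.next()[0]
            node = (op, node, self.factor())
        return node

    def factor(self):
        if self.peek() == "NUMBER":
            return int(self.next()[1])
        if self.peek() == "(":
            self.next()
            node = self.expr()
            assert self.next()[0] == ")", "expected ')'"
            return node
        raise SyntaxError(f"unexpected token {self.peek()!r}")

print(Parser(tokenize("1 + 2 * (3 - 1)")).expr())
```

The whole thing fits in a page, and extending it is just adding another function per rule.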

reply
> I think compiler courses teach from yacc, bison, etc., and that's where this whole thing came from

Very true. I have a shelf full of books on compiler development and optimization. I have read them selectively, a chapter here, a chapter there. But that shelf is useless for the vast majority of people.

You might find it useful if you are developing a production-level compiler/vm (I cannot make this statement with a straight face while Python rules the world). But a simple and sensible architecture that uses recursive-descent parsing takes you a long way.

Most hobbyist compilers (and even some production ones) are written as a heavy front-end compiling down to C or LLVM. Very few people actually write their own backend.

reply
> You might find it useful if you are developing a production-level compiler/vm

Not any of the ones I have worked on, nor the ones I know about: they all use hand-written parsers. In practice, error reporting and recovery tends to be tedious and/or difficult with a generated parser, which is a serious issue for practical tools.

Parsing has turned out to be simpler, in practice, than the computing pioneers expected it to be, because simpler grammars are easier for both machines and humans to reason about. Instead of using sophisticated parser generators, we just design dumb grammars: that works out better all around.
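
A classic example of "designing a dumb grammar" is avoiding left recursion, which a naive recursive descent parser cannot handle directly (the rule would call itself forever). The sketch below is illustrative EBNF, not from any particular language:

```
# Left-recursive form (loops forever in naive recursive descent):
expr := expr '+' term
      | term

# Equivalent iterative form (trivially parsable top-down):
expr := term ('+' term)*
```

The iterative form maps directly onto a function with a while loop, and it is arguably easier for humans to read too.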

reply
Yeah. I added the caveat because I haven't looked at the source of the major production compilers and didn't want to overreach. The hobbyist ones mostly stick to hand-rolled recursive descent.
reply
Re: bison and yacc. They came from the dragon book, which for the longest time was the way to learn to write languages.
reply
Yep. I started out using ANTLR for one project of mine. I ended up spending loads of time fighting its syntax to do really quite simple things, and it was slow! I probably wasn't holding it right. In the end, I wrote a simple lexer and recursive descent parser (with a small amount of lookahead) in a weekend. The code was easy to read, easy to extend, and fast.
reply
Probably the most fun I’ve had with LLMs has been slowly making a programming language as a side project.

I used to give up somewhere around the type system, too, but this time I’m approaching something vaguely useful. It even has a basic LSP.

It’s been both enjoyable and enlightening, and LLMs turn out to be an excellent pair designer as (in addition to implementation) they’re really good at summarising the impact of various decisions.

> the reason you will fail for sure, is the inability to restrict the scope of the project

This will be the reason, for sure. But then the scope of every project like this tends towards building an OS with it, then replacing every piece of software, including all embedded devices :)

reply
> slowly

I cannot do slow. It is either burn the candle at both ends, or do nothing at all.

I am using LLMs this time as well, but I spent close to 400 hours over a period of 6-7 weeks on my project before I put it to the side temporarily (got bored once the thinking part was done). About 300 of those were spent on iterating over the language and VM specs and eliminating all ambiguities and needless features. The remaining 100 were used to produce the code (the VM, the assembler and the compiler) and to repeatedly rewrite it to conform to my way of doing things.

LLMs have let me become extremely choosy about which code I am willing to keep.

reply
I've taken the approach of writing and even directly reviewing almost no code for this, otherwise I'd simply not have time for it as another side project. It's also interesting to see how far I can push this "vibe engineering" approach, and although it's not perfect, the answer is much further than I'd have expected going in.

I've managed to get OpenCode set up such that I can have a productive discussion about the design or an issue / change, then leave the LLM iterating for long periods while I do other work. It's instructed to maintain test coverage and treat quality very seriously - as a result there are over 5000 tests (some I suspect are useless...) and it's pretty rare to get a regression.

I'm pretty sure there are plenty of significant bugs and gaps, but also that once found it seems like all of them will be fixed pretty quickly by the LLM.

I just have to avoid looking at the code...

reply
Many projects wish they had a proper grammar. When a project turns useful and people want to port it, or support it on other platforms, a grammar makes that job much easier.

I am not quite sure what you mean by having a recursive descent parser, because you can write one manually, or you can generate one from a grammar, which gives you the additional benefits of having a grammar in the first place. I recommend having a grammar.

reply
I like writing parsers, and nowadays just use handwritten recursive descent functions, using a couple of simple utility functions. It is easy to reason about and flexible. I do start each parsing function with a comment stating the informal grammar the function should parse (and LLM autocomplete usually types the rest of the function).
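
The convention looks something like this (a self-contained Python sketch; the toy statement grammar, token handling and helper names are all illustrative assumptions, not anyone's actual code):

```python
# Each parse function starts with a comment giving the informal
# grammar rule it implements.

class Toks:
    """Trivial token stream over a list of strings."""
    def __init__(self, toks):
        self.toks, self.i = toks + ["EOF"], 0
    def peek(self):
        return self.toks[self.i]
    def match(self, t):
        if self.toks[self.i] == t:
            self.i += 1
            return True
        return False
    def expect(self, t):
        assert self.match(t), f"expected {t!r}, got {self.peek()!r}"

# stmt := 'print' IDENT | 'if' IDENT stmt ('else' stmt)?
def parse_stmt(ts):
    if ts.match("print"):
        return ("print", parse_ident(ts))
    ts.expect("if")
    cond = parse_ident(ts)
    then = parse_stmt(ts)
    other = parse_stmt(ts) if ts.match("else") else None
    return ("if", cond, then, other)

# IDENT := any bare word that is not a keyword
def parse_ident(ts):
    name = ts.peek()
    assert name not in ("if", "else", "print", "EOF"), "expected identifier"
    ts.i += 1
    return name

print(parse_stmt(Toks(["if", "x", "print", "y", "else", "print", "z"])))
```

The grammar comment doubles as documentation and as a prompt: an autocomplete model that has seen the rule usually produces the body correctly.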

With regard to portability: I've found cross-language parser generators especially unpleasant to work with. Instead, I just implement the parser in a language that runs on all platforms I care about.

reply
I learned to do this about 2 years ago (pre-LLM). I have been developing software for ~30 years and somehow doing something like this was a major mental obstacle, mostly created by the perception of "the dragon book", as in this topic being full of mystical unobtainable incantations, so I never even dared venture into this space. Silly, I know. However, after diving into this and learning to write a recursive descent parser for a DSL I wanted to write, it felt like I'd acquired a superpower. I totally understand that there are many more layers to all of this, layers that can get very complex, but just learning that first bit...
reply
I wish people would start with Nystrom's https://www.craftinginterpreters.com/ and avoid the dragon etc unless they really, really need it. Almost everything I have learnt about compiler/vm development, I have done so by reading random blogs and articles on various aspects and small tutorials on writing parsers and vms.

Even stuff like Crenshaw's Let's Build a Compiler was more useful to me than all these books that do lexical analysis using regular expressions. I have written lexers and parsers hundreds of times for all kinds of DSLs and config languages and not once have I used regular expressions to scan the text.

reply
Isn't using regex in this space kinda shunned, when you can easily write a grammar and parse things more reliably that way? Surprised to read that any books do that.
reply
Every single book starts with regexes and DFA/NFA for lexical analysis. Too much ceremony for something you can write in 30 minutes and 300 lines.
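
For illustration, here is what that kind of hand-rolled lexer can look like: plain character scanning, no regexes or DFA tables (Python; the token categories and keyword set are made up for the example):

```python
# Hand-rolled lexer: scan characters directly, no regex machinery.
KEYWORDS = {"if", "else", "while", "let"}  # illustrative keyword set

def lex(src):
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isalpha() or c == "_":
            j = i
            while j < len(src) and (src[j].isalnum() or src[j] == "_"):
                j += 1
            word = src[i:j]
            tokens.append(("KEYWORD" if word in KEYWORDS else "IDENT", word))
            i = j
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("NUMBER", src[i:j]))
            i = j
        elif c == '"':
            j = i + 1
            while j < len(src) and src[j] != '"':
                j += 1
            tokens.append(("STRING", src[i + 1:j]))
            i = j + 1  # skip closing quote
        else:
            tokens.append(("PUNCT", c))
            i += 1
    return tokens

print(lex('let x = 42'))
```

Real lexers add line/column tracking, escapes and multi-char operators, but the shape stays this simple.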
reply
I agree. I have written the lexer/parser for my language twice (for compiler0 and for a self-hosted compiler). It's a very dumb task requiring almost no mental load.

Profiling results show that the amount of time spent lexing/parsing is negligible - less than 1% of the total compilation time.

reply
I wrote a few of these due to an interest in compilers and hardware.

The easiest syntax to copy if you’re looking for a high level language is Smalltalk.

But most of the time, I wouldn't even use that. Simple imperative languages that look like BASIC work pretty well in most domains. If you simplify the syntax a little, it's very easy to understand the compiler and use it when, say, you want users to input code into existing systems.

reply
I have written compilers for two language families over the years: C and ML. My current preference is Python. I am currently working on a statically typed language that is inspired by Python (minus objects and OOP) and runs on a register VM.

Syntax is a minor issue but something that people are very opinionated about. You could technically build multiple front ends that share the typechecking, CFG validation, optimization, register allocation and byte code emission phases. But it is too much work for what is presently a personal project.

reply
Are they public? Can we study from them? I got into compilers late and I'm trying a little bit of everything.
reply
There are many open-source compiler and interpreter projects on GitHub.

Also:

https://github.com/BaseMax/AwesomeInterpreter

and probably there is one for compilers too.

reply