Furthermore, it's actually kind of annoying that the LLMs are not better than us, and still benefit from having code properly typed, well-architected, and split into modules/files. I was lamenting this fact the other day; the only reason we moved away from Assembly and BASIC, with GOTOs in a single huge file, was that we humans needed the organization to help us maintain context. Turns out, because of how they're trained, so do the LLMs.
So TypeScript types and tests actually do help a lot, simply because they're deterministic guardrails that the LLM can use to check its work and be steered toward producing code that actually works.
I think strongly typed codebases sometimes have bad habits that "you can get away with" because of the typing and feedback loops; the LLM has learned those habits too.
LLMs are not really good at this. The idea that LLMs benefit from TypeScript is a case of people anthropomorphizing AI.
The kinds of mistakes AI makes are very different from ours. It's WAY better than humans at copying stuff verbatim and nailing the 'form' of the logic. What it struggles with is 'substance': it doesn't have a complete worldview, so it doesn't fully understand what we mean or what we want.
LLMs struggle more with requirements engineering and architecture, since architecture ties into anticipating how requirements will change.
I can't say whether they work better with other languages, but I can definitely say both Opus and Codex work really well with Elixir. I work on a fairly large application, and they consistently produce well-structured, working code, and are able to review existing code to find issues that are very easy to miss.
The LLM needs guidance around general patterns, e.g. "Let's use a state machine to implement this functionality", but then it writes code that uses language idioms, leverages immutability and concurrency, and is generally much better than any first pass I would do manually.
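To make that concrete, here is roughly the shape that kind of one-line nudge produces; a minimal sketch, with the Door module and its events invented for illustration:

```elixir
# Toy state machine: each clause is a (state, event) -> new-state transition.
# No mutable state anywhere; unknown transitions fall through to an error.
defmodule Door do
  @type state :: :locked | :closed | :open

  @spec handle(state, atom) :: {:ok, state} | {:error, :invalid_transition}
  def handle(:locked, :unlock), do: {:ok, :closed}
  def handle(:closed, :lock), do: {:ok, :locked}
  def handle(:closed, :open), do: {:ok, :open}
  def handle(:open, :close), do: {:ok, :closed}
  def handle(_state, _event), do: {:error, :invalid_transition}
end
```

Because every transition is an explicit function clause, a review pass (human or LLM) can see the whole state space at a glance.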
I have my ethical concerns, but it would be foolish of me to state that it works poorly - if anything it makes me question my own abilities and focus in comparison (which is a whole different topic).
Not my experience at all. The most important factors are simplicity and clarity. If an LLM can find the pattern, it can replicate that pattern.
Language matters to the extent it encourages/forces clear patterns. A language having more examples, shorter tokens, popularity, etc. doesn't matter at all if the codebase is a mess.
Functional languages like Elixir make it very easy to build highly structured applications. Each fn takes in a thing and returns another. Side effects? What side effects? LLMs can follow this function composition pattern all day long. There's less complexity, objectively.
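As a made-up illustration of that composition pattern (the Checkout module and its step names are invented): every step takes a map and returns a new one, so the pattern is visible in the code itself:

```elixir
# Hypothetical pipeline: pure data in, new data out at every step.
defmodule Checkout do
  def run(order) do
    order
    |> put_subtotal()
    |> put_tax()
    |> put_total()
  end

  defp put_subtotal(order) do
    subtotal = order.items |> Enum.map(& &1.price) |> Enum.sum()
    Map.put(order, :subtotal, subtotal)
  end

  defp put_tax(order), do: Map.put(order, :tax, order.subtotal * 0.2)

  defp put_total(order), do: Map.put(order, :total, order.subtotal + order.tax)
end

# Checkout.run(%{items: [%{price: 10}, %{price: 5}]})
# #=> %{items: [...], subtotal: 15, tax: 3.0, total: 18.0}
```

An LLM extending this only has to add another `|> step()` that honors the same map-in, map-out contract.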
But take languages that are less disciplined. Throw in arbitrary side effects and hidden control flow and mutable state ... the LLM will fail to find an obviously correct pattern and guess wildly. In practice, this makes logical bugs much more likely. Millions of examples don't help if your codebase is a swamp. And languages without said discipline often end up in a swamp.
No. I would argue that popularity per se is irrelevant: if there are a billion examples of crap code, the LLMs learn crap code. Conversely, as few as 250 documents can poison an LLM regardless of model size (see Anthropic's "A small number of samples can poison LLMs of any size").
The most important thing is conserving context. Succinctness is not really what you want, because most context is burned on thinking and tool calls (I think), not codegen.
Here is what I think is not important: strong typing. It requires a tool call anyway to fetch the type.
Here is what I think is important:
- fewer footguns
- great testing (and great testing examples; see the sketch below)
- strong language conventions (local indicators for types, argument order conventions, etc.)
- no weird shit like __init__.py that could do literally anything invisible to the standard code flow
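On the testing point, here's a tiny invented ExUnit example of what a guardrail buys you: a deterministic pass/fail signal the model can re-run after every edit until the code actually works (the SlugTest module and slugify helper are hypothetical):

```elixir
# Invented example: a deterministic check an agent can re-run in a loop.
defmodule SlugTest do
  use ExUnit.Case

  # Collapse anything that isn't a lowercase letter or digit into a dash.
  defp slugify(title) do
    title
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9]+/, "-")
    |> String.trim("-")
  end

  test "slugify collapses punctuation and spaces into single dashes" do
    assert slugify("Hello, World!") == "hello-world"
  end
end
```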