> why, or, what is meant by "More errors caught at compile time means an agent can quickly check their work statically without unit and other tests."
Thus this desperate "AI native" marketing is probably necessary to even be considered relevant in an "agentic" world. Whether it's enough, only time will tell.
So, agents tend to do better the more feedback they can get. Type checking is pretty good at catching a bunch of dumb mistakes automatically. The point is that more hints for the agent is better, most of the time.
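As a concrete sketch (function names are made up for illustration), here is the kind of dumb mistake a checker like mypy or pyright flags without running anything, while plain Python would only complain, if at all, at runtime:

```python
def total_ms(seconds: float) -> int:
    # A type checker flags this statically: float * int is a float,
    # so the annotated return type `int` is wrong.
    return seconds * 1000

def greet(name: str) -> str:
    return "hello, " + name

# Plain Python runs this fine and silently returns a float,
# despite the `-> int` annotation:
print(total_ms(1.5))   # 1500.0

# And a bad call site like `greet(42)` is a TypeError at runtime,
# but a type error at check time, before anything executes.
print(greet("agent"))  # hello, agent
```

That's the extra feedback channel: the agent gets both errors from one fast check instead of discovering them one crash at a time.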
Python+ruff+pycheck and TypeScript are compiled to bytecode (or transpiled to JavaScript) rather than machine code. They're not statically typed in the Rust sense. And yet, I've watched models crank out good, valid code in both without the language needing to be strictly "compiled" or "statically typed". Turns out AI couldn't care less about those properties as long as you have good tooling to quickly check the code and iterate.
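A minimal sketch of that "quickly check, then iterate" loop, using only Python's built-in `compile()` (the function name `static_syntax_check` is hypothetical): the point is that a cheap static pass gives the agent an actionable message without executing anything.

```python
from typing import Optional

def static_syntax_check(source: str) -> Optional[str]:
    """Return an error message if `source` fails to parse, else None.

    compile() with mode="exec" parses and byte-compiles the code but
    does not run it, so this feedback is fast and side-effect-free.
    """
    try:
        compile(source, "<agent-edit>", "exec")
        return None
    except SyntaxError as exc:
        return f"line {exc.lineno}: {exc.msg}"

# Good code parses cleanly; broken code yields a message the agent
# can feed straight into its next attempt.
assert static_syntax_check("x = 1 + 2") is None
assert static_syntax_check("x = 1 +") is not None
```

Real tooling (ruff, a type checker, a test runner) layers richer checks on the same pattern: fast, mechanical feedback between edits.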
yes, except it's more of the same ... along the same lines, just to hammer the point home:
it's web 2, it's SaaS, it's the latest weekly, er, sorry, daily, hottest JS framework, it's the latest rap / punk / hippie / dreadlock / crewcut / swami / grunge / guru hairstyle, it's agile, it's functional programming, it's OOP, it's OOAD, it's UML, it's the Unix philosophy, it's Booch notation, it's CASE tools, ... going back even further, it's structured programming, it's high-level languages, it's assemblers, it's veganism, it's the keto diet, it's the Atkins diet, it's the paleo diet, it's cholesterol is bad, no, it's good, etc etc etc.
> only to jump on the next bandwagon next week, month
good for marketing as well; there is no shortage of juniors who are mesmerized by the new shiny.

Regarding compilation and static typing, it's extremely helpful to be able to detect issues at compile time when doing agentic programming. That way, you don't run into as many problems at runtime, which the agent has more difficulty addressing. Unit tests can bridge the gap somewhat, but not entirely.
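To illustrate how unit tests only partially bridge that gap (the function `parse_port` is a made-up example): a test suite that covers the happy path lets a bug through that a type checker would flag before anything runs.

```python
def parse_port(value: str) -> int:
    """Parse a port number from a string."""
    if value.isdigit():
        return int(value)
    # Bug: the annotation promises int, but this branch returns None.
    # mypy/pyright flag this at check time; a happy-path unit test
    # never exercises it, so the bug survives until runtime.
    return None

# Typical unit test: covers only the happy path, passes, ships the bug.
assert parse_port("8080") == 8080

# The error path returns None; any caller doing arithmetic on the
# result blows up at runtime, which is exactly where an agent struggles.
assert parse_port("abc") is None
```

Static checking catches the whole class of "this branch violates the contract" mistakes, including the paths nobody thought to write a test for.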
What's not stated on their website is that Mojo is likely a bad choice for agentic programming simply because there isn't much Mojo training data yet.
But yeah, getting models to write Mojo 1.0 code, even with error feedback, might take a new training round, so next or even next-next models.