An LLM is not coupled to anything and can generate output that simply does not relate to the input. That doesn't happen with compilers, and when it does, it's a specific bug to be found and fixed. An LLM can never guarantee a particular output for a given input.
If I write x < 100, I know exactly how the compiler will treat that code every single time, and I know what < means and how it differs from <=.
If I tell an LLM "I want numbers up to 100," will it give me < or <=, and will it be consistent every single time, even in the ten-thousandth program I write?
Natural language is ambiguous where the code is specific.
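To make that concrete, here's a minimal Java sketch (the class name and the printing loops are mine, purely illustrative): the same English phrase maps to two different programs depending on how "up to" is read.

    // "I want numbers up to 100" — which of these did you mean?
    public class UpTo100 {
        public static void main(String[] args) {
            // Exclusive reading: < stops at 99.
            for (int i = 0; i < 100; i++) {
                System.out.println(i);
            }
            // Inclusive reading: <= includes 100.
            for (int i = 0; i <= 100; i++) {
                System.out.println(i);
            }
        }
    }

A compiler will pick the same branch every time because I wrote the operator; an LLM has to guess which reading I meant.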
I have a co-worker on another team who writes Java endpoints we consume. I can tell him what I need and I trust the output. I don't need to know Java to trust him, and that doesn't mean I don't want to learn it.
There are a thousand examples like this across every stack and abstraction level, from SSH handshakes to GPS.
Sure, my co-worker is fundamentally different from a compiler, which is fundamentally different from an LLM.
My argument is that the chain of trust, where you offload knowledge to an external source, is identical. We do it all the time, but somehow doing it with an LLM means we no longer want to learn?
Multibillion-dollar companies are now the gateway for every line of code you need to write. That's dystopian. It sucks.