upvote
Does it have to be? The etymology of the word "abstraction" is "to draw away". I think it's relevant to consider just how far away you want to go.

If I'm purely focused on the general outcome as written in a requirement or specification document, I'd consider everything below that as "abstracted away".

For example, this weekend I built my own MCP server for some services I'm hosting on my personal server (*arr, Jellyfin, …) to be integrated with claude.ai. I wrote down all the things I want it to do and the environment it has to work in, then let Claude go.

Not once have I looked at the code. And quite frankly, I don't care. As long as it fulfills my general requirements, it can write Python one time and TypeScript another, should I choose to regenerate from that document. It might behave slightly differently, but that is OK to a degree.

From my perspective, that is an abstraction. Deterministic? No, but it also doesn't have to be.

reply
The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code – assuming the AI agent is capable of producing human-quality code or better?

I agree it's not a layer of abstraction in the traditional sense, though. AI isn't an abstraction of existing code; it's a new way to produce code. It's an "abstraction layer" in the same way an IDE is an abstraction layer.

reply
> The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code

Actually yes, because humans can be held accountable for the code they produce

Holding humans accountable for code that LLMs produce would be entirely unreasonable

And no, shifting the full burden of responsibility to the human reviewing the LLM output is not reasonable either

Edit: I'm of the opinion that businesses are going to start trying to use LLMs as accountability sinks. It's no different than the driver who blames Google Maps when they drive into a river following its directions. Humans love to blame their tools.

reply
> Holding humans accountable for code that LLMs produce would be entirely unreasonable

Why? LLMs have no will nor agency of their own; they can only generate code when triggered. This means that either nature triggered them, or people did. So there isn't a need to shift burdens around: it's already on the user, or, depending on the case, on whoever forced that user to use LLMs.

reply
Human coders and IDEs are not purported to be abstraction layers.
reply
It can loop and probabilistically converge on a set of standards, verified against a standard set of eval inputs
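A minimal sketch of that loop, with a stubbed generator standing in for the LLM call (the function names and the toy spec are my own, not from any real tool):

```python
import random

# Hypothetical stand-in for an LLM call; a real loop would re-prompt the
# model, feeding back the failing eval cases. Here, different "seeds"
# simulate non-deterministic generations, only some of which meet the spec.
def generate_candidate(seed):
    random.seed(seed)
    if random.random() < 0.5:
        return lambda x: x * 2   # a generation that satisfies the evals
    return lambda x: x + 2       # a plausible but wrong generation

# The "standard set of eval inputs" acting as the acceptance bar.
EVALS = [(0, 0), (3, 6), (10, 20)]

def converge(max_attempts=20):
    """Regenerate until every eval passes, or give up."""
    for attempt in range(max_attempts):
        candidate = generate_candidate(attempt)
        if all(candidate(x) == want for x, want in EVALS):
            return candidate, attempt
    raise RuntimeError("no candidate converged within the budget")

fn, attempts = converge()
print(all(fn(x) == want for x, want in EVALS))  # True
```

The point is that the individual generations stay non-deterministic, while the eval suite is what makes the overall outcome converge to something you can rely on.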
reply