I can get LLMs to write most of the CSS I need by treating them like a slot machine: I pull the handle until it spits out what I want. This doesn't teach me any CSS at all.
That frees my attention for the learning endeavors that matter: things I actually want to learn, not things I'm forced to learn simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context; they behave like an intelligent search function.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with them. That is what programming is: modeling. There's a domain you're operating within, and programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics: "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. Some intellectual activities are more worthy and better elevate the mind; others absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
> It's annoying when a distracting and unessential detail derails this conversation
There are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That’s not a good comparison. Type checking reduces the cognitive load of verifying correctness, but it increases it when you’re not yet sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern-match your way to a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tedium, but any creative usage is problematic, as it has no restraints.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
Quality is a different issue, sure.
I don’t even bother. Most of my use cases are ones where I’m sure I’ve done the same type of work before (tests, CRUD queries, …). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while you start to have good abstractions, and you spend more of your time thinking than coding (most solutions are a few lines of code).