Agreed on your take on the parent, although I have to say AI has had the opposite effect for me: it has significantly accelerated my learning. Not only is learning more effective and efficient, I also have more time for it because I am not spending nearly as much time tracking down stupid issues.
reply
Big difference between gaining knowledge and building/maintaining cognitive skills.
reply
I assure you, working with LLMs is intellectually challenging, and becomes more so as the technology matures.
reply
It really really really depends on how you are using it and what you are using it for.

I can get LLMs to write most of the CSS I need by treating them like a slot machine and pulling the handle till they spit out what I need; this doesn't teach me any CSS at all.

reply
I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.
reply
I find it a lot more useful for diving into bugs involving multiple layers and versions of 3rd-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so in essence I wouldn't have learned anything diving deep into the issue), but it was able to do so much more efficiently than me cross-referencing code across multiple commits on GitHub, docs, etc.

This allows me to focus my attention on important learning endeavors: things I actually want to learn, not things I'm forced into simply because a vendor was sloppy and introduced a bug in v3.4.1.3.

LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.

reply
Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. It is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.

The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.

Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, only of degree. No one argues that we should throw away type checking because it spares you the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.

Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.

reply
I agree with your definition of programming (and I’ve been saying the same thing here), but

> It's annoying when a distracting and unessential detail derails this conversation

there are no such details.

The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).

> No one argues that we should throw away type checking,…

That’s not a good comparison. Type checking reduces cognitive load when verifying correctness, but it increases it when you’re not yet sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.

As long as you can pattern-match your way to a solution, an LLM can help you, but that requires having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic because it has no restraints.

reply
Yes but that’s why you ask it to teach you what it just did. And then you fact-check with external resources on the side. That’s how learning works.
reply
> Yes but that’s why you ask it to teach you what it just did.

Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.

reply
But were you trying to learn CSS in the first place?
reply
This exactly. My CSS designs have noticeably gotten better without me, the writer, getting any better at all.
reply
This isn’t necessarily a bad thing. I know a little css and have zero desire or motivation to know more; the things I’d like done that need css just wouldn’t have been done without LLMs.
reply
s/intellectually/emotionally/
reply
I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.

It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.

The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.

reply
My experience is mostly the opposite. Provided the right context and prompt, CC will generally produce code, even in domains I know, 10-20x faster.

Quality is a different issue, sure.

reply
> I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.

I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, crud query,…). I describe the structure of the code and let it replicate the pattern.

For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).

reply
It is better than doomscrolling on Instagram for hours like the newer generations. At least the brain stays active, generating ideas or reading text nonstop.
reply
Sounds like you're talking about research AI and not generative AI. You can't learn artistic/creative techniques when you're not practicing those techniques. You can have a vision, but the AI will execute that vision, and you only get the end result without learning the techniques used to execute it.
reply
That's a really useful distinction to have explicitly articulated. It's also why plan mode feels like a superpower. Research and generative AI are different; I'm going to use this.
reply
I guess I was more referring to just using generative AI when learning new subjects and exploring new ideas. It's a really efficient tutor and/or sidekick who can either explain topics in more depth, find better sources, or help me explore new theories. I was thinking beyond just generating code, which is incredibly useful but only mildly interesting.
reply
Well, the research is sometimes 10x quicker with AI assistant. But not always. Building phase is maybe 20-100% quicker for me at least, depending on the complexity of the project. Green field without 15 years of legacy that is never allowed to break is many times faster, always has been.
reply
Okay, this is a pet peeve of mine, so forgive me if I come off a little curt here, but-- I disagree strongly with how this was phrased.

"Generative AI" isn't just an adjective applied to a noun; it's a specific marketing term used as the collective category for language models and image/video models -- things which "generate" content.

What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".

But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"

I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)

reply
> I have more time for it because I am not spending nearly as much time tracking down stupid issues.

It is a truism that the majority of a software dev's effort and time is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that and, if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, rather than absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutiae that you'll forget the context you were operating in. You'll forget what the point of it all was.

Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.

reply
Just wanted to say you put it really well, that's exactly how I feel.
reply
Are you sure that is not the illusion of learning? If you don't know the domains, how can you know how much you now know? Especially considering that these models are all Dunning-Kruger-inducing machines.
reply
Agree on that too. And I use these as tools. I don't think I'm missing out on anything if I use this drill press to put a hole through an inch of steel instead of trying to spend a day doing it wobbly with a hand-drill.
reply
No. It says much more than that, because it applies to many other tools that aren't AI.
reply
Well, you know what they say about our current attention spans. If it's not a slogan it's already too long!
reply
"Verbose" is the wrong adjective. Yours is a terse projection into a lower space, valid in itself, but lacking the power and precision of its archetype.
reply
What if you’re a musician and use design as part of your marketing? Why should a musician deep dive design when they really only care about music?
reply
The argument is not that only designers can design, nor that everyone should design like a designer. It’s to not confuse shopping for or generating generic solutions with the activity of problem solving. Per Alexander, trivial problems, those that can be solved without balancing interactions between conflicting requirements, are not design problems. So, don’t worry and just pick what you need and like!
reply
Presumably you care about the quality of your marketing. Otherwise why do it at all. Worst case scenario, your marketing turns people off to your music, who would have otherwise been listeners.

Actually, there are some interesting problems here, because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.

So you should be really interested in how to capture the “vibe” of your music in a visual medium.

But if you don’t care at all whether ppl actually listen to your music, then yeah you don’t have to deep dive.

reply
"Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound."

The term you are looking for is 'aesthetic'.

And indeed.. music is far more than just a sound or whatever simple thing one tries to boil it down to.

I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.

reply
Taking your argument to the extreme: just use clipart & templates, skip the AI tooling, and move on.
reply
> using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience

Or it lets folks focus. My coding skills have gotten damn rough over the years, but I still like the math. Using AI to build visualizations while I work on the model math with paper and pen is the best of both worlds. I can rapidly model out something I’m working on, algebraically and analytically.

Does that mean my R skills are deteriorating? Absolutely. But I think that’s fine. My total skillset’s power is increasing.

reply
I think the beauty of the human experience is that all you need to do to learn is practice. You automatically improve at what you're doing. The kinds of skills that atrophy when you use AI are skills that AI can already automate, and nobody is going to pay you to do slowly what a machine can do quickly and cheaply.

When you deploy AI to build something, you wind up doing the work that the AI itself can't do: holding large amounts of context, maintaining a vision, writing APIs and defining interfaces. Alongside, like, project management: how much time is spent on features vs. refactoring vs. testing.

reply
Was thinking similarly... Without the friction, you're unable to explore the space, the space doesn't even exist at all... So it's not even clear where you're going from or where you'll arrive at.
reply
I think the larger part implied is the design will be crappy, because the problem was unexplored
reply
Your paragraph is the parent's point in action.

If only all great works could just be an X post!

reply
And, anyone who reads your comment will be deprived of the experience of learning why your comment makes so much sense.
reply
Not really. It’s saying that most people in tech have no fucking idea what designers do, but somehow feel qualified to evaluate their output, and think tools that make things that look nice are designing things. What you reference is one effect of what the comment is about. Another effect is developers, combining this with engineer’s disease, being incredibly irritating to work with because they constantly make reductive comments that completely miss the point while other developers nod and say “yeah that sounds right.” I was a developer for ten years— I’ve seen this from both sides.
reply