I can get LLMs to write most of the CSS I need by treating them like a slot machine and pulling the handle until they spit out what I want. This doesn't teach me any CSS at all.
This lets me focus my attention on important learning endeavors: things I actually want to learn, not things I'm forced to learn simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
> It's annoying when a distracting and unessential detail derails this conversation
There are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That's not a good comparison. Type checking helps with the cognitive load of verifying correctness, but it increases it when you're not yet sure of the final shape of the solution. It's a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before so you can describe it. It can remove tedium, but any creative usage is problematic, as it has no restraints.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
Quality is a different issue, sure.
I don't even bother. Most of my use cases are ones where I'm sure I've done the same type of work before (tests, CRUD queries, …). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).
"Generative AI" isn't just an adjective applied to a noun; it's a specific marketing term used as the collective category for language models and image/video models -- things which "generate" content.
What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".
But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"
I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)
It is a truism that the majority of a software dev's effort and time is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that, and if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, rather than absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutiae that you'll forget the context you were operating in. You'll forget what the point of it all was.
Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.
Or it lets folks focus. My coding skills have gotten damn rough over the years. But I still like the math. Using AI to build visualizations while I work on the model math with paper and pen is the best of both worlds. I can rapidly model out something I'm working on algebraically and analytically.
Does that mean my R skills are deteriorating? Absolutely. But I think that’s fine. My total skillset’s power is increasing.
Actually, there are some interesting problems here, because a huge part of music marketing happens in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.
So you should be really interested in how to capture the “vibe” of your music in a visual medium.
But if you don’t care at all whether ppl actually listen to your music, then yeah you don’t have to deep dive.
The term you are looking for is 'aesthetic'.
And indeed.. music is far more than just a sound or whatever simple thing one tries to boil it down to.
I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do: holding large amounts of context, maintaining a vision, writing APIs and defining interfaces. Alongside, like, project management: how much time is spent on features vs. refactoring vs. testing.
If only all great works could just be an X post!
What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcomes and outputs need to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things with no exacting standards, AI works well. For systems with exacting standards along one or more of those aspects, the process of design allows for the needed control and accuracy, as the person or people doing the work are in a constant feedback loop and can dial in to what's needed. If you give up control of the inside of that loop, you lose the fine-grained control required for even knowing how far away you are from the theoretical maximums for those aspects.
Thank you for so succinctly demonstrating the problem with using AI for everything. You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you. Now you quickly and cheaply fill in the parts you don't personally care about with sawdust, and as this becomes normalized you deprive yourself and others from discovering that they care about the design part. You'll ship your thing now, and it'll be fine. The damage is delayed and externalized.
I won't advocate against use of new technology to make yourself more productive, but it's important to at least understand what you're losing.
Or worse, you gave up because you did not have the time to learn the skill or the money to hire somebody. In this case, your dream just died.
If Grok didn't create the fake nudes users were dreaming about but couldn't create with Photoshop,
would my headstone crumble down?
As "intel" dashboards stay a dream,
the Hollywood wind's a howl
As photos are just still
The Kremlin's falling
As Einstein is not wrong
Radio 4 is static
You think most UI/UX designers, or the artists creating slop for content marketing spam factories for the past decades, cared? Some, maybe. Most probably had higher ambitions, but are doing what actually pays their bills.
It's similar to software developers. Most of those being paid to code couldn't care less, they're in there for the fat paycheck; everyone else mostly complains the work is boring or dumb (or worse), but once you have those skills, it makes no economic sense to switch careers (unless, of course, you're into management, or into playing the entrepreneurship roulette).
The paychecks weren’t great. Everyone was offering to pay designers with “exposure”. If they didn’t innately care about the field they would have done something more lucrative.
The parent's point is that it doesn't work that way. The point is self-reinforcing: design is not a thing, it's the earned scars from the process. Fine to disagree, but it reinforces the point.
Like, maybe I just want to make an interface to configure my homemade espresso doohickey. Do I have to wear a turtleneck and read Christopher Alexander now? I just wanted a couple of buttons and some sliders.
We don't all have to be experts in everything, some people just need a means to an end, and that's ok. I won't like the wave of slop that's coming, but the antidote certainly isn't this.
It's true that design theory writing is annoyingly verbose and intangible, but that doesn't make it wrong. Give someone a concrete language spec and they will not really know how it feels to use the language, and even once they do experience its use they will not be able to explain that feeling using the language spec. Invariably the language will tend to become intangible and likely very verbose.
But to answer your question: no, it's of course perfectly serviceable to just copy the interface others have created, and if the needs aren't exactly the same you can just put up with the inevitable discomfort from where the original doesn't translate into the copy.
I'm an engineer who also loves design. I've read a lot of the books (including the one referenced), I know some concepts and terminology, and I understand the general process — but I'll never be a professional designer. My knowledge is limited, and I find most design tools so complex they actually get in the way of problem exploration and creativity.
For people like me, this tool removes the friction that actually prevents me from being more focused on the valuable parts of the design process. I can more easily discover and learn new concepts, and ultimately spend more time being creative and exploring the problem space.
A whiteboard or a wireframing software would be better, because it lets you focus first on the interactive part. And once that’s solved, the visual part is easier.
This speed and variation wins for me. But yes, without a designer's eye, it can get lost in slop design too.
To me, the value of GenAI is as an accelerant (not a slop factory) for ideation and solutions, not a replacement for the human owning the process... but laziness usually wins.
when people wax philosophical/poetical about what is essentially capital production already i'm always so perplexed - do you not realize that you're not doing art/you're not an artisan? your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
> They’ll be impoverished and confuse output with agency
ironic.
The first time I used Mac OS X, circa 2004-2005, I was blown away by the design and how they managed to expose the power of the underlying Unix-ish kernel without making it hurt for people who didn't want that experience. My SO couldn't have cared less about Terminal.app, but loved the UI. I also loved the UI and appreciated how they took the time to integrate CLI tools with it.
I would say it was a marvelous human experience _for me_.
Sure, it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something I'm sure those teams were very proud of. The same was true of the iPod and the iPhone.
I work on niche products, so I've never done something as widely appreciated as those examples, but on the products I've worked on, I can easily say that I really enjoy making things that other people want to use, even if it's just an internal tool. I also enjoy getting paid for my labor. I've found that this is often a win-win situation.
Work doesn't have to be exploitive. Products don't have to exploit their users.
Viewing everything through the lens of production and consumption is like viewing the whole world as a big constraint optimization problem: (1) you end up torturing the meaning of words to fit your preconceived ideas, and (2) by doing so you miss hearing what other people are saying.
...
> Work doesn't have to be exploitive. Products don't have to exploit their users.
bruh do people have any idea what they're writing as they write it? you're talking about "work doesn't have to be [exploitative]" in the same breath as Apple, which is the third largest market cap company in the world and which is well known for exploiting child labor to produce its products. like, has this comment "jumped the shark"?
> Viewing everything through the lens of production and consumption
i don't view everything through any lens - i view work through the lens of work (and therefore production/consumption). i very clearly delineated between this lens and at least one other lens (art).
Ultimately the exploitative pyramid always terminates in a peak, and the guys working up there can for sure be having a hecking great time doing their jobs.
just repeating the same mistake as op: sadness/happiness is completely outside the scope here. these are aspects of a job - "design" explicitly relates to products, not art. and wondering about the sadness/happiness of a job is like wondering about the marketability of a piece of art - it's completely beside the point!
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally friendly
10. Good design is as little design as possible
Generative AI just tries to predict based on its training data.
a product can be a piece of art, and design can and does in practice often go hand in hand with art. practically, most designers also practice the artistic role in addition to the utilitarian one. whether you would want to group art within design is a matter of definitions
of course, but that's well within the scope of the whole paradigm (as opposed to how it was originally phrased, in relation to a loss of "marvelous human experiences"): if i use a bad tool to solve my customers' problems in an unsatisfactory way, then my customers will no longer be my customers (assuming the all-knowing guiding hand of the free market). so there's no new observation whatsoever in OP.