I don't agree. For novel use cases, yes there's some truth to that. But consistency is huge in a UX. If basic controls work well for a situation, they should be used. Designers should not be getting "creative" or "original" for those sorts of things.
If I have a product out of my lab that makes it to human trials, there will be a full team of marketers and designers tasked to the brand image.
Ironically I think AI will replace researchers before it replaces artists.
Also, a lot of very good software developers are bad at design and unwilling or unable to pay for a designer. This will be an improvement for them.
But the mass market (who this is ultimately for) doesn't care about great design. They care about "seeing something on the screen." If they can get something that looks 80-90% aligned with what they observe to be modern design, they won't think twice (even if the end result is clunky or not on par with what a professional designer would produce). It's the IKEA Effect on steroids.
If you treat it like a black box used to outsource your own thinking, you are holding it wrong.
I think we probably need less software, but higher quality, not more. Unfortunately AI only goes in one direction…
Is there also a place in the world for not-great-but-good-enough design?
Not really; great design in a web application means no surprises.
Still human?
This is entirely accurate; however, I fear there's a lack of perspective:
If you're in the middle of the desert and need to sit down, that random rock looks and feels great, because there's nothing even close around!
One issue that a lot of experts fail to recognize is that "great" is relative. It's not apparent to the experts because they are only pulled in when their expertise is needed. Most of the time when experts are pulled in, requirements are clear, you have traction and scale, and now you need to optimize.
Once you're spoiled for choices, you have lots of options and then that random rock doesn't look appealing at all: now you're considering other factors like budget - IKEA vs Adirondack.
Where AI is making a huge difference is in places where "great" isn't that valuable:
- People in the desert: Someone wants to track what words their toddler is saying, or their groceries, or how much kitty litter they should buy soon, and Claude will spit out something reasonable, even if it makes experts' skin crawl.
- Commodity and bean counters: In cutthroat industries like power or insurance, it's all commodity services competing on price. Most people aren't going to pay a premium for a better-looking, more intuitive insurance app. It just needs to not suck or fall over. Or you're making a knockoff of an existing, well-understood product.
The catch is that the person making the decision might not know or care about the difference.