I've been watching non-developers vibe code stuff, and the general failure mode seems to be ignorance of pick-two-of-three tradeoffs (fast, cheap, good).
They'll spam "make it more reliable" or something similar, and the AI will make a best-effort attempt, bolting on more intermediary Redis caches or similar patterns.
But because the vibe coders don't actually know what a Redis cache is or how it works, they'll never make the architectural trade-offs needed to truly fix things.
I often wonder how much of this is the statistical nature of the LLM colliding with an underspecified request in the prompt.