It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.
That is what Show HN has become. Nobody cares what code Claude shat out in response to a random person's prompt. If I cared, I would be prompting Claude myself.
Let's see, how to say this in a less inflammatory way...
(Just did this.) I'm sitting in a hotel, wondering if I could do some fancy video processing on my laptop's camera feed to turn it into a wildlife cam and capture the birds that keep flying by.
I ask Codex to whip something up. I iterate a few times; I ask why the processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.
In a short period of time, I have an app that processes the video, does all of the detection, applies the correct models, and works.
It's impressive _to me_, but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy Python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.
I get to slap this all together.
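To be concrete about how little of it was mine: stripped to a skeleton, it's roughly this (rewritten from memory; the model files, class ID, and threshold are placeholders, not the actual Codex output):

```python
# From-memory skeleton of the wildlife cam, not the real Codex output.
# Model files, the class ID, and the threshold are placeholders.
import cv2

# Someone else's pretrained detector (MobileNet-SSD, Caffe format).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD.prototxt",
                               "MobileNetSSD.caffemodel")
# The "add GPU support while you're at it" step: two lines, because
# someone else already wrote the CUDA backend.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

BIRD = 3  # "bird" in the VOC label set this model was trained on

cap = cv2.VideoCapture(0)  # someone else's capture library
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # SSD output: shape (1, 1, N, 7)
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        class_id = int(detections[0, 0, i, 1])
        if class_id == BIRD and confidence > 0.5:
            cv2.imwrite(f"bird_{cv2.getTickCount()}.jpg", frame)
cap.release()
```

Every line that does anything interesting is a call into someone else's work.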
In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.
In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"
And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.
I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]
I knew I could do better, so I made a version that is about 15 kB and solves a fundamental issue with WebGL context limits while being significantly faster.
AI helped write a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still aren't more common (the special bundler settings for workers break often).
Btw, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.
In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.
But vibe coders never do this. They basically just take an existing mod's source code, feed it into their LLM of choice, and generate a derivative work. They don't contribute anything back, because they don't even try to understand what they are doing.
Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop that will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.
AI agent coding has introduced to writing software the same sort of dynamic that brands brought to social media.
In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.
Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.
There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.
It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.
Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.
I was so shocked and disappointed that I'd paid good money for AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.
Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.
Wait, what? That's a great benefit?
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So as execution becomes easier, it's the ideas that matter more…
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one that wasn't might actually recover your backups when it fails. The other 9, whoops, never tested that codepath. Customers won't know until the edge cases happen.
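Concretely, the difference might be nothing more than one boring test the other nine never wrote. A hypothetical sketch (names and helpers made up):

```python
# Hypothetical round-trip test for the restore path: back data up,
# destroy it, restore it, and verify. backup()/restore() are stand-ins.
import pathlib
import shutil
import tempfile

def backup(src: pathlib.Path, archive_base: pathlib.Path) -> str:
    # Returns the path of the created archive, e.g. ".../backup.tar.gz".
    return shutil.make_archive(str(archive_base), "gztar", root_dir=src)

def restore(archive: str, dest: pathlib.Path) -> None:
    dest.mkdir(exist_ok=True)
    shutil.unpack_archive(archive, extract_dir=dest)

def test_backup_round_trip() -> None:
    with tempfile.TemporaryDirectory() as tmpdir:
        tmp = pathlib.Path(tmpdir)
        src, dest = tmp / "src", tmp / "dest"
        src.mkdir()
        (src / "data.txt").write_text("customer data")

        archive = backup(src, tmp / "backup")
        shutil.rmtree(src)        # simulate the failure...
        restore(archive, dest)    # ...and the recovery

        assert (dest / "data.txt").read_text() == "customer data"

if __name__ == "__main__":
    test_backup_round_trip()
    print("restore path actually works")
```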
It's the app store effect but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.