The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don't have anything interesting to say about programming.
One of the great drawbacks of AI tools is that they allow anyone to build stuff... even if they have no ideas or knowledge.
It used to be that Show HN was a filter: in order to show stuff, you had to have done the work. And if you did the work, you probably thought about the problem; at the very least, the problem was real enough to make solving it worthwhile.
Now there's no such filter function, so projects are built whether or not they're good ideas, by people who don't know very much.
It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.
Let's see, how to say this in a less inflammatory way...
(Just did this.) I'm sitting here in a hotel and wondered if I could do some fancy video processing on the video feed from my laptop to turn it into a wildlife cam to capture the birds that keep flying by.
I ask Codex to whip something up. I iterate a few times; I ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.
In a short period of time, I have an app that is processing video, doing all of the detection, applying the correct models, and it works.
It's impressive _to me_ but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy Python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.
I get to slap this all together.
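Roughly the level of glue involved, for the curious (not the real code, just a minimal sketch assuming OpenCV and plain frame differencing instead of the DNN/GPU path Codex took):

    # Laptop "wildlife cam" sketch: grab frames, diff against the previous one,
    # and save any frame with enough motion. Assumes OpenCV (pip install opencv-python).
    import time
    import cv2

    cap = cv2.VideoCapture(0)       # default laptop camera
    prev = None
    MOTION_PIXELS = 5000            # tune: how many changed pixels count as "a bird"

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if prev is not None:
            diff = cv2.absdiff(prev, gray)
            changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
            if changed > MOTION_PIXELS:
                # something flew by (or the curtain moved)
                cv2.imwrite(f"capture_{int(time.time())}.jpg", frame)
        prev = gray

    cap.release()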
In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.
In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"
And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.
I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]
I knew I could do better, so I made a version that is about 15 kB and solves a fundamental issue with WebGL context limits while being significantly faster.
AI helped do a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still aren't more common (the special bundler setup for workers breaks often).
Btw, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.
In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.
But vibe coders never do this. They basically unilaterally just take existing mods' source code, feed this into their LLM of choice and generate a derivative work. They don't contribute back anything, because they don't even try to understand what they are doing.
Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.
AI agent coding has introduced to writing software the same sort of interaction that brands brought to social media.
In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.
Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.
There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.
It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.
Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.
I was so shocked and disappointed that I paid good money for some AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.
Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So when execution is becoming easier, it’s the ideas that matter more…
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one which wasn't might actually recover your backups when it fails. The other 9: whoops, never tested that codepath. Customers won't know until the edge cases happen.
It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.
Wait, what? That's a great benefit?
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."
— Tom Cargill, Bell Labs
Some day I’m going to get a crystal ball for statistics. Getting bored with a project was always a thing— after the first push, I don’t encounter like 80% of my coding side projects until I’m cleaning— but I’ll bet the abandonment rate for side projects has skyrocketed. I think a lot of what we’re seeing are projects that were easy enough to reach MVP before encountering the final 90% of coding time, which AI is a lot less useful for.
My experience is the opposite. It’s so much easier to have an LLM grind the last mile annoyances (e.g. installing and debugging compilation bullshit on a specific raspberry pi + unmaintained 3p library versions.)
I can focus on the parts I love, including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.
I don't think we need to wait a generation either. This probably was part of their personality already, but a group of developers at my job seems to have just given up on thinking hard / thinking through difficult problems; it's insane to witness.
I've seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven't seen an answer from leading figures in the AI coding space. What's the general answer or point of view on this?
printn(n, b) {
    extrn putchar;
    auto a;

    if (a = n/b)         /* assignment, not test for equality */
        printn(a, b);    /* recursive */
    putchar(n%b + '0');
}
You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.

Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
Join us. The war has begun.
Nice. I hope you are generating realistic commits and they truly cannot distinguish poison from food.
The cost of detecting/filtering the poison is many orders of magnitude higher than the cost of generating it.
Thing is, I worked on both of these manually a lot before I even touched Claude, so I was basically able to hit wishlist items that I don't have time to deal with these days but had already figured out the logic for.
> author (pilot?) hasn't generally thought too much about the problem space
I’ve stopped saying that “AI is just a tool” to justify/defend its use precisely because of this loss of thought you highlight. I now believe the appropriate analogy is “AI is delegation”.
So talking to a vibe coder who used AI is like talking to a high-level manager, rather than to the engineer, which is what you get with human-written code.
I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without AI. I'm never going to say "I did this with AI". For instance, there is this HR monitor demo
https://gen5.info/demo/biofeedback/
which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV), but most of all it needs to be able to run with pre-recorded data, so that people who don't have a BTLE HR monitor can see how cool it is.
Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as a foxographer
https://mastodon.social/@UP8/116086491667959840
I am used to marketing funnels having 5% effectiveness and it blows my mind that at least 75% of the tokens I give out get scanned and that is with the old conventional cards that have the same back side. The number + suit tokens are particularly good as a "self-working demo" because it is easy to talk about them, when somebody flags me down because they noticed my hood I can show them a few cards that are all different and let them choose one or say "Look, you got the 9 of Bees!"
It seems silly, but I know I'm more likely to review an implementation if I can learn more about the author's state of mind from their style.
As I may have noted before, humans are the problem.
I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.
As for knowing what I'm talking about: many of my blog posts are about stuff that I just learned, so I have many disclaimers that the reader should take everything with a grain of salt. :-) That said: I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.
Side note: I’d think installing Anubis over your work would go a long way to signaling that but ymmv.
Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.
These days I do see a lot of people choosing software for the money. Notably, many of them are bootcamp graduates and arguably made a pivot later in life, as opposed to other careers (such as medicine) which get chosen early. Nothing wrong with that (for many it has a good ROI), but I don’t think this changed anything about people with technical hobbies.
When you’re young, you tend not to choose the path the rest of your life will take based on income. What your parents want for you is a different matter…
It's taken me about month; currently at ~500 commits. I've been obsessed with this problem for ~6 weeks and have made an enormous amount of progress, but admittedly I'm not an expert in the domain.
Being intentionally vague, because I don't want to tip my hand until it's ready. The problem is related to an existing open source tool in a particular scientific niche which flatly does not work on an important modern platform. My project, an open source repo, brings this important legacy tool to this modern platform and also offers a highly engaging visual demo that is of general interest, even to a layperson not interested in programming or this particular scientific niche.
I genuinely believe I have something valuable to offer to this niche scientific community, but also as a general interest and curiosity to HN for the programming aspects (I put a lot of thought into the architecture) as well as the visual aspects (I put a lot of thought into the design and aesthetics).
Do you have any advice on how to present this work in a compelling way to people who understandably feel as burned out on AI slop as you do?
* some people want to show off a fun project/toy/product that they built because it's a business they're trying to start and they want to get marketing
* some people want to show off a fun project/toy/product that they built because it involves some cool tech under the hood and they want to talk shop
* some people want to show off a fun project/toy/product that they built because it's a fun thing and they just want some people to have fun
I'm not an anti-AI luddite, but for god's sake, talk about (i.e. submit) something else!
Having too many subs could get out of hand, but sometimes you end up with so much paperwork generated so fast that it needs its own whole dedicated drawer in your filing cabinet ;)
It's still early and easy to underestimate the number of visitors who would absolutely love to have the main page more covered in absolute pure vibe than it is recently.
I would like to hear opinions as to why the non-human touch is preferred, that could add something that not many are putting into words.
Hopefully it's not a case of the lights being on but nobody's home :(
You'll be inventing a lot of novel circular apparatus with a pivot and circumferential rubber absorbers for transportation, and it'll take people serious effort to convince you it's just a wheel.
I mean it's a real problem, but it's also a solved problem, and also not a problem that comes up a lot unless you're doing the sort of engineering where you're using a CAD tool already.
I don't doubt it's useful, and it seems pretty well crafted from what little I tried of it, but it doesn't really invite much discussion.
Agreed. r/ProgrammingLanguages had to deal with this recently in the same way HN has to; people were submitting these obviously vibecoded languages there that barely did anything, just a deluge of "make me a language that does X", where it doesn't actually do X or embody any of the properties that were prompted.
One thing that was pointed out was "More often than not the author also doesn't engage with the community at all, instead they just share their project across a wide range of subreddits." I think HN is another destination for those kinds of AI slop projects -- I'm sure you could find every banned language posted on that forum posted here.
Their solution was to write a new rule and just ban them outright. Things have been going much better since.
https://www.reddit.com/r/ProgrammingLanguages/comments/1pf9j...
Concur; perhaps a dedicated or alternative, itch.io-like area named "Slop HN: ..."
Raising the quality bar would likely cut down on quantity as a side effect, and that would be a nice solution. One idea that a user proposed is a review queue where experienced HN users would help new Show HN submitters craft their posts to be more interesting and fit HN's conventions more.
Also, require disclosure of the use of AI in repos, and especially when responding to HN feedback with comments (or perhaps specifically discourage its use there).
I'll take this opportunity to strongly encourage sharing prompts (the newest tier of software source code) as the logical progression of OSS adding additional value to Show HN.
And yes, disclosing the use of AI should be par for the course.
So while I understand that new features on HN are few and far between, a quick validation of "Show HN" posts that says, "I see you are trying to post a Show HN..." with some concise explanation of the guidelines might help. I want to believe that most new users mean well, they just need better explanations.
From their perspective, HN is another place to post and get views on their project, part of a checklist for their "launch" or whatever; not everything comes from within the ecosystem.
Some post their projects and then never reply to any of the comments, while for me (and many others, I bet) half the reason for posting a Show HN is that I'm looking to participate in discussions about my thing and understand different perspectives on it too.
> I want to believe that most new users mean well, they just need better explanations.
Yeah, so far the only thing I know of is the "Please read the Show HN rules and tips before posting" blurb on the /show list, and the separate pages. Maybe some interstitial or similar if the title prefix-matches with "Show HN" could display the rules, guidelines and "netiquette" more prominently and get more people to be aware of it.
For example, in one project, PRs have to be submitted to the "next" branch and not the default branch. This is written in the CONTRIBUTING.md file, which is linked in the PR template, with the mention that PRs that don't respect this will be closed. Most if not all submitters of low-quality PRs don't do anything once their initial PR is closed.
Pretty bummed about that, as I just submitted a Show HN I'm pretty happy about (it solves an annoying problem I had for years, which I know many people have) and I was looking forward to talking about it (https://news.ycombinator.com/item?id=47050872)
Most people did not read the post, which was immediately evident from how they posted their application by copy-pasting and editing an application posted by someone else before them.
Few things in life are as reliable and trustworthy as the laziness of others.
Show HN [NOAI]:
Since it's too controversial to ban LLM posts, and it would be too easy for submitters to omit an [LLM] label... having an opt-in [NOAI] label allows people to highlight their posts, and LLM posts would be easy to flag, to disincentivise polluting the label. This wouldn't necessarily need to be a technical change, just an intuitive agreement that posts containing LLM or vibe-coded content are not allowed to lie by using the tag, or will be flagged... Then again, it could also be used to elevate their rank above other Show HN content to give us humanoids some edge if deemed necessary, or a segregated [NOAI] page.
[edit]
The label might need more thought, although "NOAI" is short and intelligible, it might be seen as a bit ironic to have to add a tag containing "AI" into your title. [HUMAN]?
Feels like effort needs to be the barrier (which unfortunately needs human review), not "AI or not". In lieu of that, 100 karma or account minimum age to post something as Show HN might be a dumb way to do it (to give you enough time to have read other people's so you understand the vibe).
Also, it's not uncommon for weekend projects to be done in a short span with just a "first commit" message dump, even pre-AI.
So either we completely avoid automation and create a community council to decide what deserves to be shown to the rest of the community, or we just let the best AI models decide whether a project is worth showing up on the front page?
Or we can do all of the above :)
I suspect automating a "codebase over time" metric is tricky. Not everyone will be using git or a VCS, and some things don't need a codebase to be shared.
Once some users have extra power to push content to the front-page, it will be abused. There will be attempts to gain that privilege in order to monetize, profit from or abuse it in some other way.
The only option along this path would probably be to keep the list of such users very tightly controlled and each vouched for individually.
---
Another approach might be to ask random users (above a certain karma threshold) to rank new submissions. Once in a while, stick a Show HN post into their front page with up and down arrows, and mark it as a community service. Given HN's volume, it should be easy to get an average opinion in a matter of minutes.

Meaning you would have to demonstrate that you had contributed, or were willing to contribute, to the HN community before just promoting your own stuff.
So, in the past, I've created throwaway HN accounts for sharing things that connect to my real ID.
e.g. [20h/2d/$10] could indicate "I spent 20 human-hours over 2 days and burned $10 worth of tokens" (it's hard to put a single-dimensional number on LLM usage and not everyone keeps track, but dollars seem like a reasonable approximation)
I wonder how this review system would work. Perhaps a Show HN is hidden by default and visible only to experienced HN users, who provide enough positive reviews for it to become visible to everyone else. Although this does sound like gatekeeping to me, and it may starve many deserving Show HNs before they get enough attention.
- Min. 90 days account existence in order to submit
- Cap on plain/Show/Ask HN posts per week
Most of the spam I see in /new or /ask is from fresh accounts. This approach is simple and rewards long-term engagement/users while discouraging fly-by-night spammers.
Set a policy of X comments required per submission in the last 30 days (not counting last 24 hours) for all submissions, not just "Show HN:" posts.
Meaning, users would need to post X comments before they could post a submission and by not counting the last 24 hours, someone couldn't join, post X comments and immediately post a submission.
It would limit new submission posts to people who are active in the community so they would be more familiar with the policies and etiquette of HN along with gaining an idea of what interests its members.
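Roughly, the check would be something like this (just a sketch; the names and the value of X are made up):

    # A user may submit only if they posted at least X comments in the last 30 days,
    # excluding the most recent 24 hours, so someone can't join, dash off X comments,
    # and submit right away.
    from datetime import datetime, timedelta, timezone

    MIN_COMMENTS = 5  # the "X" in the proposal

    def may_submit(comment_times, now=None):
        now = now or datetime.now(timezone.utc)
        window_start = now - timedelta(days=30)
        window_end = now - timedelta(hours=24)
        recent = [t for t in comment_times if window_start <= t <= window_end]
        return len(recent) >= MIN_COMMENTS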
One thing I noticed recently while going through several of the Show HN submissions was that a lot of the accounts had been created the same day the submission was made.
My guess is HN has become featured on a large number of "Where do I promote/submit my _____?" lists in blogs, social media, etc. to the point that HN is treated like a public bulletin board more than a place to share things with each other in the community.
I love the Show HN section because so many interesting things get posted there but even I have cut back on checking it lately because there are simply too many things posted to check out.
I hope they do something to improve it.
The clarity and focus this discipline would enforce could have a pleasant side effect of enabling a kind of natural evolution of categorizations, and alternative discovery UIs.
HN has a vouch system. Make a Show HN pool, allow accounts over some karma/age level to vouch them out to the main site. I recently had a naive colleague submit a Show HN a week or so ago that Tom killed... for good reason. I told the guy to ask me for advice before submitting a FOSS project he released and instead he shit out a long LLM comment nobody wants to read.
The HN guidelines IMO need a (long overdue) update to describe where a Show HN submission needs to go and address LLM comments/submissions. I get that YC probably wants to let some of it be a playground since money is sloshing around for it, but enough is enough.
Hah, that sounds like a Show HN incubator.
More seriously though, I think some sort of curation is unavoidable with such topics. If you get inspired by stack overflow where you have some similar mechanics at work, then I'd say that is not too bad. But of course you risk some people being angry about why their amazing vibe coded app is not being shown. Although the more I think of it, this might be a good thing.
Edit: One more thought just came to my mind. A slight modification to the curation rule: you let everything through, just like now. However, the posts are reviewed, and those with enough positive review votes get marked in some shape or form, which allows them to be filtered and/or promoted on the show page.
It was not just a product launch for me. I was, sort-of in a crisis. I had just turned 40 and had dark thoughts about not being young, creative and energetic anymore. The outlook of competing with 20 year old sloptimists in the job market made me really anxious.
Upon seeing people enjoying my little game, even if it's just a few HNers, I found an "I still got it" feeling that pushed me to release on Steam, to good reviews.
It was never about the money, it was about recovering my self confidence. Thank you HN, I will return the favour and be the guy checking the new products you launch. If Show HN is drowning, i will drown with it.
Thank you for making it, and don't give up. Passion and vision > vibe coding sloptimists.
> sloptimists
That's a good one! Did you just come up with it? I've never seen it before.

And as we all read more AI content and talk to chatbots, that will influence how we do our own writing as well; humans will start to sound more like LLMs.
As per the old efficient market jokes: https://news.ycombinator.com/item?id=28029044
One of those comments was genuinely useful feedback from Argentina about localization. That alone made it worth posting. But the post was gone from page 1 in what felt like minutes.
What's interesting is this isn't a weekend vibe-coded project - it involves actual physical production, printing, and shipping. But from the outside it probably looks like "another AI wrapper," which I think is the core problem: the flood of low-effort AI projects has made people reflexively skeptical of anything that mentions generation, even when there's real infrastructure behind it.
- Children's books, at least the well-reviewed ones, are pretty good
- This is AI generated, so I expect the quality to be significantly lower than a children's book. Flipping through the examples, I am not convinced that this will be higher quality than a children's book.
- At 20 euros for a paperback, this is also more expensive than most children's books
- Your value prop, as I take it, is that your product is better because it is a book generated for just one child, but I am not convinced that's a solid value prop. I mean, it is kind of an interesting gimmick, but the book being fully AI generated is a large negative, and the book being uniquely created for my kid is a relatively smaller positive.
Those are definitely the highest-order bits you need to prove to me in order to get traction. A couple of smaller things you should fix as well:
- I'm an English speaker, and almost all the examples are not in English. You should take a reasonable guess at my language and then show me examples in that language
- It's difficult to get started: "Create your own book" leads to a signup page and I don't want to go through that friction when I am already skeptical
You're right that children's books can be excellent, and for generic topics a well-reviewed book from a skilled author and illustrator will beat what we generate. No argument there.
Where we see real value is in the gaps the publishing industry doesn't serve. Bilingual families who can't find books in Maltese/English or Estonian/German. A child with an insulin pump who wants to see a superhero like them. A kid processing their parents' divorce. A child with two dads, or being adopted, or starting at a new school in a country where they don't speak the language yet. No publisher will print a run of one for these families - but these are exactly the stories that matter most to them.
On the UX points - you're right on both. We should localize the showcase to your language, and the signup wall before trying is too much friction. Working on both.
Give people the ability to submit a “Show HN” one year in advance. Specifically, the user specifies the title and a short summary, then has to wait at least a year until they can write the remaining description and submit the post. The user can wait more than a year or not submit at all; the delay (and specifying the title/summary beforehand) is so that only projects that have been worked on for over a year are submittable.
Alternatively, this can be a special category of “Show HN” instead of replacing the main thing.
It's like books. Old but still relevant books are the best books to read.
This tech industry is changing so fast though. Maybe a year is too much?
I am wary of blogs by celebrity software managers such as DHH, Jeff Atwood, Joel Spolsky, and Paul Graham, because they talk as if there were something special about their experience in software development and marketing, except... there isn't.
The same is true for the slop posts about "How I vibe coded X", "How I deal with my anxiety about Y", and "Should I develop my own agentic workflow to do Z?" These aren't really interesting because there isn't anything I can take away from them -- doomscrolling X you might do better, because a little aphorism like "once your agent starts going in circles and you find yourself arguing with it, you should start a new conversation" is much more valuable than "evaluations" of agents where the author didn't run enough prompts to keep statistics, or a log of a very path-dependent experience they had. At least those celebrity managers developed a product that worked and managed to sell it; the average vibe coder thinks it is sufficient that it almost worked.
When I launched a side project a couple years ago, getting to the front page felt like a real achievement requiring weeks of iteration and genuine problem-solving. Now you can vibe-code something in a weekend and post it. The median Show HN quality has dropped, so people naturally vote less aggressively on the category as a whole.
The 37% stuck at 1 point stat is the real story. The solution is not changing HN mechanics. It is people being more selective about what they post - and the community being more willing to say "this is not ready" in the comments rather than just silently scrolling past.
Before, projects were more often carefully human crafted.
But nowadays we expect such projects to be "vibe coded" in a day. And so, we don't have the motivation to invest mental energy in something that we expect to be crap underneath and probably a nice show off without future.
Even if the result is not the best in the world, I think that what interests us is seeing the effort.
> The post quickly disappeared from Show HN's first page, amongst the rest of the vibecoded pulp.
The linked article[0] also talks at length about the impact of AI and vibe-coding on indie craftsmanship's longevity.
[0] - https://johan.hal.se/wrote/2026/02/03/the-sideprocalypse/
I took a look at the project and it was a 100k+ LoC vibe-coded repository. The project itself looked good, but it seemed quite excessive in terms of what it was solving. It made me think, I wonder if this exists because it is explicitly needed, or simply because it is so easy for it to exist?
It's fair to give the audience a choice to learn about an AI-created product or not.
If I used LLMs to generate a few functions would I be eligible for it? What constitutes "built this with no/ minimal AI"?
Maybe we should have a separate section for 80%+ vibe coded / agent developed.
As dang posted above, I think it's better to frame the problem as "influx of low quality posts" rather than framing policies having to do explicitly with AI. I'm not sure I even know what "AI" is anymore.
So in future everything’s gonna be “agentic”, (un)fortunately.
Every time I write about it, I feel like a doomsayer.
Anthropic admits that LLM use makes the brain lazy.
So just as we stopped remembering phone numbers after Google and mobile phones arrived, it will probably be the same with coding/programming.
One is where the human has a complete mental map of the product, and even if they use some code generating tools, they fully take responsibility for the related matters.
And there is another, emerging category, where developers don't have a full mental map because the code was created by an LLM, and no one actually understands how it works or what doesn't.
I believe these are two categories that are currently merged in one Show HN, and if in the first category I can be curious about the decisions people made and the solutions they chose, I don't give a flying fork about what an LLM generated.
If you have a 'fog of war' in your codebase, well, you don't own your software, and there's no need to show it as yours. In the same way that autocomplete, or a typewriter in the time of handwriting, was fine as long as the thinking was yours, an LLM shouldn't be a problem.
I work with a large number of programmers who don't use AI and don't have an accurate mental map for the codebases they work in...
I don't think AI will make these folks more destructive. If anything, it will improve their contributions because AI will be better at understanding the codebase than them.
Good programmers will use AI like a tool. Bad programmers will use AI in lieu of understanding what's going on. It's a win in both cases.
Are the tokens to write out design documentation and lots of comments too expensive or something? I’m trying to figure out how an LLM will even understand what they wrote when they come back to it, let alone a human.
You have to reify mental maps if you have LLM do significant amounts of coding, there really isn’t any other option here.
"Oh, this library just released a new major version? What a pity, I used to know v n deeply, but v n+1 has this nifty feature that I like"
It happened all the time even as a solo dev. In teams, it's the rule, not the exception.
Vibing is just a different obfuscation here.
When you upgrade a library, you made that decision — you know why, you know what it does for you, and you can evaluate the trade-offs before proceeding (unless you're a React developer).
That's not a fog of war, that's delegation.
When an LLM generates your core logic and you can't explain why it works, that's a fundamentally different situation. You're not delegating — you're outsourcing the understanding, and that makes the result not yours.
The benefit of libraries is that they're an abstraction and compartmentalization layer. You don't have to use REST calls to talk to AWS; you can use boto and move S3 files around in your code without cluttering it up.
Yeah, sometimes the abstraction breaks or fails, but generally that's rare unless the library really sucks, or you get a leftpad situation.
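For example, "moving" a file in S3 with boto3 is a couple of calls instead of hand-rolled signed REST requests (just a sketch; the bucket and key names are made up):

    import boto3

    s3 = boto3.client("s3")

    def move_object(src_bucket, dst_bucket, key):
        # S3 has no native "move": copy to the destination, then delete the original.
        s3.copy_object(
            Bucket=dst_bucket,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
        s3.delete_object(Bucket=src_bucket, Key=key)

    move_object("my-src-bucket", "my-dst-bucket", "reports/2026-01.csv")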
Having a mental map of your code doesn't mean you know everything, it means you understand how your code works, what it is responsible for, and how it interacts with or delegates to other things.
Part of being a good software engineer is managing complexity like that.
Case in point: aside from Tabbing furiously, I use the Ask feature to ask vague questions that would take my coworkers time they don't have.
Interestingly at least in Cursor, Intellisense seems to be dumbed down in favour of AI, so when I look at a commit, it typically has double digit percentage of "AI co-authorship", even though most of the time it's the result of using Tab and Intellisense would have given the same suggestion anyway.
This really bothers me, coming here asking for human feedback (basically: strangers spending time on their behalf) then dumping it into the slop generator pretending it is even slightly appreciated. It wouldn't even be that much more work to prompt the LLM to hide its tone (https://news.ycombinator.com/item?id=46393992#46396486) but even that is too much.
How many non-native English speakers are on HN? If it's more than 30%, why should they have to use a whole new language if they can just let an LLM do it in a natural sounding way.
Post both versions
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Some of it is "I wish things I think are cool got more upvotes". Fair enough, I've seen plenty of things I've found cool not get much attention. That's just the nature of the internet.
The other point is show and share HN stories growing in volume, which makes sense since it's now considerably easier to build things. I don't think that's a bad thing really, although curation makes it more difficult. Now that pure agentic coding has finally arrived IMO, creativity and what to build are significantly more important. They always were but technical ability was often rewarded much more heavily. I guess that sucks for technical people.
HN has a very different personality at weekends versus weekdays. I tend to find most of the stuff I think is cool or interesting gets attention at the weekends, and you'll see slightly more off the wall content and ideas being discussed, whereas the weekdays are notably more "serious business" in tone. Both, I think, have value.
So I wonder if there's maybe a strong element of picking your moment with Show HN posts in order to gain better visibility through the masses of other submissions.
Or maybe - but I think this goes against the culture a bit - Show HN could be its own category at the top. Or we could have particular days of the week/month where, perhaps by convention rather than enforcement, Show HN posts get more attention.
I'm not sure how workable these thoughts are but it's perhaps worth considering ways that Show HN could get a bit more of the spotlight without turning it into something that's endlessly gamed by purveyors of AI slop and other bottom-feeding content.
Chasing clout through these forums is ill-advised. I think people should post, sure. But don't read into the response too much. People don't really care. From my experience, even if you get an insanely good response, it's short-lived; people think it's cool. For me it never resulted in any conversions or continued use. It's cheap to upvote. I found the only way to build interest in your product is organic: 1-on-1 communication, real engagement in user forums, etc.
The difference now is that there is even less correlation between "good readme" and "thoughtful project".
I think that if your goal is to signal credentials/effort in 2026 (which is not everyone's goal), a better approach is to write about your motivations and process rather than the artefact itself - tell a story.
I've launched multiple side projects through Show HN over the years. The ones that got traction weren't better products. They hit the front page during a slow news hour and got enough early upvotes to survive the ranking curve. The ones that flopped were arguably more interesting but landed during a busy cycle. That's not a Show HN problem, that's a single-ranking-pool problem.
What would actually help is a separate ranking pool for Show HN with slower time decay. Let projects sit visible for longer so the community can actually try them before they drop off. pg's original vision was about making things people want. Hard to evaluate that in a 90-minute window.
C'est la vie and que sera. I'm sure the artistic industry is feeling the same. Self expression is the computation of input stimuli, emotional or technical, and transforming that into some output. If an infallible AI can replace all human action, would we still theoretically exist if we're no longer observing our own unique universes?
Maybe if people did Show HN for projects that are useful for something? Or at least fun?
There's a disease on HN related to the latest fad:
- (now) "AI" projects
- (now) X but done with "AI"
- (now) X but vibecoded
- (less now, a lot more in the recent past) X but done in Rust
- (none now, quite a few in a more distant past) X but done with blockchain
If the main quality of the project is one of the above, why would it attract interest?
The thing in show HN has to do something to raise interest. If not even the author/marketer thinks it does something, why would anyone look at it?
Trane (good post): https://news.ycombinator.com/item?id=31980069
Pictures Are For Babies (lame post): https://news.ycombinator.com/item?id=45290805
It's only that you can't claim any of the top-shelf prizes by vibe coding.
I see no reason to disrespect your work from what you say, but I also see no reason that AI would be much help to you after you had been learning for a year. If you are in the loop, shouldn't this be just about the moment when your growing abilities start to easily outpace the model's fixed abilities?
If you do, then it's not vibe coded.
For me, I have different levels of vibes:
Some testing/prototyping bash scripts 100% vibe coded. I have never actually read the code.
Sometimes early iterations, I am familiar with general architecture, but do not know exact file contents.
Sometimes I have gone through and practically rewritten a component from scratch, either because it was too convoluted or because it did not have the perfect abstraction I wanted, etc.
For me the third category is not vibe coded. The first 2 are tech debt in the making.
Good question, and by the same "token" when does it start?
Maybe if there's no possible way the creator could have written it by hand, perhaps due to an almost complete inability to code in any language, or something like that, that would be a reference point for "pure vibe". If the project is impressive, that's still nothing to be ashamed of. Especially if people can see the source code.
The creative people I see, of all kinds, are mostly no dummies, and it might be better than nothing for them to honestly rate their own submissions somewhere on the scale from pure vibe to pure manual?
With no stigma regardless, and let the upvotes or downvotes from there give an indication of how accurate the self-assessments are. Voting directly to Show HN could even have a different "currency" [0] to help regulate the fall of Show submissions, where a single upvote could mean something like infinitely more than zero.
I'm not disappointed by a project purely vibed by somebody like a visual artist, storyteller, or business enthusiast who has never written a line of code, as long as it is astoundingly impressive, in the league of the better projects, those I would like to take a look at.
I also see real accomplished coders guide their agents to arrive at things that wouldn't be as nice if they didn't have years of advanced manual ability beforehand.
Plus I think I'm in the vast majority and have no interest in "slop", in a way that aligns with so many kinds of people who are also turned off.
But so far, the best definition we have for slop is "we know it when we see it".
Oh, well that's all I've got, so far :)
[0] slop vs non-slop which is like pass/fail, or even a numerical rating could be on the "ballot".
https://news.ycombinator.com/item?id=47026263
I attribute it mostly to my own inability to pitch something that is aimed at many audiences at once and needs more UX polishing, and maybe a bit to timing.
It's tough when you're not looking to sell a product but rather to engage in a community, without going the Twitter/Bluesky route (which I may begrudgingly start using).
Maybe evals is a problem that people don't have yet because they can just build their custom thing or maybe it needs a "hey, you're building agent skills, here's the mental model" (e.g. https://alexhans.github.io/posts/series/evals/building-agent... ) and once they get to the evals part, we start to interact.
In any case, I still find quite a lot of cool things in Show HN, but the volume will definitely be a challenge going forward.
These days I guess we don't want a library? I can create an MIT-licensed repo with some charts you can point your AI agent to, if it helps?
The font is Gaegu.
The legend says SHNs are getting worse, but surely if the % of SHN posts with 1 point is going DOWN (as per the graph) then it's getting better? Either I am dense or the legends are the wrong way round, no?
Show HN: My Project - A description for my vibe coded project [3 weeks]
A lot of the good stuff I see on Show HN is projects that have been worked on for a long time. While I understand that vibe coding is a newer trend, I also know that vibe-coded projects are less likely to stand the test of time. With this, we don't have to worry about whether a project is AI-assisted or not, nor do we ban it. Instead we just incentivize longer-term projects. If the developer lies about how long they worked on the project, they will get reported and downvoted into oblivion.
I did 3 Show HNs in 2024 (outside the scope of this analysis): one with 306 points, another with 126 points, and the third with... 2. There's always been some kind of unpredictability in Show HN.
But I think the number one criterion for visibility is intelligibility: the project has to be easy to understand immediately, and if possible, easy to install/verify. IMHO, none of the three projects that the author complains didn't get through the noise qualifies on this criterion. #2 and #3 are super elaborate (and overly specific); #1 is the easiest to understand (Neohabit), but the home page is heavy with examples that go in all directions, and the GitHub has a million graphics that seem quite complex.
Simplify, and thou shalt be heard.
I'm wondering how much of it is portfolio building to keep a job, or find a new one, in a post-AI coding world.
Show HN: Clawntown – An Evolving Crustacean Island - https://news.ycombinator.com/item?id=47023255
Something rapid fire, fun, categorized maybe. Just a showcase to show off what you've done.
And the comments should start with the day/month the project was first launched.
Where the vibe coders with their slop cannons aren't present, though, is in things that require hard-won domain knowledge, i.e. stuff that requires you to actually create a new idea from an understanding of actual areas of need.
And that kind of thing probably isn't going to do well on Show HN, because your audience probably isn't on HN.
I was a skeptic last year, and now... not so much. I am having Claude build me a distributed system from scratch. I designed it last week as I was admitting to myself the huge failure of my big "I love to code" project that I failed to get traction on.
It took me a week to even give the design to claude because I was afraid of what it meant. I started it last night, and my jaw is dropped. There is a new skill being grown right now, and it... is something.
It certainly isn't nothing, and I for one am curious to simply see what people are making with vibes alone. It's fascinating... and horrifying.
But, I have learned to silence that part of me that is horrified since the world never cared for what I find beautiful (i.e. terrible languages like JavaScript)...
Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.
I think the same thing happened with Product Hunt.
You could argue it's dead in the sense of "dead internet theory". Yes, more projects than ever are being submitted, but they were not created by humans. Maybe they are being submitted by humans, for now.
I think that Show HN should be used sparingly. It feels like collective community abuse of it will lead to people filtering them out mentally, if not deliberately. They're very low signal these days.
Not everything has to revolve around HN.
Even before AI got so strong, some of the translations were fairly abnormal in their own way.
>The post body is supposed to be part of the human connection element!
I really think this is the best too :)
Maybe for the non-English speakers, or anyone really, if a project means a lot, have a number of people who are smart in different ways look over the text a number of times and help you edit beforehand.
To make sure it's what you the human want to really say at the time.
That would be the pg way.
The market is saturated with superficial solutions that look amazing at a glance but don't work at all in the medium or long term yet it doesn't matter at all; they don't even have an incentive to improve, ever, because the founder cashes out/exits before they need to worry about the stuff under the hood. Customer support is replaced by AI agents so nobody can feel the customer's pain anymore. Then investors find ways to financialize the product so that it doesn't depend on consumers anymore and can just tap into big contracts from big institutions... And yet they still spend big on ads, just to prevent new entrants from entering the market.
It reminds me of my time in crypto; the coins were sold as one thing, but all the big, well-known projects had barely half of the advertised features implemented... And 10 years in, most of those projects cashed out big time and still don't have the features promised. Many shut down completely. Doesn't matter. The whole thing existed and succeeded as a pure shell project.
Horrible industry. Do not participate.
It is a comeback of a post that stayed on the front page for a few hours a few years ago. Also, it is a useful, non-AI-slop, free product. So when it got no upvotes, it made me think about how I don't understand the HN community anymore the way I used to think I did.
Here is the post for the curious
Show HN: (the return of) Read The Count of Monte Cristo and others in your email
I linked one of my projects in a post and it got some really good responses. I did a bit more work on it and posted a Show HN thinking a few people might be interested but it got 0 traction.
I even made it a point to go on the new Show HN and checkout some peoples projects (how can I expect anyone to check mine out if I'm not doing the same) and it is hard to keep up.
I have another app that I've been working on for the past 3 months and whilst I want to do a Show HN to discuss how I built it, the moments I was banging my head on the wall working on a bug etc, I sadly wonder if there's any point.
Yet most of the time, if I spend five minutes a day on Show HN, I'll find something new that I find interesting. I wouldn't say that Show HN is drowning, but creativity does seem to be on life support. I'm sure that's somewhat a generative AI problem, but they're pretty good rubber ducks, and so I'm surprised by how acute the issue has gotten so quickly.
If this effect is noticeable on an obscure tech forum, one can only imagine the effect on popular source code forges, the internet at large, and ultimately on people. Who/what is using all this new software? What are the motivations of their authors? Is a human even involved in the creation anymore? The ramifications of all this are mind-boggling.
It feels like the age of creating some cool new software on your own to solve a problem you had, sharing it and finding other people who had the same problem, and eventually building a small community around it is coming to a close. The death of open source, basically.
Having said that, it used to feel part of an exclusive club to have the skills and motivation to put a finished project on HN. For me, posting a Show HN was a huge deal - usually done after years of development - remember that - when development of something worthwhile took years and was written entirely by hand?
I don't mind much though - I love that programming is being democratized and no longer only for the arcane wizards of the back room.
Programming has long been democratized. It’s been decades now where you could learn to program without spending a dollar on a university degree or even a bootcamp.
Programming knowledge has been freely available for a long time to those who wanted to learn.
In the future it will seem very strange that there was a time when people had to write every line of code manually. It will simply be accepted that the computers write computer programs for you, no one will think twice about it.
Somewhere right now there's a complete greenhorn vibecoder who's saying "hold my beer" ;)
While they proceed to learn everything they can about the code that the LLM generated for them.
For the next few years, and never come back to drink the rest of the beer :)
The obvious counterpoint is that AO3 is brilliant, which it is: give people a way to ontologize themselves and the result is amazing. Sure, AO3 has some sort of make-integer-go-up system, but it reveals the critical defect in “Show HN”: one pool for all submissions means the few that would before have been pulled out by us lifeguards are more likely to drown, unnoticed, amidst the throngs. HN’s submissions model only scales so far without AO3’s del.icio.us-inherited tagging model. Without it, tool-assisted creative output will increasingly overwhelm the few people willing to slog through an untagged Show HN pool. Certainly I’m one of them; at 20% by weight AI submissions per 12 hours in the new feed alone, heavily weighted in favor of show posts, my own eyes and this post’s graphs confirm that I am right to have stopped reading Show HN. I only have so much time in my day, sorry.
My interest in an HN post, whether in new or show or the front page, is directly proportional to how much effort the submitter invested in it. “Clippy, write me a program” is no more interesting than a standard generic rabble-rousing link to a GitHub issue, or a fifty-page essay about some economics point that could have been concisely conveyed in one. If the submitter has invested zero personal effort into whatever degree of designcraft, wordcraft, and codecraft their submission contains, then they have nothing to Show HN.
In the rare cases when I interact with a show post these days, I’ve found the submissions to be functionally equivalent to an AI prompt: “here’s my idea, here’s my solution, here’s my app” but lacking any of the passion that drives people to overcome obstacles at all. That’s an intended outcome of democratization, and it’s also why craft fairs and Saturday markets exercise editorial judgment over who gets a booth or not. It’s a bad look for the market to be filled with sellers who have a list of AI-generated memes and a button press, whose eyes only shine when you take out your wallet. Sure, some of the buttons might be cool, but that market sucks to visit.
Thus, the decline of Show HN. Not because of the democratization of knowledge, but because lowering the minimum effort threshold to create and post something to HN reveals a flaw-at-scale of the community-voting editorial model: it only works when the editorial community scales as rapidly as submissions, which it obviously has not.
Full-text search tried to deprecate centralized editorial effort in favor of language modeling, and turned out to be a disastrous failure after a couple of decades due to the inability of a computer to distinguish mediocre (or worse) from competent (or better). HN tried to deprecate centralized editorial effort and it has survived well enough for quite some time, but, *gestures at the Show HN trend graphs*, it isn't looking good either. Ironically, Reddit tried to implement centralized moderation on a per-community basis — and that worked extremely well for many years, until Reddit rediscovered why corporations of the 90s worked so hard to deprecate editorial staff, when their editors engaged in collective action against management (something any academic journal publisher is intimately familiar with!).
In that light, HN's core principle is democratizing editorial review — but now that our high-skill niche is no longer high-skill, the submissions are flooding in and the reviewers are not. Without violating the site's core precepts of submission equality and editorial democracy, I see no way that HN can reverse the trend shown by OP's data. The AO3 tagging model isn't acceptable, as it creates unequal distinctions between submissions and a level of site complexity that clashes with long-standing operator hostility towards ontologies. The Reddit and academic journal editorial models aren't acceptable, as they create unequal distinctions between users and editors that clash with long-standing operator hostility towards exercising editorial authority over the importance of submissions. And HN can't even limit Show HN submissions to long-standing or often-participating users, because that would prevent the exact discoveries of gems in the rough that Show used to be known for.
The best idea I’ve got is, like, “to post to Show HN, you must make several thoughtful comments on other Show HN posts”, which puts the burden of editorial review into the mod team’s existing bailiwick and training, but requires some extra backend code that adds anti-spam logic, for example “some of your comments must have been upvoted by users who have no preexisting interactions with your comments and continued participating on the site elsewhere after they upvoted you” to exclude the obvious attack vectors.
I wouldn't want to be in their shoes. A visionary founder left them a site whose continuing health turns out to hinge upon creating things being difficult, and then they got steamrolled by their own industry's advancements. Phew. Good luck, HN.
This is still possible. Vibe coders are just not interested in working on a piece of software for years till it's polished. It's a self-selection pattern. Like the vast amount of terrible VB6 apps when it came out. Or the state of JS until very recently.
Just saw one go from first commit to HN in 25m