One of the great benefits of AI tools is that they allow anyone to build stuff... even if they have no ideas or knowledge.

One of the great drawbacks of AI tools is that they allow anyone to build stuff... even if they have no ideas or knowledge.

It used to be that Show HN was a filter: in order to show stuff, you had to have done work. And if you did the work, you probably thought about the problem; at the very least, the problem was real enough to make solving it worthwhile.

Now there's no such filter function, so projects are built whether or not they're good ideas, by people who don't know very much.

reply
People who got "enabled" by AI to produce stuff just need to learn to keep their "target audience of one" projects to themselves. Right now it feels like those fresh parents who show every person they meet the latest photos / videos of their baby, thinking everybody will find them super cute and interesting.
reply
Yeah, I think it's sort of an etiquette thing we haven't arrived at yet.

It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.

reply
Tell that to the LinkedIn crowd. They keep doing that; they don't credit it, but I assume at least 60% of other people can tell.
reply
At times, it seems like the only thing that has changed is that the dinguses don't bother crediting ChatGPT.
reply
> went into every thread and proudly announced what ChatGPT had to say

That is what Show HN has become. Nobody cares what code Claude shat out in response to a random person's prompt. If I cared, I would be prompting Claude myself.

reply
Some folks definitely give off a "How do you do, fellow coders?" vibe
reply
deleted
reply
Like with Instagram / digital photography: that's not what you actually get, but you will see a lot of revealing body parts.
reply
More like the fresh parents who start schooling everyone else on how to parent…
reply
The other element here is that the vibecoder hasn't done the interesting thing; they've pulled in other people's interesting things.

Let's see, how to put this in a less inflammatory way...

(I just did this.) I'm sitting here in a hotel, and I wondered if I could do some fancy video processing on my laptop's camera feed to turn it into a wildlife cam and capture the birds that keep flying by.

I ask Codex to whip something up. I iterate a few times, ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.

In a short period of time, I have an app that processes video, does all of the detection, applies the correct models, and works.

It's impressive _to me_, but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy Python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.

I get to slap this all together.
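
To make that concrete: the core of the thing boils down to something like the sketch below. This isn't the actual code, just the shape of it, assuming OpenCV and a webcam, with the pretrained detection model (the part someone else trained) left out:

  # Rough sketch of the wildlife-cam idea: grab frames from the laptop camera,
  # flag motion by frame differencing, and save the interesting frames.
  # The real version hands candidate frames to a pretrained detection network.
  import time
  import cv2

  cap = cv2.VideoCapture(0)   # laptop camera
  prev = None
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
      if prev is not None:
          # Pixels that changed a lot since the last frame; a bird flying past spikes this.
          delta = cv2.threshold(cv2.absdiff(prev, gray), 25, 255, cv2.THRESH_BINARY)[1]
          if cv2.countNonZero(delta) > 5000:   # crude sensitivity knob
              cv2.imwrite(f"capture_{int(time.time())}.jpg", frame)
      prev = gray
  cap.release()

Every call that does anything hard in there is someone else's work; my part is the glue.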

In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.

In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"

reply
To me, part of what makes it feel hollow is that if we were to ask you about any of those layers, and why they were chosen or how they worked, you probably would stumble through an answer.

And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.

reply
Idk.

I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]

I knew I could do better, so I made a version that is about 15 kB and solves a fundamental issue with WebGL context limits while being significantly faster.

AI helped with a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still are not more common (the special bundler settings for workers break often).

By the way, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.

reply
I have yet to see any of these that wouldn't have been far better off self-hosting an existing open source app. This habit of having an LLM either clone existing software (or, even worse, cobble together a vague facsimile of it) and then claiming it as your own is just sort of sad.
reply
I actually came to this realization recently. I'm part of a modding community for a game, and we are seeing an influx of vibe-coded mods. The one distinguishing feature of these is that they are entirely parasitic. They only take; they do not contribute.

In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.

But vibe coders never do this. They just unilaterally take existing mods' source code, feed it into their LLM of choice, and generate a derivative work. They don't contribute anything back, because they don't even try to understand what they are doing.

Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.

reply
That's the essence of the corporations behind these commercial products as well: leech off all the work of others and then sell a product that regurgitates that work without attribution or any contribution back.
reply
[dead]
reply
To be fair, one probably needs at least one idea in order to build stuff even with AI. A prompt like "write a cool computer program and tell me what it does" seems unlikely to produce something that even the author of that prompt would deem worthy of showing to others.
reply
Sometimes 'gatekeeping' is a good thing.
reply
It often is. The concept of "gatekeeping" becoming well known and something people blindly rail against was a huge mistake. Not everything is for everyone, and "gatekeeping" is usually just maintaining standards.
reply
Ideally the standard would just be someone's genuine interest in a project or a hobby. In the past, making the effort to write code was often sufficient proof of that.

AI agent coding has introduced to writing software the same sort of dynamic that brands brought to social media.

reply
I think the word you're looking for is curation. Which people who don't make it past the jury might call gatekeeping.
reply
I'm not sure what distinction you're trying to make, but it seems like you might be distinguishing between keeping out substandard work versus keeping out the submitters.

In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.

Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.

reply
Agreed, and we're going to see this everywhere AI can touch. Our filter functions for books, video, music, etc. are all now broken. And worst of all, that breaking coincides with an avalanche of slop, making detection even harder.

There is a real disconnect between what the visible level of effort implies you've done and what you actually had to do.

It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.

reply
My prediction is that reputation will be increasingly important, certain credentials and institutions will have tremendous value and influence. Normal people will have a hard time breaking out of their community, and success will look like acquiring the right credentials to appear in the trusted places.
reply
That's been the trajectory for at least the last 100 years, an endless procession of certifications. Just like you can no longer get a decent-paying blue collar job without at least an HS diploma or equivalent, the days of working in tech without a university education are drying up and have been doing so for a while now.
reply
This isn't new; it's been happening for decades.
reply
Not new, no. But there will be more of it.
reply
Maybe my expensive university degree was worth it after all
reply
The recent past was a nice respite from a strict caste system, but I guess we’re going back.
reply
I think the recent past was a respite in very specific contexts like software maybe. Others, like most blue collar jobs, were always more of an apprentice system. And, still others, like many branches of engineering, largely required degrees.
reply
I have a sci-fi series I've followed religiously for probably 10 years now. It's called the 'Undying Mercenaries' series. The author is prolific, like he's been putting out a book in this series every 6 months since 2011. I'm sure he has used ghost writers in the past, but the books were always generally a good time.

Last year, though, I purchased the next book in the series, and I am 99% sure it was AI-generated. None of the characters behaved consistently, and there were a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.

I was so shocked and disappointed that I had paid good money for some AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.

reply
Yes, there are a few books I have not bought after reading their free chapters and getting suspicious.

Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.

reply
When I do YouTube searches, I tend to limit the search to videos prior to 2022 for this reason.
reply
People will build AI 'quality detectors' to sort and filter the slop. The problem, of course, is that they won't work very well and will drown out all the human channels that are trying to curate various genres. I'm not optimistic that everything won't turn into a grey sludge of similar, mediocre material everywhere.
reply
Is there a way to have a social media platform with hand-written letters, sent with ravens? That's AI-proof... for a while at least!
reply
Exactly, and we will have those who will "game" the "detectors" like they already "game" the social media "algorithms" :\
reply
"One of the great benefits of AI tools, is they allow people to build stuff, even if they have no ideas or knowledge."

Wait, what? That's a great benefit?

reply
Sure, there are many examples (I have a few personal ones as well) where I'm building small tools and helpers for myself that I just wouldn't have done before because it would have taken me half a day. Or non-technical people at work who now build macros and scripts for Google Sheets to automate little things they would never have done before.
reply
deleted
reply
For those who want to have the stuff built, yes, it absolutely is.
reply
I am being slightly sarcastic
reply
deleted
reply
> so projects are built whether or not they're good ideas

Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.

So as execution becomes easier, it's the ideas that matter more…

reply
The appearance of execution is much easier. Quality execution (producing something anybody wants to use) might be easier, or maybe not.
reply
This is something that I was thinking about today. We're at the point where anyone can vibe code a product that "appears" to work. There's going to be a glut of garbage.

It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.

Nowadays, you can get the large thing done while the internal codebase is a mess, held together with AI duct tape.

In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.

How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one that wasn't might actually recover your backups when it fails. The other 9? Whoops, never tested that codepath. Customers won't know until the edge cases happen.

It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.

reply
One of the base44 ads is hilarious about this:

https://www.youtube.com/watch?v=kLdaIxDM-_Y

reply
The filter used to be effort. You had to care enough to spend weeks on something, which meant you probably understood the problem deeply. Now that filter is gone and we get a flood of "I prompted this in 20 minutes" posts where the author can't answer a single follow-up about their own code. The interesting Show HNs still exist, they're just buried under noise.
reply
“The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.”

— Tom Cargill, Bell Labs

Someday I'm going to get a crystal ball for statistics like this. Getting bored with a project was always a thing (after the first push, I don't encounter something like 80% of my coding side projects again until I'm cleaning up), but I'll bet the abandonment rate for side projects has skyrocketed. I think a lot of what we're seeing are projects that were easy enough to reach MVP before hitting the final 90% of coding time, which AI is a lot less useful for.

reply
> I’ll bet the abandonment rate for side projects has skyrocketed

My experience is the opposite. It's so much easier to have an LLM grind through the last-mile annoyances (e.g., installing and debugging compilation bullshit on a specific Raspberry Pi with unmaintained third-party library versions).

I can focus on the parts I love, including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.

reply
> including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.

That’s not really the part I’m talking about. My gut says that if tests are a blocker for weekend projects, people just don’t bother writing them. I certainly wouldn’t imagine them taking much longer to code than the core functionality.

In my experience, which seems to resonate with a lot of people, AI quickly stands up really useful boilerplate and very convenient purpose-built scaffolding… but it is a lot less useful for helping you solve actual problems in a way that makes sense to the people who have those problems. Especially if you're using a less-mainstream language or some other component.

reply
You both have very good points here, but once I'm finished with both of the 90% programming times, and everything finally seems to work with no more bugs (and it's actually true), then for my heavy-industry work I look forward to spending 10x as much effort on testing as on coding.
reply
Oh yeah, especially if the domain is complex: trying to envision how it can fail is as fun a puzzle as trying to make it correct.
reply
I think that's a fear I have about AI for programming (and I use these tools). So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces. Though we can build commercial products quickly and easily, no one writes code for difficult problem spaces, so no one builds up expertise in important subdomains for a generation.

Then what will AI be trained on in, let's say, 20-30 years? Old code? Its own AI-developed code for vibe-coded projects? How will AI be able to do new things well if it was trained on what people wrote previously and no one writes novel code themselves? It seems to me that AI is pretty dependent on having a corpus of human-made code, so, for example, I am not sure it will be able to learn how to write very highly optimized code for some ISA in the future.
reply
> Then what will AI be trained on in, let's say, 20-30 years? Old code? Its own AI-developed code for vibe-coded projects?

I've seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven't seen an answer from leading figures in the AI coding space. What's the general answer or point of view on this?

reply
The general answer is what they’re already doing: ignoring the facts and riding the wave.
reply
Is it hard to imagine that things will just stay the same for 20-30 years or longer? Here is an example of the B programming language from 1969, over 50 years ago:

  printn(n,b) {
   extrn putchar;
   auto a;

   if(a=n/b) /* assignment, not test for equality */
      printn(a, b); /* recursive */
   putchar(n%b + '0');
  }

You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.
reply
I observed this myself at least 10 years ago. I was reflecting on what I had done in the approximately 30 years I had been programming at that time, and how little had fundamentally changed. We still programmed by sitting at a keyboard, entering text on a screen, running a compiler, etc. Some languages and methodologies had their moments in the sun and then faded, and the internet made sharing code and accessing documentation and examples much easier, but the experience of programming had changed little since the 1980s.
reply
>So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.

I don't think we need to wait a generation, either. It was probably part of their personality already, but a group of developers at my job seems to have just given up on thinking hard / thinking through difficult problems; it's insane to witness.

reply
Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding, skill development, and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.

Long-term, this will do enormous damage to society and our species.

The solution is to declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.

LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison

We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.

There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...

Join us. The war has begun.

reply
This will happen regardless. LLMs are already ingesting their own output. At the point where AI output becomes the majority of internet content, interesting things will happen. Presumably the AI companies will put lots of effort into finding good training data, and ironically that will probably be easier for code than anything else, since there are compilers and linters to lean on.
reply
I've thought about this and wondered if this current moment is actually peak AI usefulness: the signal-to-noise ratio is high, but once training data becomes polluted with its own slop, things could start getting worse, not better.
reply
I was wondering if anyone was doing this after reading about LLMs scraping every single commit on git repos.

Nice. I hope you are generating realistic commits and they truly cannot distinguish poison from food.

reply
Refresh this link 20 times to examine the poison: https://rnsaffn.com/poison2/

The cost of detecting/filtering the poison is many orders of magnitude higher than the cost of generating it.

reply
AI will be trained on the code it wrote, our feedback on that code, and the final clean architecture(?) working(?) result after that feedback.
reply
I see a lot of projects repeated: screen capture tools, LLM wrappers, blogs/newsletters, marketing tools for Reddit/Twitter, social media account managers. These things have been around for a while, so it is really easy for an LLM to spit them out for someone who does not know how to code.
reply
It's because of the common belief that you should build copies of whatever SaaS makes decent money. What they don't mention is that people need a very good reason to go for your bare-bones MVP instead of a well-established solution.
reply
Agreed. I'm over here working on Quake 2 mods and reverse engineering Offworld Trading Company so I can finish an open source server for it using AI.

The thing is, I worked manually on both of these a lot before I even touched Claude, so I was basically able to hit the wishlist items that I don't have time to deal with these days but have already figured out the logic for.

reply
As someone who posts blogs and projects for my own enjoyment, with no AI for code generation and a hand-edited blog, I still have no idea how to signal to people that I actually know what I'm talking about. Every step of the process could've been done by an LLM, albeit worse, so I don't have a way of marking my projects as something different. I'm considering putting a "No LLMs used in this project" tag at the start, but that feels a little tacky.
reply
Communicating that you know what you are talking about and that you're different is a lot of work. I think being visibly "anti-AI" makes you look as much of an NPC as someone who "vibe coded XYZ." It takes care, consistency, and most of all showing people something they've never seen before. It also helps to get in the habit of doing in-person demos; if you want to win hackathons, it really helps to (1) be good at giving demos on stage and (2) have a sense of what it takes to make something that is good to demo.

I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without AI. I'm never going to say "I did this with AI". For instance, there is this HR monitor demo:

https://gen5.info/demo/biofeedback/

which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV) but, most of all, needs to be able to run with pre-recorded data so that people who don't have a BTLE HR monitor can see how cool it is.

Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as a foxographer:

https://mastodon.social/@UP8/116086491667959840

I am used to marketing funnels having 5% effectiveness, and it blows my mind that at least 75% of the tokens I give out get scanned, and that is with the old conventional cards that all have the same back side. The number + suit tokens are particularly good as a "self-working demo" because they are easy to talk about: when somebody flags me down because they noticed my hood, I can show them a few cards that are all different and let them choose one, or say "Look, you got the 9 of Bees!"

reply
"This repository contains only [Organic] and [Hand-Made] ingredients."

It seems silly, but I know I'm more likely to review an implementation if I can learn more about the author's state of mind from their style.

reply
I had a similar thought way back when. It goes back to what is important to the person reviewing it, be it the style, the form, or just whether it works for their use case. In the case of organic food, I did not even know I was living a healthy lifestyle until I came to the US. But now organic is just another label, played up by marketing people just like anything else.

As I may have noted before, humans are the problem.

reply
I added the following at the top of the blog post that I wrote yesterday: "All words in this blog post were written by a human being."

I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.

As for knowing what I'm talking about: many of my blog posts are about stuff that I just learned, so I have many disclaimers that the reader should take everything with a grain of salt. :-) That said, I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.

reply
You're not actually at risk of being labeled an LLM user until someone comes along and makes that claim about your work. So my advice is not to fight a preemptive battle over your tone, and to adjust when/if that day comes.

Side note: I'd think putting Anubis in front of your work would go a long way toward signaling that, but YMMV.

reply
> I still have no idea how to signal to people that I actually know what I’m talking about.

Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.

reply
I think the problem is that LLMs are good at producing plausible-looking text, and discerning whether a random post is good or bad requires effort. And it's really bad when the signal-to-noise ratio is low, because slop is easier to make.
reply
My favorite part about people promoting (and probably vote-stuffing) their closed-source, non-free apps that clone other apps is when people share the superior free alternatives in the comments.
reply
> a tool is a tool

> author (pilot?) hasn't generally thought too much about the problem space

I’ve stopped saying that “AI is just a tool” to justify/defend its use precisely because of this loss of thought you highlight. I now believe the appropriate analogy is “AI is delegation”.

So talking to the vibe coder who used AI is like talking to a high-level manager rather than to the engineer, as you would for human-written code.

reply
I predict that now that coding has become a commodity, smart young people drawn to technical problem-solving will start choosing other career paths over programming. I just don't know which ones, since AI seems to be commoditizing every form of engineering work.
reply
When I was growing up (millennial), it seemed to me that the default for smart young people drawn to technical problem-solving was something like aerospace; software or hardware was more or less a fun hobby, like it was for Steve Wozniak. Nobody cared whether (or which of) these were commodities, which is what happens when you actually enjoy something.

These days I do see a lot of people choosing software for the money. Notably, many of them are bootcamp graduates and arguably made a pivot later in life, as opposed to other careers (such as medicine) which get chosen early. Nothing wrong with that (for many it has a good ROI), but I don’t think this changed anything about people with technical hobbies.

When you’re young, you tend not to choose the path the rest of your life will take based on income. What your parents want for you is a different matter…

reply
I think that there are a few distinct use cases for Show HN that lead to conflicting visions:

* some people want to show off a fun project/toy/product that they built because it's a business they're trying to start and they want to get marketing

* some people want to show off a fun project/toy/product that they built because it involves some cool tech under the hood and they want to talk shop

* some people want to show off a fun project/toy/product that they built because it's a fun thing and they just want some people to have fun

reply
I have a project that I'm hoping to launch on Show HN in the next few days, which was built entirely with the help of AI agents.

It's taken me about a month; I'm currently at ~500 commits. I've been obsessed with this problem for ~6 weeks and have made an enormous amount of progress, but admittedly I'm not an expert in the domain.

Being intentionally vague, because I don't want to tip my hand until it's ready. The problem is related to an existing open source tool in a particular scientific niche which flatly does not work on an important modern platform. My project, an open source repo, brings this important legacy tool to this modern platform and also offers a highly engaging visual demo that is of general interest, even to a layperson not interested in programming or this particular scientific niche.

I genuinely believe I have something valuable to offer to this niche scientific community, but also as a general interest and curiosity to HN for the programming aspects (I put a lot of thought into the architecture) as well as the visual aspects (I put a lot of thought into the design and aesthetics).

Do you have any advice on how to present this work in a compelling way to people who understandably feel as burned out on AI slop as you do?

reply
we need a Vibe HN
reply
Prompter News
reply
Hacker Slop
reply
Slop is a 4 letter word around here apparently
reply
[dead]
reply
This, but unironically. Every submission should have an "AI?" checkbox to indicate whether the submitted content is about AI or made by AI, because I'm just absolutely fed up with 2/3 of the HN front page being slop or meta-slop.

I'm not an anti-AI Luddite, but for god's sake, talk about (i.e., submit) something else!

reply
listen_to_what_the_man_said.stm
reply
That may be something.

Having too many subs could get out of hand, but sometimes you end up with so much paperwork generated so fast that it needs its own whole dedicated drawer in your filing cabinet ;)

reply
Sorry about that, didn't mean to hurt anybody's feelings :(

It's still early, and it's easy to underestimate the number of visitors who would absolutely love to have the main page even more covered in absolute pure vibe than it has been recently.

I would like to hear opinions as to why the non-human touch is preferred; that could add something that not many are putting into words.

Hopefully it's not a case of the lights being on but nobody's home :(

reply
Sorry you got downvoted. People downvote for the most inane reasons here. Don't take it personally!
reply
I had a light bulb come on reading your comment. Yes! When I read Show HN posts that are clearly missing key information, it makes me care less because the author didn’t care to learn the space they’d like to play in.
reply
One thing about vibe coding is that unless you are an expert in what you have vibe coded, you have no idea if it actually works properly, and it probably doesn't.
reply
Worse yet, if you're not an expert (with autodidacts potentially qualifying), your ideas won't be original anyway.

You'll be inventing a lot of novel circular apparatuses with a pivot and circumferential rubber absorbers for transportation, and it'll take people serious effort to convince you it's just a wheel.

reply
In most domains, working on a project for a few years will make you an expert.
reply
Working? Maybe. Prompting? Unlikely.
reply
And in some other domains it takes a few decades to get to the top technically, not just a few years.
reply
I shared a well-thought-out vibecoded app on Show HN last month. It took a few hours to get a POC and two weeks to fully develop it into a product that met my requirements. Nobody cared.
reply
You’re part of the problem.
reply
The problem is that we're all stuck in a cutthroat game of musical chairs in an eroding industry, with almost all organic platforms locked down and billion-dollar orgs trying their damnedest to funnel you into pay2play.
reply
Sqfty?

I mean it's a real problem, but it's also a solved problem, and also not a problem that comes up a lot unless you're doing the sort of engineering where you're using a CAD tool already.

I don't doubt it's useful, and it seems pretty well crafted from what little I tried of it, but it doesn't really invite much discussion.

reply
> I think the vibe coded show HN projects are overall pretty boring.

Agreed. r/ProgrammingLanguages had to deal with this recently in the same way HN has to; people were submitting these obviously vibecoded languages there that barely did anything, just a deluge of "make me a language that does X", where it doesn't actually do X or embody any of the properties that were prompted.

One thing that was pointed out was "More often than not the author also doesn't engage with the community at all, instead they just share their project across a wide range of subreddits." I think HN is another destination for those kinds of AI slop projects -- I'm sure you could find every banned language posted on that forum posted here.

Their solution was to write a new rule and just ban them outright. Things have been going much better since.

https://www.reddit.com/r/ProgrammingLanguages/comments/1pf9j...

reply
> the vibe coded show HN projects are overall pretty boring

Concur. Perhaps a dedicated or alternative itch.io-like area named "Slop HN:..."

reply
[dead]
reply