Except it's "...and here is the first result it gave me, I didn't bother looking further".
Web search has the same issue. If you don't validate it, you wind up with the same problem.
The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.
It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.
Much more likely is seeing the user base shift over time towards users who don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!
Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.
Reddit was originally just one guy with 100s of accounts. The epitome of writing for oneself.
> upvotes were intended to be used for comments that contributed to the discussion.
Intent is established by he who acts, not he who observes. It fundamentally cannot be any other way. The intent of an upvote is down to whatever he who pressed the button intended. That was the case from the conception of said feature, and will always remain the case. Attempting to project what you might have intended, had you been the one who acted, onto another party is illogical.
> The contents of those comments should not be purely self-satisfying, though.
Unless, perhaps, you are receiving a commission with detailed requirements, there is really no way to know what someone else will find satisfying. All you can do is write for yourself. If someone else also finds enjoyment in what you created, wonderful, but if not, who cares? That's their problem. And if you did receive a commission to write for another, well, you'd expect payment. Who among us is being paid to write comments?
But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'
Same logic still applies. If I gave a shit what it "thought" or suggested, I'd prompt the $AI in question, not HN users.
That said, I'm not against a monthly (or whatever regular interval the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit. Like interesting prompts, interesting results, cataloguing changes over time, etc.
It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts it just feels like low-effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the "Who's Hiring/Trying to get hired" monthly threads, but that value/interest drops precipitously if each comment within them were its own individual submission.
While true, many times people don't want to do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.
If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?
We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but we should, as a rule, give them the benefit of the doubt instead of just assuming that we're smarter than them.
This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).
Great, now we've wasted time and material resources for a possibly wrong and hallucinated answer. What part of this is beneficial to anyone?
Frankly, it's a skill thing.
You know how some people can hardly find the backs of their own hands even if they google them?
And then there are people (e.g. experienced Wikipedians doing research) who have google-fu and can find accurate information about the weirdest things in the amount of time it takes you to tie your shoes and get your hat on.
Now watch how someone like THAT uses ChatGPT (or some better LLM). It's very different from just prompting with a question. Often it involves delegating search tasks to the LLM (and opening 5 Google tabs alongside). And they get really interesting results!
Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.
I'm not so sure they actually believe the results are authoritative, I think they're being lazy and hoping you will believe it.
To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more dangerous and disgusting.
It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.
> 1. If I wanted to run a web search, I would have done so
Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user runs that query for me, I think it has some value.
> 2. People behave as if they believe AI results are authoritative, which they are not
AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.
Any strict rule/ban would be very premature and shortsighted at this point.