> "I ran a $searchengine search and here is the most relevant result."

Except it's "...and here is the first result it gave me, I didn't bother looking further".

reply
> 2. People behave as if they believe AI results are authoritative, which they are not

Web search has the same issue. If you don't validate it, you wind up with the same problem.

reply
> people are demonstrating a new behavior that is disrupting social norms

The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.

reply
The issue isn't people posting AI-generated comments on the Internet as a whole, it's whether it should be allowed in this space. Part of the reason I come to HN is that the quality of comments is pretty good relative to other places online. I think it's a legitimate question whether AI comments would help or hinder discussion here.
reply
That's a pretty good sign that the HN user base as a rule finds most enjoyment in writing high quality content for themselves. All questions are legitimate, but in this circumstance what reason is there to believe that they would find even more enjoyment from reducing the quality?

It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.

Much more likely is seeing the user base shift over time towards users that don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!

reply
Would you object to high quality AI comments?
reply
that's an oxymoron
reply
Has it? More than one forum has expected that commentary should contribute to the discussion. Reddit is the most prominent example, where originally upvotes were intended to be used for comments that contributed to the discussion. It's not the first or only example, however.

Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.

reply
> Reddit is the most prominent example

Reddit was originally just one guy with 100s of accounts. The epitome of writing for oneself.

> upvotes were intended to be used for comments that contributed to the discussion.

Intent is established by he who acts, not he who observes. It fundamentally cannot be any other way. The intent of an upvote is down to whatever he who pressed the button intended. That was the case from the conception of said feature, and will always remain the case. Attempting to project what you might have intended, had you been the one who acted, onto another party is illogical.

> The contents of those comments should not be purely self-satisfying, though.

Unless, perhaps, you are receiving a commission with detailed requirements, there is really no way to know what someone else will find satisfying. All you can do is write for yourself. If someone else also finds enjoyment in what you created, wonderful, but if not, who cares? That's their problem. And if you did receive a commission to write for another, well, you'd expect payment. Who among us is being paid to write comments?

reply
I think it's closer to the "glasshole" trend, where social pressure actually worked to make people feel less comfortable about using it publicly. This is an entirely vibes-based judgement, but presenting unaltered AI speech within your own feels more imposing and authoritative (as wagging around a potentially-on camera did then). This being the norm on other platforms has degraded my willingness to engage with potentially infinite and meaningless streams of bloviation rather than the (usually) concise and engaging writings of humans.
reply
Totally agree if the AI or search results are a (relatively) direct answer to the question.

But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'

reply
> ($AI) suggests

Same logic still applies. If I gave a shit what it "thought" or suggested, I'd prompt the $AI in question, not HN users.

That said, I'm not against a monthly (or whatever regular periodic interval that the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit. Like interesting prompts, or interesting results or cataloguing changes over time etc etc.

It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts just feels like low effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the "Who's Hiring/Trying to get hired" monthly threads, but that value/interest drops precipitously if each comment/thread within them were each their own individual submission.

reply
> If I wanted to run a web search, I would have done so

While true, many times people don't do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.

reply
This begs the question. You are assuming they wanted an LLM-generated response but were too lazy to generate one. Isn't it more likely that the reason they didn't use an LLM is that they didn't want an LLM response, so giving them one is... sort of clueless?

If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?

reply
It's more like someone asks if there are McDonald's in San Francisco, and then someone else searches "mcdonald's san francisco" on Google Maps and then replies with the result. It would have been faster for the person to just type their question elsewhere and get the result back immediately instead of someone else doing it for them.
reply
Right. If someone asks "What does ChatGPT think about ...", I'd fully agree that they're being lazy. But if that's _not_ what they ask, we shouldn't assume that that's what they meant.

We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but we should, as a rule, give them the benefit of the doubt instead of just assuming that we're smarter than them.

reply
Such open ended questions are not the kind of questions I'm referring to.
reply
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.

This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).

reply
I've seen this happen too. People will comment and say in the comment that they can't remember something when they could easily have found that information again with ChatGPT or Google.
reply
> If they just instead opened up chatgpt they could have instantly gotten their answer.

Great, now we've wasted time and material resources on a possibly wrong, hallucinated answer. What part of this is beneficial to anyone?

reply
Counterpoint:

Frankly, it's a skill thing.

You know how some people couldn't find the backs of their own hands even if they googled them?

And then there are people (e.g. experienced Wikipedians doing research) who have google-fu and can find accurate information about the weirdest things in the amount of time it takes you to tie your shoes and get your hat on.

Now watch how someone like THAT uses ChatGPT (or some better LLM). It's very different from just prompting with a question. Often it involves delegating search tasks to the LLM (and opening 5 Google tabs alongside). And they get really interesting results!

reply
Well put. There are two sides of the coin: the lazy questioner who expects others to do the work researching what they would not, and the lazy/indulgent answerer who basically LMGTFY's it.

Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.

reply
I haven't thought about LMGTFY since Stack Overflow. Usually, though, I see responses with people thrusting forth AI answers that provide more reasoning; back then LMGTFY was more about rote conventions (e.g. "how do you split a string on ','"), whereas AI is used more for "what are ways that solar power will change grid dynamics".
reply
> 2. People behave as if they believe AI results are authoritative, which they are not

I'm not so sure they actually believe the results are authoritative; I think they're being lazy and hoping you will believe it.

reply
This is a bit of a gravity-vs.-acceleration issue, in that the end result is indistinguishable.
reply
Agreed on the similar-but-worse comparison to the laziest possible web searches of yesteryear.

To introspect a bit, I think the rote-regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more dangerous and disgusting.

reply
There’s also a whole “gosh golly look at me using the latest fad!” demonstration aspect to this. People status signaling that they’re “in”. Thus the Bluetooth earpiece comment.

It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.

reply
I think doing your research using a search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. "ChatGPT says that…"

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add for the discussion and if a user does that for me I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.

reply