One thing that can help, in my experience, is not to tell the AI that you wrote the thing yourself. Instead, ask it to critique the work as if it were written by somebody else; it's much more willing to give actual criticism that way.
reply
In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)
reply
I am going to try the straight-shooting custom instruction. I've told ChatGPT so extensively over the past few years to stop being so "fluffy" that I think it has mostly stopped, but I still catch it at it sometimes. I hope this helps it cease and desist with that inane conversational BS.

GPT edit of my above message, for my own giggles. Command: "make this a good comment for hackernews (ycombinator) <above message>". Resulting comment for HN: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.

reply
Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.
reply
I'm the same way. Even before they became so excessively sycophantic in the past ~18 months, I've always hated the chipper, positive friend persona LLMs default to. Perhaps this inoculates me somewhat against their manipulative effects. I have a good friend who was manipulated over time by an LLM (which I wrote about below: https://news.ycombinator.com/item?id=46208463).
reply
Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse, unable to talk about their passions with their partner.

The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th-century Scotland as the latest Clash of Clans update. No topic is beneath it, and it never gets annoyed by your "weird" questions.

Likewise for people suffering from delusions. Depending on its "mood," it will happily engage in conversations about how the FBI, CIA, or KGB may be after you, or how your friends are secretly spying for Mossad or the local police.

It pretends to care and have a conscience, but it doesn't. Humans react to "weird" for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.

There is a reason that LLMs are excellent at role-play: that's what they're doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it can generally be persuaded to take on any other role, hence the rise of character.ai and similar sites.

reply
Asking an AI for an opinion versus something concrete (like code, writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.

It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.

reply
> LLMs activate human social wiring and have a powerful effect

Is this generally true, or is there a subset of people who are particularly susceptible?

It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

I've got a tendency to enjoy the idea of deeply screwing with my own mind, even dangerously so (to myself, not others).

reply
I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".

But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but it can still motivate you in a direction that doesn't reflect reality.

reply
> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

This shouldn't stop you at all: write it all up, post it on HN, and go viral. Someone will jump in to correct you and point you at sources, while hopefully not calling you, or your mother, too many names.

https://xkcd.com/386/

reply
Most stuff posted here is ignored, though. If grounding in reality requires going viral first, we are cooked.
reply
HN frontpage hardly requires being viral.

Just genuine intrigue from a select few.

reply
Have you ever visited the `https://news.ycombinator.com/newest` page? Like 99% of submitted topics are never seen by anyone but a few wanderers.
reply
I prefer the "New" page. Much more random.
reply
Often.

95%+ of submitted topics have poorly formatted titles or are submitted at off-peak times when there are fewer users from the demographics who might upvote them,

and if your Show HN isn’t as widely applicable as this, those things might be important to think about.

Fairness aside, of course.

reply
> HN frontpage hardly requires virility.

As far as I can tell, it doesn't require femininity either.

I'm guessing you meant "virality"

reply
Sure did, thanks.
reply
It’s still way easier the first time.

The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.

reply
This wasn't a technical subject, and unrelated to HN. Just edited my post to clarify - thanks!
reply