I would be very careful doing this
reply
You always have to be careful with LLMs, but to be fair, I felt Claude was a surprisingly good therapist, or at least a good place to start if you want to unpack yourself. I have been to three short sessions with human therapists in my life, and I only felt some kind of genuine self-improvement and progress with Claude.
reply
And how do you draw the line between feeling progress and actually making progress?
reply
Counter-point: I often put the same question to people who see human therapists. I do not get strong responses.
reply
The same way you distinguish between feeling like having a problem and actually having a problem.
reply
This is needlessly flippant and not really the same thing. Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination.
reply
> Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination

Can anyone describe how to determine how a (professional, human) therapist is "a reliable agent" to make such a determination?

reply
If you want to call into question the entire field of behavioral health and the training it involves, that is fine, but if that’s how you feel then this entire discussion is really about something different and I can’t bridge the gap here.
reply
I didn’t claim that an LLM is that, and I fully agree that it is not. I’m saying that one is inherently one’s own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it. You stop going when you feel you don’t have it anymore. And OP is very likely assessing their progress in the same way. Assuming the parent was asking a genuine question, I wasn’t being flippant.
reply
> I’m saying that one is inherently one’s own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it

That is true for certain types of therapy and clinical care. It is not always the case, and often isn’t. Plenty of diagnoses and care protocols are not a matter of opinion, not based on you “feeling there’s an issue”, and not something you decide on your own is no longer an issue.

reply
You can't be careful at all doing this; it is like smoking a cigarette in a dynamite factory.

Using LLMs for therapy is so deeply dystopian and disgusting, people need human empathy for therapy. LLMs do not emit empathy.

Complete disaster waiting to happen for that individual.

reply
My experience is that it tries to look at your situation in an objective way and to help you analyse your thoughts and actions. It comes across as very empathetic, though, so there is a danger if you are easily persuaded into seeing it as a friend.
reply
> in an objective way

One of the great myths about models in countless fields and industries. LLMs are absolutely in no way objective.

Now if you want to say it’s an “outside opinion”, that’s valid. But do not kid yourself into thinking it is somehow empirical or objective.

reply
It doesn't try to do anything. It doesn't work like that. It samples the most likely next tokens given what it saw in its training data.
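To make the point concrete, next-token selection can be sketched in a few lines of Python. The vocabulary and scores here are made up for illustration, not taken from any real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw model scores into a probability distribution over tokens.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and model scores for the next token.
vocab = ["yes", "no", "maybe"]
logits = [2.0, 0.5, 1.0]

probs = softmax(logits)

# Greedy decoding: always pick the single most likely token.
greedy = vocab[max(range(len(vocab)), key=lambda i: probs[i])]

# Sampling: pick a token in proportion to its probability, so output varies.
sampled = random.choices(vocab, weights=probs, k=1)[0]
```

There's no intent anywhere in that loop, just a probability distribution and a pick from it, repeated once per token.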
reply
Hmmmm, I didn't know that... so your point is that a machine is not human? Look, I know it doesn't try, just like a sorting algo does not try to sort, an article does not try to convey an opinion, and a law does not try to make society more organized.
reply
That analysis is so reductive that it is almost worthless. Technically true, but very unhelpful in terms of actually using an LLM.

It is a first principle, though, so it helps to “stir the context window’s pot” by having it pull in research and other material from the web that will help ground it, and not just tell you exactly what you prompt it to say.

reply
They are amazing tools, but when people try to give them agency, someone has to explain it in simple terms.
reply
Claude has lots of empathy. The issue is the opposite: it isn't very good at challenging you, and it's not capable of independently verifying that you're not bullshitting it or lying about your own situation.

But it's better than talking to yourself or an abuser!

reply
It's about the same as talking to yourself; LLMs simply agree with anything you say unless it is directly harmful. Definitely agree about talking to an abuser, though.

Sometimes people indeed just need validation, and it helps them a lot; in that case LLMs can work. Alternatively, I assume some people just put the whole situation into words, and that alone helps.

But if someone needs something else, they can be straight-up dangerous.

reply
> It's about the same as talking to yourself, LLMs simply agree with anything you say unless it is directly harmful.

They have world knowledge and are capable of explaining things and doing web searches. That's enough to help. I mean, sometimes people just need answers to questions.

reply
> It's about the same as talking to yourself

In one way it's potentially worse than talking to yourself. Some part of you might recognize that you need to talk to someone other than yourself; an LLM might make you feel like you've done that, while reinforcing whatever you think rather than breaking you out of patterns.

Also, LLMs can have more resources and do some "creative" enabling of a person stuck in a loop, so if you are thinking dangerous things but lack the wherewithal to put them into action, an LLM could make you more dangerous (to yourself or to others).

reply
Using an LLM for therapy is like using an iPad as an all-purpose child attention pacifier. Sure, it’s convenient. Sure, there’s no immediate harm. Why a stressed parent would be attracted to the idea is obvious… and of course it’s a terrible idea.
reply
Don’t call them therapy sessions. They kind of look like it, but ultimately these are smoke-blowing machines, which is very far from what a therapist would do.
reply
Six decades later, and we're still trying to explain the same things to people[1]:

> Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

[1]: https://en.wikipedia.org/wiki/ELIZA

reply