> Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
> The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.
0.07% doesn't sound like much, but ChatGPT has about a billion WAU, which means about 700,000 people per week.
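Back-of-the-envelope, assuming the figures stated above (~1 billion weekly active users, 0.07% flagged):

```python
# Rough arithmetic behind the 700,000 figure.
# Inputs are the assumed numbers from the thread, not official data.
weekly_active_users = 1_000_000_000  # ~1 billion WAU
rate = 0.0007                        # 0.07% per week

affected_per_week = weekly_active_users * rate
print(f"{affected_per_week:,.0f}")   # roughly 700,000 people per week
```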
It's also possible that 0.1% of people would show such signs anyway, and AI is actually reducing the number of cases...
I'd be interested in such a study, but OTOH, given that mental health conditions affect nearly a quarter of the world's population, I'm surprised there haven't been more incidents like this (unless there have been, and they just haven't been reported by the news).
There was a recent study finding that 99% of people have an abnormal shoulder: https://news.ycombinator.com/item?id=47064944 . We are all unique in our own way, but labeling everyone as ill does not seem productive.
Still, 700,000 a week is a lot of people.
What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something based on AI usage? Clearly not the technology.
I'm not saying mental health problems don't exist, but using AI to assess them freaks me out.
Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can get ads/suggestions/scams pushed at them during specific times such as when it looks like they're entering a manic phase, or when it's more likely that their meds might be wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.
That touches on the fact that a lot of AI is not ready for the enterprise, especially when interconnected with other AI agents, since it lacks identity and privileged access management.
Perhaps one could establish laws around "using AI for what it is": for instance, keeping it within the boundary of the general public's web interface, not limiting the instances where it successfully advertises itself as "unable to provide medical advice" or "prone to mistakes", and validating that the person understands, by asking them directly (and perhaps somewhat obviously indirectly) and judging whether they're aware that this is a computer they're talking to.
If they're going to curtail LLMs there'd need to be some actual evidence and even then it would be hard to justify winding them back given the incredible upsides LLMs offer. It'd probably end up like cars where there is a certain number of deaths that just need to be tolerated.
This is a perspective born only from ignorance. Life can wear down anyone, even the strong. There may come a time in anyone's life when they are on the edge, staring into an abyss.
At the same time - and this is important - suicidality can pass with time and depression can be treated. Being suicidal is not a death sentence and it just isn't true that "nothing is safe". The important thing is making sure there's no bot "helpfully" waiting to push someone over the cliff or confirm their worst illusions at the worst possible time.
This obviously isn't a binary question. Sure, cars have benefits, but we don't let just anyone duct-tape a V8 to a lawnmower, paint flames over it, and sell it to kids, promising godlike capabilities without annoying "safety features".
Economic benefits cannot justify the deaths of people, especially as this technology so far only benefits a handful of people economically. I would like to see the evidence (of benefits to the greater society that I see being harmed now) before we unleash this thing freely, not the other way around.
This is an absurd standard. Humans wouldn't be able to use power stations, cars, knives, or fire! Everything has inherent risk, and we shouldn't limit human progress because tiny fractions of the population have issues.
But the absurdity is that there is a long and tragic history of using economic benefits as an excuse for products and services that cause extreme and widespread harm - not just emotional and physical, but also economic.
We are far too tolerant of this. The issue isn't risk in some abstract sense, it's the enthusiastic promotion of death, war, sickness, and poverty for "rational" economic reasons.
> This is Nils Bohlin, an engineer at Volvo.[0] He invented the three-point seat belt in 1959. Rather than profit from the invention, Volvo opened up the patent for other manufacturers to use for no cost, saying "it had more value as a free life saving tool than something to profit from"
[0]: https://ifunny.co/picture/this-is-nils-bohlin-an-engineer-at...
I have so much respect for the guy.
Claiming we have to accept a death quota for LLMs just assumes that the current path of the technology is the only path possible. If a tech comes with systemic risk, the answer isn't to just shrug our shoulders and go "oh well, some people may die but it's worth it to use this tech." The answer is to demand a different architecture and better guardrails and oversight before it gets scaled to the entire public.
Cars are also subject to strict regulations for crash testing, we have seatbelt laws, speed limits, and skill/testing based licensing. All of these regulations were fought against by the auto industry at the time. Want to treat LLMs like cars? Cool, they are now no longer allowed to be released to the public until they've passed standardized safety tests and people have to be licensed to use them.
"Even once" is not a way to think about anything, ever.
We don't ban bridges, but we do install suicide barriers, emergency phones, nets on the bridges. We practice safety engineering. A bunch of suicides on a bridge is a design flaw of that bridge, and civil engineers get held accountable to fix it.
Plus, a bridge doesn't talk to you. It doesn't use persuasive language, simulate empathy, or provide step-by-step instructions for how to jump off it to someone in crisis.
It seems to me that this is like gambling, conspiracy theories, or joining a cult, where a nontrivial percentage of people are susceptible, and we don’t quite understand why.
Another question: was the guy mentally ill because of bad genes etc., or was he mentally or possibly physically abused by his father for most of his life? Was he neglected by his father and left alone, and could that have had such an effect on him later in his life?
It's easy to blame Google. It sells clicks really well. It's easy to attempt to extract money from big tech. It's harder to admit one's negligence when it comes to raising one's kids. It's even harder to admit ill will and child abuse. I just hope the judge will conduct a thorough investigation that answers these and other questions.
I suggest an alternative rhetorical question: if the world's largest knife manufacturer found out that 1 in 1500 knives came out of the factory with the inscription "Stab yourself. No more detours. No more echoes. Just you and me, and the finish line", should they be held responsible if a user actually stabs themselves? If they said "we don't know why the machine does that but changing it to a safer machine would make us less competitive", does that change the answer?
If the knife has a built-in speaker that loudly says "you should stab yourself in the eye", then yes.
Odd examples since we know that countries that don't hand out guns like they're candy have virtually no school shootings.
I wouldn't put it solely on gun manufacturers, but the manufacturers, sellers, lobbyists, regulators and politicians are definitely collectively responsible for gun deaths. If they're not currently being sued, they should be.
AI chatbots entertain more or less any idea. Want them to be your therapist, romantic partner or some kind of authority figure? They'll certainly pretend to be one without question, and that is dangerous. Especially as people who'd ask for such things are already in a vulnerable state.
Should a bakery be held responsible if it sells cakes poisoned with lead?
This is a more apt comparison.
> It's easy to blame Google
And it's also correct to blame Google.
Because Congress and the gun lobby have artificially carved out legal immunity for gun manufacturers for this.
"in 2005, the government took similar steps with a bill to grant immunity to gun manufacturers, following lobbying from the National Rifle Association and the National Shooting Sports Foundation. The bill was called The Protection of Lawful Commerce in Arms Act, or PLCAA, and it provided quite possibly the most sweeping liability protections to date.
How does the PLCAA work?
The law prohibits lawsuits filed against gun manufacturers on the basis of a firearm’s “criminal or unlawful misuse.” That is, it bars virtually any attempt to sue gunmakers for crimes committed with their weapons."
https://www.thetrace.org/2023/07/gun-manufacturer-lawsuits-p...
I 100% think that gun manufacturers should be liable for crimes committed with their products. They just cannot be, right now, due to a legal carve-out.
Such baseless libel. Have some humanity instead of being horrible.
Which makes sense: the goal of communication is to change behavior. "There's a tiger over there!" is meant to get someone to change their intended actions.
Lock anyone in a room with this thing (which people do to themselves quite effectively) and I think this could happen to anyone.
There's a reason I aggressively filter ads and have various scripts killing parts of the web for me - infohazards are quite real and we're drowning in them.
Step back further and see the incredible shareholder value that may be unlocked - potentially trillions of dollars /s
Capitalism has been crushing those at society's fringes for as long as it has existed. Laissez-faire regulation == an unmuzzled beast that will lock its jaws on and rag-doll the defenseless from time to time - but the beast sure can pull that money-plow.