It's also possible that 0.1% of people have them and AI is actually reducing the number of cases...
I'd be interested in such a study, but OTOH, with mental illness present in nearly a quarter of the world's population, I'm surprised there haven't been more incidents like this (unless there have been, and they just haven't been reported by the news).
There was a recent study finding that 99% of people have an abnormal shoulder: https://news.ycombinator.com/item?id=47064944 . We are all unique in our own way, but labeling everyone as ill does not seem productive.
Still, a lot
What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something based on AI usage? Clearly not the technology.
I'm not saying mental health problems don't exist, but using AI to assess them freaks me out.
Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can have ads/suggestions/scams pushed at them at specific times, such as when it looks like they're entering a manic phase, or when their meds are likely wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.