No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically offering me sympathy, and when they are, I genuinely appreciate it. When they aren't, I figure that person is probably being mistreated by some corporate monstrosity or is just having a shit day, and I give them the benefit of the doubt.
> At least the computer isn’t being forced to lie to me.
Isn’t it though?
> We need a law that forces management to be regularly exposed to their own customer service.
Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally, somehow, involved in the chain of communication. What a beautiful world, where we'll be burning absurd amounts of carbon in some sort of antisocial AI arms race to maximize shareholder profit.
The exceptions are generally when people are scared, and sadly some people are scared all the time.
I haven't had the pleasure of one of these phone systems yet. I think I'd still be more irritated by a fake apology from a human, because then the company is abusing two people instead of one.
At any rate, I didn't mean for it to be some sort of contest; it's more of a lament that modern customer service is a garbage fire in many ways, and I dream of forcing the sociopaths who design these systems to suffer their own handiwork.