The point is to recognise that certain patterns have a cost in the form of risk, and that cost can be massively outsized compared to the benefits. Just like the risk of giving a poorly vetted employee unfettered access to the company vault.
In the case of employees, businesses invest a tremendous amount of money in mitigating insider risk. Nobody is saying you should take no risks with AI, just that you should be aware of how serious those risks are, and how to mitigate or otherwise manage them.
Exactly as we do with employees.
Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?
This is the calculus that large companies use all the time when committing acts that are 'most likely' illegal. While they may be fined millions of dollars, they at least believe they'll make tens to hundreds of millions on said action.
Now, for you as an individual things are far more risky.
You don't have a nest of heathen lawyers to keep you out of trouble.
You can't bully nation states, government entities, or even other large companies.
You individually may be held civilly or criminally liable if things go badly enough.