AI users should fear verbal abuse and shame.
(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)
But as you pointed out, not everything carries legal liability. Socially, though, they should face worse consequences. Choosing to let an AI speak for you is malicious carelessness.
This is quite ironic, since the entire issue here is that the AI attempted to abuse and shame people.
A wise person would just ignore such PRs and not engage. But then again, a wise person might not do work for rich, giant institutions for free, I mean, maintain OSS plotting libraries.
This is a very interesting conversation, actually. I think LLMs satisfy the same underlying demand that OSS satisfies: software that costs nothing. If you think about that deeply, there are all sorts of interesting ways you could spend less time maintaining libraries for people who don't pay you for them.