The main thing that put me off about the comment was its outright dismissal of other opinions. That's rarely a recipe for a productive conversation.
>However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity.
I don't think it's unique. It's simply a catalyst. In good times with a system that looks out for its people, AI could do great things and accelerate productivity. It could even create jobs. None of that is out of reach, in theory.
But part of understanding the negative sentiment is understanding that we aren't in that high-trust society with systems working for the citizen. So any productivity gains will only be used in ways that deepen that distrust. The marketing of AI these past few years confirms this. So why would anyone trust it this time?
Rampant layoffs, vague hand-waves of "UBI will help" despite no structures in place for it, more than a dozen high-profile kerfuffles that can only be described as grifts that made millions anyway, and persistent lobbying to make it illegal to regulate AI. These aren't the actions of people who have the best interests of the public in mind. They're the actions of modern-day robber barons.
>I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.
I don't have a hardline stance on how organizations handle AI. But from my end, what I hear is that AI has mostly acted as a stressor on contributors trying to weed out the flood of low-quality submissions. AI or not (again, AI is a catalyst, not the root cause), that's a problem for what's ultimately a volunteer position requiring highly specialized skills.
If the choice comes down to banning AI submissions, restricting submissions altogether through a different system, or burning out talent trying to review all this slop, I don't think most orgs will choose the last.