It is a bit insulting, but I get that these issues are important and people feel like the stakes are sky-high: job loss, misallocation of resources, enshittification, increased social stratification, abrogation of personal responsibility, runaway corporate irresponsibility, amplification of bad actors, and just maybe that `p(doom)` is way higher than AI-optimists are willing to consider. Especially as AI makes advances into warfare, justice, and surveillance.
Even if you think AI is great, it's easy to acknowledge that all it may take is zealotry and the rot within politics to turn it into a disaster. You're absolutely right to identify that there are some eerie similarities to the "guns don't kill people, people kill people" line of thinking.
There IS a lot to grapple with. However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity. I also disagree that AI in any form is our salvation and going to elevate humanity to unfathomable heights (or anything close to that).
But, to bring it back to this specific topic, I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.
I do agree that, by and large, the theoretical upsides of accessibility are almost certainly completely overshadowed by the obvious downsides of AI. At least for now, anyway. Accessibility is a single instance of the general argument that "of course there are major upsides to using AI", and there's a good chance the future only gets brighter.
My point, essentially, is that I think this is (yet another) area in life where you can't solve the problem by saying "don't do it", and enforcing it is cost-prohibitive. Saying "no AI!" isn't going to stop PR spam. It's not going to stop slop code. What is it going to stop (see edit)? "Bad" people won't care, and "good" people (who use/depend-on AI) will contribute less.
Thus I think we need to focus on developing robust systems around integrating AI. Certainly I'd love to see people adopt responsible disclosure policies as a starting point.
--
[edit] -- To answer some of my own question, there are obvious legal concerns that frequently come up. I have my opinions, but as in many legal matters, especially around IP, the water is murky, opinions are strongly held at both extremes, and all too often having to fight a legal battle *at all* is immediately a loss regardless of outcome.
You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society. I'd say that that is downplaying and misrepresenting the issue. You even go so far as to say
>Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.
These aren't balanced arguments taking both sides into consideration. It's a decision that your mindset is the only right one and anyone else is opposing progress.
No, literally, he didn't.
At least in the US, society was well on its way to collapse before LLMs came out. "Fake news" is a great example of this.
>It's a decision that your mindset is the only right one and anyone else is opposing progress.
So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?
IMO you can blame this on ML and the ability to microtarget[1] constituencies with propaganda that's been optimized, workshopped, focus-grouped, etc. to death.
Proto-AI got us there, LLMs are an accelerator in the same direction.
But as modern society stands, it is simply accelerating the existing low-trust dynamics and collapsing jobs (even if it can't actually do those jobs yet), because that's what was already happening. But hey, asset prices also accelerated upward. For now.
>So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?
Religion is a very interesting factor. I have many thoughts on it, but for now I'll just say that a good 95% of the religiously devout utterly fail at following what their relevant scriptures say to do. We can extrapolate the meaning of that in so many ways from there.