It's a little hypocritical, which often enough ends in realism, i.e. "okay, we clearly can't fight their copyright infringements because they are too powerful and too rich, but at least we can use the good side of it".
Nothing, by the way, forces all of this to happen THAT fast besides capitalism. We could slow down; we could do it better, or more responsibly.
What LLMs are NOT is intelligent in the same way a human is, which is to say they are not "AGI". They may be loosely AGI-equivalent for certain tasks, software development being the poster child. LLMs have no equivalent of "judgement", and they lie ("hallucinate") with impunity when they don't know the answer. Even with coding, they'll often do the wrong thing, such as writing tests that don't actually test anything.
It seems likely that LLMs will be one component of a truly conscious AI (AGI+), in the same way our subconscious facility to form sentences is part of our intelligence. We'll see how quickly the other pieces arrive, if ever.
Now, I don't believe AI will ever amount to enough to be a critical threat to human life, beyond, you know, the immense amounts of wasted energy they propose to convert into something more useful, like a market crash, or heat and noise, or both.
Not sure how you can matter-of-factly call someone opposed to any of that "anti-civilisational".