OK, maybe someone will build a bioweapon that does that for real. :P
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385
We live in a free society. AI should be democratized like any other technology.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument; it's what I work full time on preventing: https://naobservatory.org. The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
It's not enough for a handful of people to predict something. You have to get the entire nation on board to defend against it.
When you allow only governments and big tech access to powerful AI, you create a much more dangerous and unstable world.
Centralizing power is dangerous and leads to power struggles and instability.