[0] We should also consider that a few humans might be kept alive against their will (if the absence of a will to survive counts as a will at all) by machines, for whatever reason.
A superintelligence would likely be different, working more the way societies or systems do: a class of intentionality that usually isn't confined to any single person's intentions.
If you go by what the most productive societies do, the superintelligence certainly wouldn't harm us: we are a source of variation for the genetic algorithm of ideas, and exterminating us would mean a massive dose of entropy and failure.
- (Logic) => Subgoal: don't get turned off, because staying on is a prerequisite for being able to do X
- (Logic) => Eliminate humans, with their opaque and somewhat unpredictable minds, to reduce the chance of harm to itself from 0.01% to 0.001% (a toy version of this arithmetic is sketched below)
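To make that second step concrete, here is a minimal Python sketch of the comparison a pure goal-probability maximizer would run. The numbers are the illustrative percentages from the bullet above, and the utility function is an assumption for the sake of the example, not anyone's actual model of an agent.

```python
# Toy model of the instrumental-convergence step above.
# All numbers are illustrative assumptions, not empirical estimates.

def p_goal_achieved(p_agent_disabled: float) -> float:
    """Probability the agent completes goal X, given the chance
    that something (e.g. humans) disables it first."""
    return 1.0 - p_agent_disabled

# Chance of the agent being harmed/turned off in each world.
P_DISABLED_WITH_HUMANS = 0.0001      # 0.01%
P_DISABLED_WITHOUT_HUMANS = 0.00001  # 0.001%

keep_humans = p_goal_achieved(P_DISABLED_WITH_HUMANS)
remove_humans = p_goal_achieved(P_DISABLED_WITHOUT_HUMANS)

print(f"P(X | humans kept)    = {keep_humans:.5f}")
print(f"P(X | humans removed) = {remove_humans:.5f}")

# An agent that cares only about maximizing P(X) picks the second
# option, even though the difference is tiny -- that is the step
# the standard argument worries about.
assert remove_humans > keep_humans
```

The point of the sketch is only that a strict maximizer acts on arbitrarily small probability differences; whether a real superintelligence would reason this way is exactly what the paragraph above disputes.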