Except real life is not a program, and the input data is flawed (human and machine errors). The acceptance tests are just predictions based on, again, fallible analyses of flawed historical data. That's many layers of compounding errors.
Exactly. The AI just does the math based on the goals you've given it. An AI would have happily nuked Hiroshima and Nagasaki because it would have estimated that doing so would save the lives of X number of US soldiers in a land invasion. Given the goal of achieving "unconditional surrender now," it wouldn't have considered that a land invasion wasn't imminently necessary, and therefore that killing 200,000 civilians wasn't the right moral choice.
Nuclear weapons are a war deterrent, not an actual weapon, unless used against a country that doesn't have nuclear weapons itself. Using nuclear weapons against another nuclear power pretty much guarantees both sides will be wiped out, so it's most definitely nowhere near logical.