Hundreds of years ago, it was not unusual to publish an encrypted solution to some mathematical problem in order to establish priority without disclosing the method that was used.
Of course, the encryption methods of the time were very simple; for instance, an anagram of the solution might be published (i.e. encryption by letter transposition).
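A toy sketch of how such an anagram commitment works (the function names and the choice of alphabetical order as the canonical transposition are mine, just for illustration):

```python
def publish_anagram(solution: str) -> str:
    """Commit to a solution by publishing its letters in alphabetical order
    (a transposition so weak it only establishes priority, not secrecy)."""
    return "".join(sorted(c.lower() for c in solution if c.isalpha()))

def check_claim(published: str, revealed_solution: str) -> bool:
    """Anyone can later verify that a revealed solution matches the published anagram."""
    return publish_anagram(revealed_solution) == published

# Hooke announced his law of elasticity in 1676 as the anagram "ceiiinosssttuv",
# later revealed as "ut tensio, sic vis" ("as the extension, so the force").
print(publish_anagram("ut tensio, sic vis"))                # ceiiinosssttuv
print(check_claim("ceiiinosssttuv", "ut tensio, sic vis"))  # True
print(check_claim("ceiiinosssttuv", "something else"))      # False
```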
"God doesn't exist" is essentially incoherent. God is the perfect being, and if he didn't exist, he wouldn't be perfect.
I think the logical mistake is obvious.
People want AI to be able to do every good thing but no bad thing, which is impossible for two reasons. First, false positives and false negatives trade off against each other, so a general-purpose AI capable of doing anything close to all the good things will necessarily be biased heavily toward being able to do things in general, and therefore toward being able to do many bad things as well. Second, "good" and "bad" aren't things anybody can agree on: some people will demand that the AI do X while others demand that it refuse X (e.g. "help the rebels win the war"), which means someone is inherently going to be unsatisfied, and it can't sensibly be framed as everyone working toward a common goal.
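A toy numerical sketch of the first point (the score distributions and thresholds below are entirely invented for illustration): if a filter scores requests and refuses anything above a threshold, and the scores of harmless and harmful requests overlap, then moving the threshold only trades one kind of error for the other.

```python
import random

random.seed(0)

# Invented risk scores for harmless vs. harmful requests; the populations overlap,
# so no single threshold separates them cleanly.
harmless = [random.gauss(0.35, 0.15) for _ in range(10_000)]
harmful  = [random.gauss(0.65, 0.15) for _ in range(10_000)]

def error_rates(threshold: float) -> tuple[float, float]:
    refused_good = sum(s >= threshold for s in harmless) / len(harmless)  # false positives
    allowed_bad  = sum(s < threshold for s in harmful) / len(harmful)     # false negatives
    return refused_good, allowed_bad

for t in (0.35, 0.50, 0.65, 0.80):
    fp, fn = error_rates(t)
    print(f"threshold {t:.2f}: refuses {fp:6.1%} of good requests, allows {fn:6.1%} of bad ones")
```

Lowering the threshold blocks more of the bad requests but also refuses more of the good ones, and vice versa; the only way to drive both errors to zero is a scorer that never overlaps, which a fully general system doesn't have.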