This has been used for centuries. It is not a new invention.

Hundreds of years ago, it was not unusual to publish an encrypted solution to a mathematical problem in order to establish priority without disclosing the method that was used.

Of course, at that time very simple encryption methods were used, for instance an anagram of the solution was published (i.e. encryption by letter transposition).
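The anagram scheme above is essentially a (weak) commitment: publish a scrambled form now, reveal the plaintext later, and anyone can check the letters match. A minimal Python sketch (the function names `commit` and `verify` are my own, not from any historical source):

```python
def commit(solution: str) -> str:
    """Publish the letters of the solution in sorted order.
    Any anagram would do; sorting gives a canonical one."""
    letters = [c for c in solution.lower() if c.isalpha()]
    return "".join(sorted(letters))

def verify(solution: str, commitment: str) -> bool:
    """Check that a later-revealed solution matches the published anagram."""
    return commit(solution) == commitment

# Hooke famously published the anagram "ceiiinosssttuv" for his
# law of elasticity, "ut tensio, sic vis":
published = commit("ut tensio sic vis")
print(published)  # ceiiinosssttuv
print(verify("ut tensio sic vis", published))  # True
```

Note that this is binding only against an honest discoverer: many sentences share the same multiset of letters, which is one reason a modern version would publish a cryptographic hash of the solution instead.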

reply
Is it? Nobody else can really build on their work.
reply
AIUI, the intent of this publication is not to further research but to make clear to everyone that we need to move to post-quantum cryptography ASAP.
reply
But the algorithm still isn't practical on existing quantum computers, or on any that are likely to exist soon, so there's no reason not to publish it in full.
reply
If only AI safety research had a mechanism this clear. "We have proof that building the machine will kill everybody, so get to work making a provably safe version."
reply
"AI safety" is essentially incoherent. It's like trying to build an all-purpose chemistry lab that can't produce explosives.
reply
Neat, an ontological argument against AI safety. Similar argument:

"God doesn't exist" is essentially incoherent. God is the perfect being, and if he didn't exist, he wouldn't be perfect.

I think the logical mistake is obvious.

reply
Except that you have the logic backwards. It's an argument that something ("safe" general-purpose AI) can't exist, not that it must.

People want AI to be able to do every good thing but no bad thing, which is impossible for two reasons. First, false positives and false negatives trade off against each other, so a general-purpose AI capable of approximating all the good things will be biased heavily toward being able to do things in general, and therefore toward being able to do many things that are bad. Second, "good" and "bad" aren't things anybody can agree on, so some people will demand that it do X while others demand that it not do X (e.g. "help the rebels win the war"). Someone is inherently going to be unsatisfied, and it can't sensibly be regarded as everyone working toward a common goal.

reply
You've made a great argument for calling a general halt to AI development, but I'm not sure that was your intent.
reply
Could be one of the intents, but the main intent is reputation building.
reply
That may be the intent, but it is very anti-science.
reply