Angelic non-determinism seems difficult to use when deciding whether an optimisation is valid. If I understand correctly, it is essentially the as-if rule, but applied here to something that potentially needs whole-program analysis. Is that an accurate understanding?
It sounds to me like both of these proposals will allow strictly fewer optimisations than strict provenance in Rust. In particular, Rust lets you apply a closure/lambda to map a pointer's address while keeping its provenance. That avoids exposing the provenance as you add and remove tag bits, which should at least in theory let LLVM optimise better. (But this keeps the value as a pointer, and merely holding a dangling pointer you never access is fine in Rust; probably not in C?)
I'm not sure why I'm surprised, actually: Rust can be a more sensible language in places thanks to hindsight. We see this in its ability to use LLVM noalias (basically restrict) in far more places thanks to its different aliasing model, while still avoiding the error-prone TBAA of C and C++. And it doesn't need a memory model built on typed objects; memory is just bytes, which get materialised into a value of some type on access.