My emulators here have roughly the same performance as the same code compiled as a native executable (i.e. within about 5%) - this is mostly integer bit-twiddling code. Unless you hand-optimize your code beyond what portable C provides (like manually tuned SIMD intrinsics), WASM code pretty much runs at native speed these days.
Also, don't forget that WASM is designed to replace JavaScript, so it must interoperate with it to smooth the transition. Rosetta and Prism also work to smooth the transition from x86 -> ARM, and much of the difficult work they do involves translating between the calling conventions of the different architectures, and making calls work across binaries compiled both for and not for ARM - not the instruction translation itself. WebAssembly is designed not to have that limitation: it's much more closely aligned with JS. That's why it wouldn't make sense to use a subset of x86 or similar - it would simply create more work trying to get it to interface with JavaScript.
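To illustrate how tight that alignment is: a WASM module's exports show up as plain JS functions, with no calling-convention shims at all. Here's a sketch using a tiny hand-assembled module exporting `add` (the byte layout follows the WebAssembly binary format; runs as-is in Node or a browser console):

```javascript
// Minimal hand-assembled WASM module exporting add(a, b) -> a + b.
// Bytes follow the WebAssembly binary format: magic, version, then sections.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // "\0asm" magic
  0x01, 0x00, 0x00, 0x00,                               // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 40)); // 42
```

JS numbers convert to and from WASM's i32 automatically at the boundary - that's the interop a raw x86-subset bytecode would have to reinvent.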
This is basically what Native Client (NaCl) was, and it was really hard to work with! It has since been abandoned, and WASM was developed instead.
Not disagreeing with you, but here’s an article from Akamai about how using WASM can minimize cold startup time for serverless functions.
https://www.akamai.com/blog/developers/build-serverless-func...
> Friends, as I am sure is abundantly clear, this is a troll post :)