JS required the time and effort because it's a clown-car nightmare of a design from top to bottom. How many person-hours and CPU cycles were spent on papering over and fixing things that never should have existed in the first place?
This doesn't even count as a sunk cost fallacy, because the cost is still being paid by everyone who can't even get upgraded to the current "better" version of everything.
The sooner JavaScript falls out of favor the better.
By the same token, were Java and Flash more dangerous than JS? On paper, no - they were all the same, just three virtual machines. But having all three in a browser made things fun back in the early 2000s.
WASM today has no access to anything that isn't explicitly handed to it from JS. That means the only real places to exploit are bugs in the JIT and runtime, an attack surface that exists for JavaScript as well.
Even if WASM gets bindings to the DOM, its surface area is still smaller, because JavaScript has access to a bunch more APIs beyond the DOM - WebUSB, for example.
And even if WASM reaches feature parity with JavaScript, it will only be as dangerous as JavaScript itself. The main actual risk for WASM is memory-safety bugs in the language the module was compiled from (C++, for example).
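The capability model described above can be shown directly: a WASM module declares its imports up front, and instantiation fails if the host withholds them. Below is a minimal sketch (runs in Node or a browser) using a tiny hand-assembled module; the bytes and the names "e", "f", and "g" are made up purely for illustration. The module imports one function and exports one function whose body just calls the import - it can reach nothing else.

```javascript
// A tiny hand-assembled WASM module: imports one function ("e"."f"),
// exports one function ("g") whose body just calls that import.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,                   // type section: one type, () -> ()
  0x02, 0x07, 0x01, 0x01, 0x65, 0x01, 0x66, 0x00, 0x00, // import section: func "e"."f"
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x05, 0x01, 0x01, 0x67, 0x00, 0x01,             // export section: "g" = func index 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b        // code section: body of "g" is `call 0`
]);

const mod = new WebAssembly.Module(bytes);

// The module's entire view of the outside world is this import object.
let called = false;
const instance = new WebAssembly.Instance(mod, {
  e: { f: () => { called = true; } }
});
instance.exports.g();
console.log(called); // true: "g" could only reach what we handed it

// Withhold the import and the module can't even be instantiated.
let threw = false;
try {
  new WebAssembly.Instance(mod, {});
} catch (err) {
  threw = true;
}
console.log(threw); // true
```

In a browser the module would normally be fetched and compiled with `WebAssembly.instantiateStreaming`, but the rule is the same: DOM access, fetch, WebUSB, everything else has to arrive through that import object, typically wired up by JS glue code.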
So why were Java and Flash (and ActiveX, and NaCl) dangerous in the browser?
The answer is quite simple: those VMs had dangerous components in them. Both Java and Flash had the ability to reach out and scribble on a random DLL in the operating system, or to upload a random file from the user's folder. Java relied HEAVILY on the SecurityManager to stop you from doing that; IDK what Flash used. JavaScript has no such capability (well, at least it didn't when Flash and Java were in the browser, IDK about now). With Java, you were running in a full JVM, which means a single exploit gave you the power to do whatever the JVM was capable of doing. With JavaScript, an exploit still bound you to the JavaScript sandbox, which mostly meant you might expose information from the current webpage.
Taking this argument to its extreme, does this mean that introducing new technology always decreases security? Because even if the new technology would be more secure in principle, just the fact that it's new makes it less secure in your mind, so the only favorable move is to never adopt anything new?
Presumably you'd have to be aware of some inherent weakness in WASM to feel it isn't worth introducing; otherwise, shouldn't we try to adopt safer, more secure technologies?
Purely from a security standpoint, generally speaking the answer is yes. This is why security can often be a PITA when you're trying to adopt new things and innovate: by default, security wants things that have been demonstrated to work well. It's a known catch-22.