As for overflow, the reality is that most compilers simply assume it won't happen at this point. They do this because it's what their users want: being able to assume overflow never occurs generates far faster code. Yes, people often come up with pathological examples of ridiculous, unexpected optimizations that compilers make precisely because they assume overflow can never happen, but those really are pathological; in practice it comes down to loops. In many loops, having to assume that loop variables can overflow in theory disables all sorts of optimizations and elisions, while in practice they won't overflow, and if they do, that's an unintended bug anyway.
A very basic example is a loop that adds some value to a counter and stops once the counter is past a certain threshold. If the compiler has to assume integers can overflow, then adding a value could in theory make the counter smaller than it was, which rules out many optimizations that would streamline the logic. More generally, assuming overflow can't occur means the compiler can assume that adding a positive integer to another integer always produces a larger integer. That is a very powerful assumption for an optimizer to be able to make, and allowing overflow takes it away; that's why it's undefined behavior: compilers are free to assume it will never happen.
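To make that concrete, here's a small C sketch (my own, not from anyone's codebase) of the two situations described above: a counted loop with a signed index, and the "x + positive is always bigger" assumption. With signed overflow being undefined, an optimizer is free to treat the counter as never wrapping, compute a trip count up front, widen the index, or fold the comparison to a constant; with wrapping semantics (e.g. `-fwrapv`) it generally can't.

```c
#include <stdio.h>

/* Because signed overflow is UB, the compiler may assume `i` never wraps,
 * so `i <= n` must eventually fail. That lets it derive an exact trip
 * count, widen `i` to a 64-bit index, unroll, or vectorize the loop. */
long sum_upto(int n) {
    long sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}

/* Under the no-overflow assumption, `x + 1 > x` can be folded to 1 at
 * compile time. If overflow were defined to wrap, the compiler would have
 * to keep the comparison, since x == INT_MAX would make it false. */
int always_true(int x) {
    return x + 1 > x;
}

int main(void) {
    printf("%ld\n", sum_upto(10));   /* 55 */
    printf("%d\n", always_true(42)); /* 1 */
    return 0;
}
```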
The other thing is that the ratio of processing power to memory size is very high for embedded machines. You have processors that can hold their own against a 486 but only have 16k of RAM. And the marginal cost of performance is low; a lot of devices spend most of their time doing utterly nothing.