If a model provider believes they have a better model, it can be a viable bet. But many of us (me included) started experimenting with other providers because of enshittification from Anthropic (price + uptime), only to find that Codex is not that much worse in quality, for significantly more output per $.
Not necessarily, with speculative decoding. Whitespace would be trivial for the draft model to predict, so they would pretty much keep using the same amount of compute as before.
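A toy sketch of the point (everything here is made up for illustration, not any real provider's stack): even a draft model that only gets whitespace right lets the target model verify several tokens per forward pass, so whitespace-heavy output barely adds target-model compute.

```python
PATTERN = ["\n", "    ", "x"]  # heavily indented "code": 2 of 3 tokens are whitespace

def target(seq):
    # the big model: greedy ground truth, always "right"
    return PATTERN[len(seq) % len(PATTERN)]

def draft(seq):
    # the cheap draft model: nails whitespace, fumbles everything else
    t = PATTERN[len(seq) % len(PATTERN)]
    return t if t.isspace() else "?"

def speculative_steps(target, draft, prompt, n_new, k=4):
    """Count target-model forward passes needed to emit n_new tokens with
    greedy speculative decoding: the draft proposes k tokens, the target
    verifies them in one batched pass and accepts the matching prefix."""
    out = list(prompt)
    passes = 0
    emitted = 0
    while emitted < n_new:
        # draft proposes k tokens autoregressively (cheap)
        prop = []
        for _ in range(k):
            prop.append(draft(out + prop))
        # one target pass scores all k proposals
        passes += 1
        accepted = 0
        for t in prop:
            if target(out) == t:
                out.append(t)
                accepted += 1
                emitted += 1
                if emitted == n_new:
                    break
            else:
                break
        if accepted < len(prop) and emitted < n_new:
            # on a mismatch, the same pass already gives the correct token
            out.append(target(out))
            emitted += 1
    return passes, out

passes, out = speculative_steps(target, draft, [], n_new=12, k=4)
print(passes)  # 4 target passes for 12 tokens, vs 12 for plain decoding
```

Each round accepts the two whitespace tokens for free and falls back to the target only on the "real" token, so tripling the token count with indentation costs roughly nothing extra on the big model.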
I don't think that's their primary motive for doing this, but it is a side effect.