At the top level you have the actual environment, with those meme videos of the robot trotting through a car park, getting kicked off balance, and recovering. The whole point of those tests was to demonstrate how robust their tech was to non-precomputed disturbances.
And between the two you've got the direction and planning layer, telling the robot to go from A to B with some set of suitably convoluted parameters that nobody but the operators would understand. That planning layer might do all sorts of pre-computation and simulation, but it has to do it in the context of a noisy and possibly adversarial environment. That's as true for Atlas as it was for BigDog, even when there's nobody actually kicking it. What I suspect the precompute and simulation does at that layer is (a) check the physical viability of the requested route, and (b) tune parameters in response to sensor readings over a number of runs, rather than dictating the exact sequence of motions to the robot. But I'm nowhere near those teams (oh, I wish I were), so I can't say whether that's true; maybe someone else round here can.
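To make that split concrete, here's a toy sketch of what I'm imagining, in Python. The planner only vets the route and nudges high-level gait parameters between runs; the actual joint motions stay with the reactive low-level controller. All of the names (GaitParams, check_route_viability, tune_from_run) and the tuning rule are invented for illustration, not anything from Boston Dynamics.

```python
# A minimal sketch of the planner/controller split, purely illustrative.
from dataclasses import dataclass, field


@dataclass
class GaitParams:
    """High-level knobs the planner hands down; turning these into
    joint motions is the low-level controller's job, reactively."""
    step_height: float = 0.08   # metres
    stride_freq: float = 1.5    # Hz


@dataclass
class Planner:
    params: GaitParams = field(default_factory=GaitParams)

    def check_route_viability(self, waypoints, max_slope=0.3):
        """(a) Physical viability: reject routes with segments too steep
        for the current gait. Waypoints are (x, y, z) tuples."""
        for (x0, y0, z0), (x1, y1, z1) in zip(waypoints, waypoints[1:]):
            run = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if run > 0 and abs(z1 - z0) / run > max_slope:
                return False
        return True

    def tune_from_run(self, slip_events, run_duration):
        """(b) Parameter tuning across runs: if the robot slipped a lot,
        lift the feet higher and slow the stride a bit. A crude
        proportional update, stands in for real optimisation."""
        slip_rate = slip_events / run_duration
        self.params.step_height *= 1.0 + 0.5 * slip_rate
        self.params.stride_freq /= 1.0 + 0.2 * slip_rate


if __name__ == "__main__":
    planner = Planner()
    route = [(0, 0, 0.0), (2, 0, 0.1), (4, 0, 0.2)]
    print("route viable:", planner.check_route_viability(route))

    # After a run with some slipping, re-tune for the next attempt.
    # Note the planner never emits a motion sequence, only parameters.
    planner.tune_from_run(slip_events=3, run_duration=20.0)
    print("tuned params:", planner.params)
```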
It's indeed a mess.
Even if private labs have a viable platform solution, people won't care unless they can clone it for free. Not much incentive for design change there, but building Kryten 2X4B-523P would be hilarious. =3
On edit: apologies if my analogy isn't the best.