Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?
But as far as I know, it currently supports just that plus TensorFlow (which nobody uses anymore, at least not here). And the last time we tried, so many of our kernels needed rework that it wasn't worth the effort.
This may have changed with Ironwood, but we haven't tried that generation.
Microsoft is in the same boat with Azure.