Today, lots of integer compute happens on local devices for some purposes, and in the cloud for others.
The same is already true for matmul: lots of FLOPs are being spent locally on photo and video processing, speech-to-text, …
There's no obvious reason you wouldn't want to split LLM tasks the same way, especially as long-running agents increasingly take over from chatbots as the dominant interaction model.
Right now, certainly. Things change. What was a datacenter rack yesterday could be a laptop tomorrow.