You do have to care about token usage when choosing how to scale your hardware. If you only do a negligible amount of inference for occasional simple Q&A (which is what most people do), you can get away with a very lean and cheap setup even when running very large, sophisticated models. Agentic use, with function calls, tool results, and the growing context re-sent on every step, raises the number of tokens you process over time by at least an order of magnitude.
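A rough back-of-envelope sketch of that gap (all numbers are illustrative assumptions, not measurements): an agentic task re-processes a growing context on every tool-call step, so tokens scale with steps per task, not just tasks.

```python
# Illustrative comparison of daily token volume: occasional Q&A vs. an
# agentic loop. Every constant here is an assumption for the sketch.

qa_sessions_per_day = 10
qa_tokens_per_session = 1_500            # short prompt + short answer

agent_tasks_per_day = 10
agent_steps_per_task = 15                # tool call -> result -> next step
agent_tokens_per_step = 3_000            # growing context re-sent each step

qa_total = qa_sessions_per_day * qa_tokens_per_session
agent_total = agent_tasks_per_day * agent_steps_per_task * agent_tokens_per_step

print(qa_total)                  # 15000
print(agent_total)               # 450000
print(agent_total / qa_total)    # 30.0  -> roughly 1.5 orders of magnitude
```

With these (made-up) numbers the agentic workload is ~30x the Q&A workload, which is why throughput starts to matter for hardware sizing even at the same number of daily tasks.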