We implement rate-limiting and queuing to ensure fairness, but if a massive number of people submit huge, long-running queries, then there will be waits. The question is whether people will actually do this; more often than not, users are idle.
reply
A rate limit is essentially a token limit.
reply
It depends on how it's implemented. If it's a fixed window, then your absolute ceiling is tokens per window times the number of windows in a month. If it's a function of other usage, like a timeshare, you're still paying some fixed price for the month and you get what you get, without paying more per token. Either way there's an intrinsic limit based on how many tokens the model can process on that GPU in a month, even if it's only you.
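The fixed-window variant described above can be sketched in a few lines. This is a minimal illustration, not any provider's actual implementation; the class name, window length, and token budget are all invented for the example.

```python
from time import monotonic

class FixedWindowTokenLimiter:
    """Hypothetical sketch of a fixed-window token limit.

    tokens_per_window and window_s are illustrative knobs,
    not any real provider's limits.
    """

    def __init__(self, tokens_per_window: int, window_s: float):
        self.tokens_per_window = tokens_per_window
        self.window_s = window_s
        self.window_start = monotonic()
        self.used = 0

    def try_consume(self, tokens: int) -> bool:
        now = monotonic()
        # Reset the counter when a new window begins.
        if now - self.window_start >= self.window_s:
            self.window_start = now
            self.used = 0
        if self.used + tokens > self.tokens_per_window:
            return False  # caller must wait for the next window
        self.used += tokens
        return True

def monthly_ceiling(tokens_per_window: int, window_s: float) -> int:
    """The absolute ceiling: tokens per window times windows in a month."""
    seconds_per_month = 30 * 24 * 3600
    return tokens_per_window * (seconds_per_month // int(window_s))
```

Whatever the window size, `monthly_ceiling` makes the point in the comment concrete: a fixed window implies a hard cap on tokens per month.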
reply
Time x capacity is also a limit. There's always a limit.
reply
Is there any way to buy into a pool of people with similar usage patterns? Maybe I'm overthinking it, but just wondering
reply
I think it'd be best to pool with people with different patterns, not the same patterns. Perhaps it would be best to pool with people in different timezones, and/or with different work/sleep schedules.

If everyone in a pool uses it during the ~same periods and sleeps during the ~same periods, then the node would oscillate between contention and idle -- every day. This seems largely avoidable.

(Or, darker: Maybe the contention/idle dichotomy is a feature, not a bug. After all, when one has control of $14k/month of hardware that is sitting idle reliably-enough for significant periods every day, then one becomes incentivized to devise a way to sell that idle time for other purposes.)

reply
This is basically why the big companies can sell subscriptions for cheaper than API costs. First priority goes to API users, lower-priority subscription users get slotted in as space/SLOs allow, and the remaining idle GPU time is sold to batch users and spare training runs. Oh, and shift load geographically as needed to follow different nations' working hours.
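The tiering described above (API first, then subscriptions, then batch) is just a priority queue. A minimal sketch, with invented tier names and no real scheduler behind it:

```python
import heapq
import itertools

# Hypothetical priority tiers, mirroring the ordering in the comment above.
PRIORITY = {"api": 0, "subscription": 1, "batch": 2}

class TieredScheduler:
    """Minimal sketch: always serve the highest-priority pending request.

    Within a tier, requests are served FIFO (the counter breaks ties
    so heapq never compares the request payloads themselves).
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, tier: str, request):
        heapq.heappush(self._heap, (PRIORITY[tier], next(self._counter), request))

    def next_request(self):
        if not self._heap:
            return None  # GPU idle: capacity free for batch/training work
        _, _, request = heapq.heappop(self._heap)
        return request

sched = TieredScheduler()
sched.submit("batch", "overnight-eval")
sched.submit("subscription", "chat-turn")
sched.submit("api", "paid-call")
# The API request is served first even though it arrived last.
```

A real scheduler would also need preemption and SLO-aware admission, but the ordering logic is this simple at its core.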
reply
To be fair, this is the price you pay for sharing a GPU. It's probably good for stuff that doesn't need to be done "now" but that you can just launch and run in the background. I bet some graphs showing when the GPU is busiest would be useful as well.
reply
This problem sounds like an excellent opportunity. We need a race to the bottom for hosting LLMs to democratize the tech and lower costs. I cheer on anyone who figures this out.
reply
This is classic queuing theory, rate limits etc. I don't have an answer but I would look there.
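For anyone who wants to look there: the simplest queuing-theory model is M/M/1 (Poisson arrivals, exponential service, one server). The standard formulas show how sharply waits grow as utilization approaches 1:

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Classic M/M/1 results: utilization, mean number in system, mean
    time in system. Rates are in requests per unit time; requires
    arrival_rate < service_rate or the queue grows without bound.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrivals outpace service")
    rho = arrival_rate / service_rate       # utilization
    L = rho / (1 - rho)                     # mean number in system
    W = 1 / (service_rate - arrival_rate)   # mean time in system
    return rho, L, W

# E.g. a server handling 10 req/s with 9 req/s arriving is 90% utilized,
# and the average request spends 1s in the system vs. 0.1s of pure service.
```

The numbers are illustrative, but the shape of the curve is the whole story: at 90% utilization, a request spends ten times its service time waiting around.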
reply
What if you could group multiple of them? Long queries run on the group that commonly runs those. Shorter queries queue faster because they'll execute faster.
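The grouping idea amounts to routing by expected query length. A toy sketch, where the cutoff and pool names are made up for illustration:

```python
def pick_pool(query_tokens: int, short_cutoff: int = 512) -> str:
    """Route a request to a pool of similarly sized queries.

    short_cutoff is an invented tuning knob; a real system would
    estimate output length too, not just the prompt size.
    """
    return "short-queue" if query_tokens <= short_cutoff else "long-queue"
```

This is essentially the "express lane" trick from queuing: keeping short jobs from waiting behind long ones cuts average latency even when total capacity is unchanged.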
reply
Ultimately the most sensible way of handling this is "surge pricing": the highest-priority tokens cost extra whenever the inference platform is congested, over and above the base subscription (which could perhaps make the subscription itself a bit cheaper).
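One possible shape for such a surge curve: flat at the base price below some congestion threshold, then rising with utilization above it. The threshold and slope here are invented knobs, not a proposal for actual numbers:

```python
def surge_multiplier(utilization: float, threshold: float = 0.8,
                     slope: float = 5.0) -> float:
    """Hypothetical surge curve: base price below the congestion
    threshold, rising linearly with utilization above it.
    """
    if utilization <= threshold:
        return 1.0
    return 1.0 + slope * (utilization - threshold)

def token_price(base_price: float, utilization: float) -> float:
    return base_price * surge_multiplier(utilization)

# At 50% utilization you pay the base price; at full utilization,
# with these made-up knobs, priority tokens cost 2x.
```

The design choice is that off-peak users never pay more than the subscription, while peak-hour priority demand funds the extra capacity.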
reply
Also, cache ejection during contention will degrade everyone's service.

I question whether they actually understand LLMs at scale.
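The ejection problem can be pictured with a toy LRU model of a shared KV-cache pool: under memory pressure, the least recently active session's cache gets evicted and must be re-prefilled later. Class name and sizes are illustrative, not how any real serving stack manages its cache:

```python
from collections import OrderedDict

class KVCachePool:
    """Toy model of a shared KV-cache with LRU eviction under pressure."""

    def __init__(self, capacity_tokens: int):
        self.capacity = capacity_tokens
        self.used = 0
        self.sessions = OrderedDict()  # session_id -> cached tokens

    def touch(self, session_id: str, tokens: int) -> list:
        """Reserve cache for a session; returns sessions evicted to fit."""
        evicted = []
        if session_id in self.sessions:
            self.used -= self.sessions.pop(session_id)
        # Under contention, someone else's cache gets ejected...
        while self.used + tokens > self.capacity and self.sessions:
            victim, freed = self.sessions.popitem(last=False)
            self.used -= freed
            evicted.append(victim)  # ...and that session re-prefills later
        self.sessions[session_id] = tokens
        self.used += tokens
        return evicted
```

The point of the toy: every eviction converts one user's contention into another user's extra prefill latency, so degradation spreads beyond the users actually causing the load.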

reply
I suppose it's meant to be a "minimum viable" third-party inference platform: you're literally selling subscription-based access (i.e. fixed price, not PAYGO by token) to a single GPU cluster, and it only launches once enough users subscribe to make it viable. That's nice of them; it works like a Kickstarter/group-coupon model and creates a guaranteed win-win for the users. But they could easily expand to more than just the minimum cluster size, which would somewhat improve efficiency. (DeepSeek themselves scale their model out over huge numbers of GPUs, which is how they manage to price their tokens so cheaply.)
reply