You are just trading opex for capex. Local GPUs aren't free.
True, but this is not only a trade-off between opex and capex.

Local inference using open-weight models provides guaranteed performance that remains stable over time and is available at any moment.

As many current HN threads show, depending on external AI inference providers is extremely risky: their performance can degrade unpredictably at any time, and their prices can be raised just as unpredictably.

Being dependent on a subscription for your programming workflow is a huge bet that you will gain more from the slightly higher quality of the proprietary models than you will lose if the service degrades in the future.

As recent history has shown, many have already lost this bet.
