Same here. I really hope that in the near future local models will be good enough, and hardware fast enough to run them, for local inference to become viable for most use cases.
reply
No need to hope; it is inevitable.
reply
Is it inevitable, though? Open-weight models large enough to come close to an API model are insanely expensive for consumers or prosumers to run. I'd put the “expensive” bar at ≥24GB of VRAM, since a card at that level is already well into four digits, which buys you quite a few months of a subscription, and that's before the power bill for >400W of continuous draw.
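
A minimal back-of-envelope sketch in Python, with assumed numbers (a ~$2,000 card, a $20/month subscription, $0.30/kWh; none of these figures are from the thread, just placeholders):

    # Back-of-envelope break-even: local GPU vs. API subscription.
    # All figures below are illustrative assumptions, not quotes.
    gpu_cost_usd = 2000              # assumed price of a >=24GB card
    power_draw_kw = 0.4              # >400W continuous, per the comment above
    electricity_usd_per_kwh = 0.30   # assumed rate
    subscription_usd_per_month = 20  # assumed API plan

    hours_per_month = 24 * 30
    power_cost_per_month = power_draw_kw * hours_per_month * electricity_usd_per_kwh

    # Monthly saving from dropping the subscription, net of electricity.
    net_monthly_saving = subscription_usd_per_month - power_cost_per_month

    if net_monthly_saving <= 0:
        print(f"Never breaks even: power alone is ${power_cost_per_month:.0f}/mo, "
              f"more than the ${subscription_usd_per_month}/mo subscription.")
    else:
        print(f"Breaks even after {gpu_cost_usd / net_monthly_saving:.0f} months")

Under these assumptions, truly continuous 400W draw alone costs more per month than the subscription; the math only flips if the card idles most of the time.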

Color me pessimistic, but this feels like a pipe dream.

reply