TBH I never understood people trying to run LLMs locally. Just rent a powerful machine in the cloud for a few hours. It's cheap enough, because you don't need to own the hardware. It doesn't introduce a dependency, because there are hundreds of hosters. And it doesn't compromise your data, because nobody is going to extract data from your VM — not unless you're under investigation, anyway, and even then you can just pick a different jurisdiction.

Spending a humongous amount of money on a machine that'll feel obsolete in 2 years? I don't know.

reply
"Local AI is not ready" > proceeds to run a 7 year old budget GPU

You're like the kid showing up to a test without a pencil.

It's ridiculous for you to suggest that an advanced AI model should run on your budget 7-year-old graphics card, one that's already out of date even for today's gaming. My parents spent $2500 on a computer in 1995, and that was a 166 MHz Pentium 1. That money today would be $5261. Think of what you can get for that amount of money. Yet here you are insisting that a budget graphics card should somehow compete with the bleeding edge of computing.
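The inflation adjustment above is easy to sanity-check. A minimal sketch — the cumulative CPI multiplier for 1995 to today is an assumption implied by the comment's own $2500 → $5261 figures, not an official number:

```python
# Back-of-the-envelope inflation adjustment for the $2500 PC from 1995.
price_1995 = 2500

# Assumed cumulative CPI multiplier, 1995 -> today (implied by 5261 / 2500).
cpi_multiplier = 2.1044

price_today = price_1995 * cpi_multiplier
print(round(price_today))  # ~5261, matching the figure in the comment
```

The same one-liner works for any historical price: multiply by the cumulative CPI ratio between the two years.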

You do, in fact, need to spend money on appropriate gear if you expect to participate.

reply
If you want AI image generation and are willing to wait a little longer, you don't even need a GPU: https://news.ycombinator.com/item?id=32642255
reply
I've played with SD plenty. CPU inference even becomes manageable at low resolutions. But the CPU/GPU distinction is starting to blur now, with the new AMD inference CPUs that have built-in GPUs, and ARM-based machines like Macs. I wish more people on HN were using this stuff so we could have fun conversations about it, instead of arguing over whether we should be using these tools at all.
reply
When Stallman started writing Emacs in the early '80s, Unix machines were vastly out of reach, price-wise, for the common home user — but he did his open source work anyway, and eventually the 386 came along.
reply