upvote
I have opencode with qwen 3.6 on my local machine. Just get the setup right and it's surprisingly fun to work with.
reply
I had a ton of fun setting up and trying it out locally (also opencode and one of the qwens). I still don't have hardware powerful enough to feel like it's meaningfully productive, but all the learning I had to do (and all the bonus things I got curious about as the curtain peeled back) got my nerd brain all worked up, and finally seeing it work was exciting in that cool-new-experience way you don't often get to enjoy :)
reply
Yeah, this is exactly how I felt! I never really felt excited about LLMs or agentic workflows before. Getting everything set up 100% local, tweaking it to exactly what I want, and having it actually work quite well has been a really cool experience.
reply
If you already have a gaming pc, then it's worth exploring as the cost of boredom is negligible.
reply
I did tinker a lil with mine! RTX 3080 with 10GB VRAM, a 5600X with 64GB DDR4 - not very good, but it was very fun and exciting to tinker with :)

My partner, on the other hand, has an M3 Max with 64GB, which I've had way more success with. Setting up opencode, doing a tiny spec-driven Rust project, and watching it kiiinda work was extraordinarily exciting!

reply
AMD 395+ w/128GB is all you need. The idea that a Mac Studio is the default is a nerdfest.
reply
I admittedly haven't done a ton of research lately on AI-capable PC hardware because of how nuts prices are right now, so I might be missing something...

...but all the AMD 395+ machines I can find are even more expensive than the aforementioned cheapest Mac Studio. The Mac Studio starts at $2,000 (only 32GB), while AMD 395+ machines with 128GB seem to start at $3,000 from what I can see.

reply
The QWEN-3.5-CODER-NEXT fits in half the 128GB, leaving the rest for context. With the right plugins, particularly context pruning, I've got it running overnight by writing plans and then implementing them.

I don't know if there's a smaller model with the same capability, but model size plus context window fitting in 128GB seems like a sweet spot.

Token speed really isn't a bother because I'm either just multitasking or working on filling in the missing details.

Regardless, I'd compare VRAM size against your target model first, then speed, for cost efficiency. Plus, keep a healthy skepticism of Mac hardware costs.
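The "model in half the RAM, rest for context" sizing above can be sanity-checked with simple arithmetic. A minimal sketch, where the parameter count, quantization level, and KV-cache architecture numbers are all hypothetical placeholders (not the specs of any particular model):

```python
# Rough memory-footprint estimate for running a local LLM.
# All concrete numbers below are illustrative assumptions, not real model specs.

def model_weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Memory needed for quantized weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(context_tokens: int, layers: int, kv_heads: int,
                head_dim: int, bytes_per_value: int = 2) -> float:
    """KV cache grows linearly with context: 2 tensors (K and V)
    per layer, each kv_heads * head_dim values per token."""
    return (2 * context_tokens * layers * kv_heads * head_dim
            * bytes_per_value / 1e9)

# Hypothetical 80B-parameter model quantized to 4 bits:
weights = model_weights_gb(80, 4)              # 40 GB -- fits in half of 128GB
# Hypothetical architecture: 48 layers, 8 KV heads of dim 128, fp16 cache,
# filled to a 131,072-token context:
cache = kv_cache_gb(131_072, 48, 8, 128)       # ~25.8 GB
print(f"weights ~{weights:.0f} GB, kv cache ~{cache:.1f} GB, "
      f"total ~{weights + cache:.0f} GB")
```

With those assumed numbers the total lands around 66 GB, comfortably inside 128GB of unified memory, which is the shape of the trade-off being described: weight size first, then leftover headroom for context.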

reply