I did this and I use small local models as a productivity booster. It's been refreshing.
reply
Hints or tips on how to start with local models? I’m considering a new MacBook Pro and wondering if I should take that into account.
reply
The biggest hint I have is to set a budget. Then try some models out on either cloud instances or a computer you own. See if they work for you.

Spec your machine accordingly. Some models I recommend trying to get a feel for what's out there: Qwen 3.6 35b a3b, granite4.1 8b, llama 3.2 3b.

There are plenty of others but those give a good taste for different sizes and what they can do. If it's not enough then you are out maybe 5 bucks.

Also check in with r/localllama; they have a bunch of people who can help you go further, spec machines, and get better performance and results. If you don't want to post, that's cool, there are lots of comments on how to get going. They are pretty friendly though, so I'd read the rules and make a post asking for help.
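A quick way to spec a machine against a budget: model weights at a given quantization take roughly params × (bits / 8) bytes, plus some runtime overhead for the KV cache and buffers. A back-of-the-envelope sketch (the 1.2 overhead factor is an assumption, not a measured number):

```python
def model_memory_gb(params_billions: float, quant_bits: int = 4,
                    overhead: float = 1.2) -> float:
    """Rough RAM/VRAM needed to load a model: parameter count times
    bytes per parameter, padded by an assumed overhead factor for
    the KV cache and runtime buffers."""
    bytes_per_param = quant_bits / 8
    return params_billions * bytes_per_param * overhead

# A 3B model at the common 4-bit quantization:
print(f"{model_memory_gb(3):.1f} GB")     # roughly 1.8 GB
# An 8B model kept at 8-bit:
print(f"{model_memory_gb(8, 8):.1f} GB")  # roughly 9.6 GB
```

Numbers like these are why a 3B model runs comfortably on almost any recent MacBook, while larger models are what push you toward more unified memory.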

reply
Admittedly I haven't used DeepSeek v4, but v3 was so overhyped and bad that I'm reluctant to waste my time on it.

Maybe you will inspire me to use it.

reply
You can use an LLM, review the code, and thereby avoid surprising bugs and unnecessary code in your end result.
reply
So close to doing the same
reply
Installing a local model gives you time to work on the important code and lets the AI do the drudgery.
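One common setup for handing drudgery to a local model is an OpenAI-compatible server (Ollama and llama.cpp both expose one). A minimal sketch, assuming a server on localhost:11434 and a model tag like `llama3.2:3b` (both assumptions, adjust for whatever you actually run):

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "llama3.2:3b",
                  url: str = "http://localhost:11434/v1/chat/completions"
                  ) -> urllib.request.Request:
    """Build a chat-completion request for an OpenAI-compatible
    local server. Model name and URL are assumptions; match them
    to your own setup."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    # A drudgery task; this part needs a server actually running.
    req = build_request("Write a docstring for a function that parses ISO dates.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the wire format matches the hosted APIs, the same snippet points at a cloud provider by swapping the URL, which makes trying models before buying hardware cheap.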
reply