You have no idea what has been baked into the weights during training. In theory you could find biases and attempt to "patch" them out, but it's a vastly different process from patching machine code.
Consider what would happen if Google's open-weight models were better at writing code targeting Google's services than their competitors'. Is that something that could be patched? What if there were more subtle differences that you only noticed much later, after some statistical analysis?
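To make that concrete, here is a minimal sketch of the kind of statistical check I mean, using llama-cpp-python as the inference layer. The model path, prompt, and provider regexes are all placeholder assumptions, not a real methodology:

```python
import re
from collections import Counter
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="model.gguf", n_ctx=2048)  # placeholder path

PROMPT = "Write a Python function that uploads a file to cloud object storage."

# Crude signatures for which provider the model reached for (illustrative only).
PROVIDERS = {
    "google": re.compile(r"google\.cloud|gs://", re.I),
    "aws": re.compile(r"boto3|s3://", re.I),
    "azure": re.compile(r"azure\.storage", re.I),
}

counts: Counter = Counter()
for _ in range(200):  # sample many completions at nonzero temperature
    out = llm(PROMPT, max_tokens=256, temperature=0.8)["choices"][0]["text"]
    for name, pattern in PROVIDERS.items():
        if pattern.search(out):
            counts[name] += 1

print(counts)
```

A heavy skew toward one provider on a deliberately neutral prompt would be the red flag, and it's exactly the kind of thing you'd only catch by sampling at scale, not by reading the weights.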
Local AI is not ready, and if you think it is, prove me wrong with a detailed guide, complete setup steps included, for running a decently sized model on commodity hardware.
I spent two weeks trying to get anything running on an 8 GB RX550XT, 12 GB of RAM, and an 8-core CPU. I even tried TurboQuant to lower memory utilization and still couldn't get a 3B or 4B model loaded, and anything smaller won't suit my needs (3B/4B are pushing it as it is).
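For reference, the back-of-envelope math for whether the weights alone fit in memory is simple; a rough sketch (it deliberately ignores KV cache, activations, and runtime overhead, which is often where loading actually fails):

```python
# Approximate size of a model's weights at a given quantization level.
# Weights only; real runtimes need extra memory on top of this.
def approx_model_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for n in (3, 4, 7):
    for bits in (16, 8, 4):
        print(f"{n}B @ {bits}-bit ≈ {approx_model_gib(n, bits):.1f} GiB")
```

By this math a 4-bit 3B model is only about 1.4 GiB of weights, so when loading fails anyway, it's usually the overhead and the runtime's backend support for the card, not the raw parameter count.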
Spend a humongous amount of money on a machine that'll feel obsolete in two years? I don't know.
You're like the kid showing up to a test without a pencil.
It's ridiculous for you to suggest that an advanced AI model needs to run on your budget, 7-year-old graphics card that is already out of date even for today's gaming. My parents spent $2,500 on a computer in 1995, and that was a 166 MHz Pentium 1. If they spent that money today, it would be $5,261. Think of what you can get for that amount of money. Then you're over here saying a budget graphics card needs to somehow compete with the bleeding edge of computing innovation.
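A quick sanity check of that inflation figure, assuming roughly 2.6% average annual CPI inflation from 1995 to 2024 (an approximation; the true rate varies year to year):

```python
# Compound-inflation check on the $2,500-in-1995 figure above.
years = 2024 - 1995
rate = 0.026  # assumed average annual inflation
print(f"${2500 * (1 + rate) ** years:,.0f}")  # ≈ $5,263, close to the $5,261 quoted
```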
You do, in fact, need to spend money on appropriate gear if you expect to participate.