How serious a risk is poisoned weights?
Can we nudge the cryptobros into using LLM training as proof of work?
Having an LLM use a web search tool isn't the same thing as researching a topic, IMO: the results are ephemeral and need constant re-feeding into context. LLMs aren't learning machines; they're static ones.