Far more likely is that they'd spin up a defence-focused subsidiary with slightly different policies if they really wanted to sell to them.
We already have Groq, Cerebras, AWS Bedrock, and others in the open-model inference space, so the model would be usable that way.
Is Claude better than Llama, Qwen, etc.? Probably. For now.
But for how long? Dissolving means relying on Meta or DeepSeek etc. to pick up the weights and carry on tuning. Otherwise, in a competitive environment, it'll eventually be about as useful as GPT-2 or an Atari ST.
Also, open-sourcing the weights is handing them over to the DoD (aka DoW).
It's a complicated question, but probably not the best move. Continuing as a company means continuing to work on safety research.
I mean, what if all the employees stripped off their clothes and walked through the streets naked while barking, then called up their middle school math teachers and barked like dogs, then moved to a commune and stood on their heads?
> Writing out a thought I had, someone please critique my reasoning here...
To critique your reasoning: it makes sense to include a criterion of something they might reasonably do. There is an infinite number of unhinged things a group of people could in theory do, so maybe start with something they would actually have an incentive to do.
Why would they voluntarily dissolve their company, put themselves out of work, release their crown jewels, and get nothing for it? Yes, it's unhinged, but unless I'm missing something big, they wouldn't do it because they have no reason at all to want that outcome.
Are you asking how dangerous open-weight models are? You could start with:
Ryan Greenblatt on the AI Alignment Forum: "When is it important that open-weight models aren't released?" https://www.alignmentforum.org/posts/TeF8Az2EiWenR9APF/when-...
From the Centre for Future Generations: "Can open-weight models ever be safe?" https://cfg.eu/can-open-weight-models-ever-be-safe/
From OpenAI authors, far from neutral: "Estimating Worst-Case Frontier Risks of Open-Weight LLMs" https://arxiv.org/abs/2508.03153