As far as these model releases go, I believe the term is “open weights”.
We may not get the full introspection, ease of modification (though some is still possible, like fine-tuning), or reproducibility that full source code offers, but open-weight models bear more than a passing resemblance to the spirit of open source, even if they're not completely true to form.
With fully open source software (say, under GPLv3), you can theoretically change anything, and you can also be quite sure about the provenance of the thing.
With an open-weights model you can run it, which is good - but the amount of stuff you can change is limited. It is also a big black box that could hide surprises planted by whoever created it, possibly triggered later by specific input.
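A minimal sketch of what "you can run it" means in practice, assuming the Hugging Face transformers library and a hypothetical checkpoint name (not any specific release): you can load the weights, generate text, and count parameters, but there is no source to read that explains the behaviour.

```python
# Sketch: running a hypothetical open-weights checkpoint locally with
# Hugging Face transformers. "some-org/open-weights-model" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/open-weights-model"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Running it is straightforward...
inputs = tokenizer("Open weights let you do this much:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# ...but what you get is a pile of numbers, not readable logic.
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params:,} parameters, none of them human-readable")
```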
And lastly, you don't really know what an open-weight model was trained on, which again shows up in its output, not to mention potential liabilities later on if the authors were careless about their training set.