> Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

I wouldn't call it rewriting history to say they initially considered GPT-2 too dangerous to be released. If they'd applied this approach to subsequent models rather than making them available via ChatGPT and an API, it's conceivable that LLMs would be 3-5 years behind where they currently are in the development cycle.

reply
They said:

> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code.

"Too dangerous to release" is accurate. There's no rewriting of history.

reply
Well, and it's being used to generate deceptive, biased, or abusive language at scale. But they're not concerned anymore.

reply
They've decided that the money they'll make is too important; who cares about externalities...

It's quite depressing.

reply