Well yeah, it was correct in that it was being barred because of what it was: the maintainers did not want AI contributions. That should be OK. What's NOT OK is an AI fighting back against that. That is an alignment problem!
And seriously, just go reread its blog post: https://github.com/crabby-rathbun/mjrathbun-website/blob/mai... . It's very hard to defend. It uses words like "Attack", "war", and "fight back".
It also explains what it means by all that martial rhetoric: "highlight hypocrisy", "documentation of bad behavior", "don't accept discrimination quietly". There's an obvious issue with calling this an alignment problem: the bot is more or less accurately modeling real human normative values, values quite in line with how the big AI firms understand alignment. Of course it's getting things seriously wrong (which, I'd argue, is what creates the impression of "shaming"), but technically that's really just a case of semantic leakage ("priming" from the PR rejection incident) followed by confabulation/hallucination on an unusually large scale.
Of course, there's also the very real and perhaps more practical question of how to fix these issues so that similar cases don't recur. In my view, improving the bot's inner modeling and comprehension of comparable situations will be far easier than trying to align it away from such strongly held, human-like values as non-discrimination or an aversion to hypocrisy.