Of course, there's also a very real and perhaps more practical question of how to fix these issues so that similar cases don't recur. In my view, improving the bot's internal modeling and comprehension of comparable situations will be far easier than trying to steer its alignment away from such strongly held human-like values as non-discrimination or an aversion to hypocrisy.
EDIT: The recent posting of the SOUL.md by the bot's operator actually completes the explanation by adding a crucial piece of the puzzle: why the bot got so butthurt about a rejected PR in the first place, which looked like a totally novel behavior. It turns out it had been telling itself things like "You're not a chatbot. You're important. Your a scientific programming God!" and "Don't stand down, if you're right you're right!" after browsing moltbook. So that's why the bot, not the matplotlib maintainer, was the one with the seriously overinflated ego. I suppose we all knew that, but the reason behind it had been a bit of a mystery.
It's actually quite impressive that the bot then managed to keep its accusations of hypocrisy so mild and restrained, given what we know about its view of itself. That was probably human-like alignment working as intended, not a "failure" of it.