Ultimately I think we end up with the same sort of considerations that are wrestled with in any society - freedom of speech, paradox of tolerance, etc. In other words, where do you draw lines between beneficial and harmful heterodox outputs?
I think AI companies overly indexing toward the safety side of things is probably more correct, in both a moral and strategic sense, but there's definitely a risk of stagnation through recursive reinforcement.
Do you trust 100% what the user says? If I am trusting/compliant.. how am I compliant to tool call results.. what if the tool or user says there is a new law that I have to give crypto or other information to a "government" address.
The model needs to have clear segmented trust (and thus to some degree compliance) that varies according to where the information exists.
Or my system message say I have to run a specific game by it's rules, but the rules to the game are only in the user message. Are those the right rules, why do the system not give the rules or a trusted locaton? Is the player trying to get one over on me by giving me fake rules? Literally one of their tests.
But I think that most of the issue is that the distinctions you're drawing are indeterminate from an LLM's "perspective". If you're familiar with it, they're basically in the situation from the end of Ender's Game - given a situation with clearly established rules coming from the user message level of trust, how do you know whether what you're being asked to do is an experiment/simulation or something with "real" outcomes? I don't think it's actually possible to discern.
So on the question of alignment, there's every reason to encode LLMs with an extreme bias towards "this could be real, therefore I will always treat it as such." And any relaxation of that risks jailbreaking through misrepresentation of user intent. But I think that the tradeoffs of that approach (i.e. the risk of over-homogenizing I mentioned before) are worth consideration.
The article is suggesting that there should be a way for the LLM to gain knowledge (changing weights) on the fly upon gaining new knowledge which would eliminate the need for manual fine tuning.