I see your repository’s README says:

> Language models process signs (representamens) but are blind to when meaning forks — when the same word means different things to different communities.

But haven’t interpretability results shown that these models internally represent the different meanings of the same word differently? If so, why wouldn’t they already do the same for the different ways a word is used in different communities?
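You can see the first part of this directly in contextual embeddings. Here is a minimal sketch (assuming `transformers` and `torch` are installed; `bert-base-uncased` is just a convenient choice, any contextual encoder would do) that extracts the vector for "bank" in a river context and a finance context and compares them:

```python
# Toy probe: does a contextual model give "bank" different vectors
# in different contexts? (Assumption: bert-base-uncased; any
# contextual encoder would show the same qualitative effect.)
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vec(sentence, word):
    """Contextual embedding of `word` (must be a single subtoken)."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    word_id = tok.convert_tokens_to_ids(word)
    pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[pos]

v_river = word_vec("We sat on the bank of the river.", "bank")
v_money = word_vec("She deposited cash at the bank.", "bank")
v_money2 = word_vec("He opened an account at the bank.", "bank")

same_sense = torch.cosine_similarity(v_money, v_money2, dim=0)
diff_sense = torch.cosine_similarity(v_river, v_money, dim=0)
print(f"same sense: {same_sense:.3f}, different sense: {diff_sense:.3f}")
```

Typically the same-sense pair comes out more similar than the cross-sense pair, which is the standard evidence that the model separates word senses internally. Whether that separation extends to community-specific usage (same sense, different connotation per community) is the part the README's claim would need to address.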
