At some point people got the idea that LLMs just repeat or imitate their training data, and that's completely false for today's models.
Fine-tuning, reinforcement, etc. are all 'training' in my book. Perhaps that's the source of your confusion over 'people got this idea'.
They are, but they have nothing to do with how frequent anything is in the literature, which was your main point.
I don't think it was ever taught in my schooling; the semicolon is what they taught us to use.