> When you rotate ")" counterclockwise 90°, it becomes a wide, upward-opening arc — like ⌣.
But I'm pretty sure that's what you get if you rotate it clockwise; counterclockwise, the arc opens downward (⌢).
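To sanity-check the geometry, here's a quick numpy sketch (the half-circle model of ")" and the helper names are my own illustration, nothing from the original exchange): rotate the arc 90° each way and see which side it opens toward.

```python
import numpy as np

# Model ")" as the right half of a unit circle: it bulges right and opens left.
theta = np.linspace(-np.pi / 2, np.pi / 2, 7)
paren = np.stack([np.cos(theta), np.sin(theta)])  # (2, N) column vectors

def rotate(points, degrees):
    """Rotate 2-D column vectors counterclockwise by `degrees` about the origin."""
    a = np.radians(degrees)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ points

def opening(points):
    """The arc opens away from its bulge, so check which side the centroid sits on."""
    return "downward (⌢)" if points[1].mean() > 0 else "upward (⌣)"

print("counterclockwise 90°:", opening(rotate(paren, +90)))  # downward (⌢)
print("clockwise 90°:       ", opening(rotate(paren, -90)))  # upward (⌣)
```

Under that toy model, only the clockwise rotation produces the ⌣ the quoted answer describes.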
You seem to think it's not 'just' tensor arithmetic.
Have you read any of the seminal papers on neural networks, say?
It's [complex] pattern matching, as the parent said.
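For a concrete sense of what "tensor arithmetic" means here, below is a minimal single-head self-attention sketch in numpy; the shapes, names, and random weights are purely illustrative and not any particular model's code. The whole step is matrix products plus a softmax, i.e. learned pattern matching between token vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One attention head: nothing but matrix products and a softmax.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_head)  learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token "matches" each other token
    weights = softmax(scores, axis=-1)       # normalized match strengths
    return weights @ V                       # weighted mix of value vectors

# Toy sizes, with random weights standing in for trained parameters.
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```

Stack many such blocks (plus MLP layers and normalization) and you have a transformer forward pass: arithmetic on learned weights, with no separate geometry or typography module unless training has baked one into those weights.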
If you want models to draw composite shapes based on letter forms and typography, then you need to train them (or at least fine-tune them) to do that.
I still occasionally get opposite-meaning (antonym) confusion in responses to queries where I expect the training data is relatively sparse.
That said, you claim the parent is wrong. How would you describe LLMs, or generative "AI" models, within the confines of a forum post, in a way that demonstrates their error? I'm happy for you to reference academic papers that can aid understanding of your position.