I attempted to explore the works of Kinoko Nasu/TYPE-MOON through their characters and the relationships between works, and the result was mostly nonsense. Sure, it got some broad relations right, but it presented only a tiny set of meaningful characters and only attempted to touch Fate/stay night and Tsukihime.
Even more damning, it produced garbled text in a few of the textual representations, and even where the lettering was clean, the grammar was often off.
Will Silicon Valley executives ever accept this reality? If we acquiesce and admit that LLMs are a good tool for prototyping and reducing boilerplate, but not for finished products, is that when the bubble finally bursts?
I used to find job security in the knowledge that AI labs weren't likely to solve the hallucination problem. Then it dawned on me that they don't need to: they just need to lower our collective expectations.