Apple engineers spent months trying to prompt-engineer their way out, on the assumption that the prompter was at fault whenever the soon-to-be-AGI system diverged. Some of those instructions made the rounds online as reveals of how naive Apple was at the time. They could be recovered from the device's logs, so not much of a leak: "Do not hallucinate", "strictly follow instructions", followed by all sorts of refined predicates, appended as if an LLM had reason to obey them.
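For flavor, here is a minimal sketch of that instruction-stacking pattern; the names and the prompt strings beyond the logged ones are hypothetical, not Apple's actual pipeline:

    # A sketch of the pattern visible in the logged prompts: every
    # failure patched with one more natural-language predicate.
    # Hypothetical names, not Apple's actual code.
    BASE = "You are a helpful assistant that summarizes notifications."
    PATCHES = [
        "Do not hallucinate.",
        "Do not make up factual information.",
        "Strictly follow the instructions above.",
    ]

    def build_system_prompt(base: str, patches: list[str]) -> str:
        # To the model, the predicates are just more tokens in the
        # context window, not constraints it is bound to satisfy.
        return "\n".join([base, *patches])

    print(build_system_prompt(BASE, PATCHES))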
Then Apple released a paper to warn everyone (well, a few, and to save face) that we were getting fooled:
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinkin...
In case you consider Apple a biased anti-AI propagandist, here is similar, more recent research from MIT and collaborators:
Please put a date on your research papers! I could figure it out roughly from the "last accessed" dates on their citations: 2025-05-15.