They're stochastic parrots, and cryptics require logical reasoning. Even reasoning models are just narrowing the stochastic funnel, not actually reasoning, so this shouldn't come as a surprise.
I find the term "stochastic parrot" super reductive. Like yes, technically they are. But they are (or I guess are faking) reasoning and intelligence more and more usefully with each iteration. Gemini 2.5 Pro compared to GPT-4o is a massive difference in utility.