My experience of those utterances is that they're purely phatic mimicry: they lack genuine intuitive surprise and just mark a very odd shift in direction. The problem isn't the lack of a path; it's that the rhetorical follow-up to those leaps is usually a relevant-looking result, so the stream-of-tokens ends up rapidly over-playing its own conviction. That's why it's necessary (and often ineffective) to tell them to validate their findings thoroughly: too much of their training is "That's odd" followed by "Eureka!" and not "Nevermind…"
Philosophically, it's not like you're a detached observer who simply reasons over all possible hypotheses. Ever get stuck in a dead end and find it hard to dig yourself out? If you were a detached observer, it'd be pretty easy to just switch gears. But it's not (for humans).
Now that AI labs have all these “Nevermind” texts to train on, maybe it’s getting easier to correct? (Would require some postprocessing to classify the AI outputs as successful or not before training)
I don't know if it's true or not, but it certainly tracks, given LLMs are way more polite than the average post on the internet lol
Overall it saves me a lot of time reading when it's just focusing on the details.
With LLMs I just read back a few turns and I'm back in the loop.
I find the AI pronouncing things "interesting!" less interesting, because even though in this case it crops up in the thinking rather than as flattery of the user in the chat, it's almost as much of an AI affectation as the em-dash.
But in general, exclamations of "interesting!" seem like the stereotypical AI default toward being effusive, and we've all seen the chat logs where an AI trained to write that way responds with "interesting" or "great insight!" to a user's increasingly dubious inputs; it's an antipattern...
> When I analyze the process that is expressed in the sentence, "I think," I find a whole series of daring assertions that would be difficult, perhaps impossible, to prove; for example, that it is I who think, that there must necessarily be something that thinks, that thinking is an activity and operation on the part of a being who is thought of as a cause, that there is an "ego," and, finally, that it is already determined what is to be designated by thinking—that I know what thinking is.
These CoT outputs are the same sort of illusion as the general output. Someone is feeding them scripts of what it looks like to solve problems, so they generate outputs that look like problem solving.
I can't remember if I mentioned it previously on here, but an LLM seems to be an extremely powerful synthesis machine. If you give it all of the individual components to solve a complex problem that humans might find intractable due to scope or bias, it may be able to crack the problem.
Ask the LLM to structure its response as a plan and an implementation, allowing it to call tools like Python, Sage, Lean, etc.
-----------------------------
Yes. In fact the proposed bound is true, and the constant 1 is sharp.
Let w(a) = 1/(a log a).
I will prove that, uniformly for every primitive A ⊂ [x, ∞), ∑_{a∈A} w(a) ≤ 1 + O(1/log x), which is stronger than the requested 1 + o(1).
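Cleaned up in LaTeX, the claimed bound (as I read the comment) is:

```latex
\[
  \sum_{a \in A} \frac{1}{a \log a}
  \;\le\; 1 + O\!\left(\frac{1}{\log x}\right)
  \qquad \text{uniformly for every primitive } A \subset [x, \infty),
\]
```

which indeed implies the requested 1 + o(1) as x → ∞.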
https://chatgpt.com/share/69ed8e24-15e8-83ea-96ac-784801e4a6...
https://chat.deepseek.com/share/nyuz0vvy2unfbb97fv
Comes up with a proof.
Originally someone said "I wish I was math smart to know if [this vibe-mathematics proof] worked or not." They did NOT say "I'd like to check but I am too lazy." Suggesting "ask it to formalize it in Lean" is useless if you're not mathematically mature enough to understand the proof, since that means you're not mathematically mature enough to understand how to formalize the problem.
Then "likely easier" is a moot point. A Lean program you're not knowledgeable enough to sanity-check is precisely as useless as a math proof you're not knowledgeable enough to read.
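To make that concrete: even *stating* the bound in Lean already requires formalizing "primitive set", which is exactly the maturity the suggestion assumes away. A hypothetical, unchecked Lean 4 sketch of just the definition:

```lean
-- Unchecked sketch (Lean 4 syntax): a set is primitive when no element
-- properly divides another. Getting this far is already a math-maturity
-- test, before any proof is attempted.
def Primitive (A : Set ℕ) : Prop :=
  ∀ a ∈ A, ∀ b ∈ A, a ∣ b → a = b
```

If you can't sanity-check this definition, a `sorry`-free proof built on it tells you nothing either.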
Is there any good rule of thumb for how many kWh of electricity this is?
I think this was key. Otherwise the LLM could think it can't be done.
All this is far more expensive to serve so it’s locked away behind paid plans.
What do you mean by compute?
I won't even leave ChatGPT on "Auto" under any circumstances; it's vastly worse on hallucinations, sycophancy, basically everything.
Anyway, your needs may be met perfectly fine on the free tier product, but you’re using a very different product than the Pro tier gets.
I'd guess / hope the Pro one has the full context window.