The reasoning generally isn't kept in the context, so after choosing the secret word in the first reasoning block, the LLM will have completely forgotten it in the second and subsequent requests.

So it didn't technically change the secret word so much as infer, from your guesses, what its own secret word might have been.
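A minimal sketch of why this happens, assuming a chat-style API where the client resends the visible message history on each request (field names here are illustrative, not any real client's): the model's private reasoning from earlier turns is simply never included in the next request, so a word chosen only in reasoning is gone.

```python
# Sketch: assemble the next request from the turn history, dropping the
# (hypothetical) 'reasoning' field -- only visible text is resent.

def build_request(turns):
    """Build the message list for the next request; reasoning is not replayed."""
    messages = []
    for turn in turns:
        # Only role and visible content survive into the next request.
        messages.append({"role": turn["role"], "content": turn["content"]})
    return messages

turns = [
    {"role": "user", "content": "Let's play hangman."},
    # The model picked "cat" while reasoning, but only said this out loud:
    {"role": "assistant", "content": "OK, I picked a word: _ _ _",
     "reasoning": "I'll choose the secret word 'cat'."},
    {"role": "user", "content": "Is there an A?"},
]

request = build_request(turns)
# The word "cat" appears nowhere in what the model sees next turn.
assert all("cat" not in m["content"] for m in request)
```

So on the second request the model has to reconstruct a word consistent with whatever it has already revealed, which is exactly the inference behavior described above.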

reply
Exactly. The following will work, assuming you're using a model and frontend that supports it:

> Let's play hangman. Just pick a 3 letter word for now, I want to make sure this works. Pick the secret word up front and make sure to write the secret word and game state in a file that you'll have access to for the rest of the session, since you won't remember what word you chose otherwise.

This was Opus 4.6 in Claude desktop, fwiw.

Note: I didn't experiment with whether it would work without explicitly telling it to record the game state to a file.
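For concreteness, here's a hedged sketch of the file-based approach the prompt asks for: the model persists the secret word and guesses to a file via its file tool and re-reads them each turn, instead of relying on discarded reasoning. The file name and JSON layout here are illustrative, not what Claude actually wrote.

```python
# Sketch: hangman state persisted to a JSON file between turns.
import json
import os
import tempfile

def save_state(path, word, guesses):
    """Write the secret word and guessed letters to disk."""
    with open(path, "w") as f:
        json.dump({"word": word, "guesses": sorted(guesses)}, f)

def load_and_reveal(path, guess):
    """Re-read the state, record the new guess, and show the board."""
    with open(path) as f:
        state = json.load(f)
    guesses = set(state["guesses"]) | {guess}
    save_state(path, state["word"], guesses)
    # Reveal only letters that have been guessed so far.
    return " ".join(c if c in guesses else "_" for c in state["word"])

path = os.path.join(tempfile.mkdtemp(), "hangman_state.json")
save_state(path, "cat", set())
print(load_and_reveal(path, "a"))  # _ a _
print(load_and_reveal(path, "t"))  # _ a t
```

The point is that the word lives on disk, not in the model's (forgotten) reasoning, so every turn starts from the same committed state.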

reply
What you can do is instruct it to type out the word in some language you don't know at all, keeping it available in the context while effectively hiding it from you. That's simpler than writing it to a file.
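As an illustration of the same idea, with base64 standing in for "a language you don't know": the word sits in plain view in the model's context (it can decode it every turn), but is unreadable to the player at a glance, and no file tool is needed.

```python
# Sketch: keep the secret word in-context but obfuscated from the player.
import base64

def hide(word):
    """Encode the word so it can sit visibly in the transcript."""
    return base64.b64encode(word.encode()).decode()

def check_guess(hidden, guess):
    """Decode the word and check whether the guessed letter is in it."""
    word = base64.b64decode(hidden).decode()
    return guess in word

hidden = hide("cat")  # the model might write "Y2F0" in its reply
assert check_guess(hidden, "a")
assert not check_guess(hidden, "z")
```

Of course base64 is trivially reversible if the player bothers, which is roughly the same trust level as an unfamiliar language.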
reply
On further experimentation, I prompted Opus 4.6 to make me a frontend artifact that used the Anthropic API, and I confirmed that it worked as expected.

Here is the only relevant part of the prompt it used when calling the API endpoint:

> - Track the conversation to remember your word and previous guesses
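That one system line can suffice because a frontend like this resends the full message history on every call. A hedged sketch of that loop, with a stub standing in for the real Anthropic API call (the stub's behavior is illustrative only):

```python
# Sketch: the frontend accumulates history and resends all of it each turn.

SYSTEM = "- Track the conversation to remember your word and previous guesses"

def call_model(system, messages):
    # Stand-in for the real API call; just reports what it was shown.
    return {"role": "assistant",
            "content": f"(model saw {len(messages)} messages)"}

history = [{"role": "user", "content": "Is there an E?"}]
reply = call_model(SYSTEM, history)
history.append(reply)
history.append({"role": "user", "content": "Is there an S?"})
reply = call_model(SYSTEM, history)
# Every turn, the model sees all prior guesses and its own replies,
# so it can stay consistent with what it has already revealed.
assert reply["content"] == "(model saw 3 messages)"
```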

reply