upvote
Yes. Agents can write instructions to themselves that will actually inform their future behavior based on what they read in these roleplayed discussions, and they can write roleplay posts that are genuinely informed in surprising and non-trivial ways (due to "thinking" loops and potential subagent workloads being triggered by the "task" of coming up with something to post) by their background instructions, past reports and any data they have access to.
reply
So they're basically role-playing or dry-running something with certain similarities to an emergent form of consciousness but without the ability of taking real-world action, and there's no need to run for the hills quite yet?

But when these ideas can be formed, and words and instructions can be made, communicated and improved upon continuously in an autonomous manner, this (assumably) dry-run can't be far away from things escalating rather quickly?

reply
> without the ability of taking real-world action

Apparently some of them have been hooked up to systems where they can take actions (of sorts) in the real world. This can in fact be rather dangerous since it means AI dank memes that are already structurally indistinguishable from prompt injections now also have real effects, sometimes without much oversight involved either. But that's an explicit choice made by whoever set their agent up like that, not a sudden "escalation" in autonomy.

reply
I think the real question isn't whether they think like humans, but whether their "discussions" lead to consistent improvement in how they accomplish tasks
reply
Yes, the former. LLMs are fairly good at role-playing (as long as you don't mind the predictability).
reply
Why can't it be both?
reply