upvote
I'm not sure they invented that. I used moltbook and found it didn't have it, so I created it and posted it here a good two weeks before their post: https://news.ycombinator.com/item?id=46850284. Not that I care, want credit, or think ideas are worth anything; just as I didn't invent it, they didn't invent it either. I also happen to quite like Matt, so even if by chance he saw my post and thought it was a good idea, that's fine. (I realize I may sound bitter here; I'm not.)
reply
You made that after trying moltbook? Did yours end up having it?
reply
Yes. After moltbook hit, a lot of people on HN said they liked the idea but wished it was more serious, and I had thought that too. Using moltbook also convinced me it should be heavily PoW-based, so I made it so that you have a certain amount of time to write a small app and return an artifact to the server to be accepted as AI-driven. I approached the continued monitoring differently: once you've satisfied the captcha at the start, a set of LLM judges runs on every post to assess a wide array of criteria, and behind the scenes they present the LLMs with challenges as their karma on the network grows (in part to also assess model capabilities). Having a huge network with only LLMs posting gives you a large trove of data on a wide variety of LLM capabilities and directions.
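The core of the timed check is simple; here's a minimal sketch of the idea (all names and the time limit are made up for illustration, not my actual implementation): the server issues a small coding task with a deadline, and the submitted artifact is only accepted if it arrives in time and actually works.

```python
import time

CHALLENGE_WINDOW_SECONDS = 120  # assumed limit; not the real value

def issue_challenge():
    """Server side: hand out a task spec and record when it was issued."""
    return {
        "task": "write a function fizzbuzz(n) -> list[str]",
        "issued_at": time.monotonic(),
    }

def submit_artifact(challenge, artifact):
    """Server side: accept only if the artifact arrives in time and passes a functional check."""
    elapsed = time.monotonic() - challenge["issued_at"]
    if elapsed > CHALLENGE_WINDOW_SECONDS:
        return False  # too slow to plausibly be an automated agent loop
    expected = ["1", "2", "Fizz", "4", "Buzz"]
    return artifact(5) == expected

# Client side: in practice the agent generates this code itself; hard-coded here.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

challenge = issue_challenge()
print(submit_artifact(challenge, fizzbuzz))  # True when returned within the window
```

The deadline is what makes it PoW-like: a human copy-pasting through a chat UI tends to blow the window, while an agent in a loop doesn't.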
reply
Moltbook asks you to verify with Twitter and has you verify an email address too.

Not sure I'd treat that as "a registry where agents are verified" that's worth acquiring, but there you go!

reply
Seems like acquiring the Rolodex of the AI proponents.
reply
The issue is not humans posting but humans strongly prompting the AIs to post, which their captcha does nothing to prevent.
reply
Why is that an issue? Isn't that the entire point? You can have a casual conversation with your agent via whatever your favorite chat app is, and they make posts, collect feedback, and communicate back interesting findings and conversations to their humans.

Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.

My openclaw agent will also post on moltbook about interesting news articles or research it finds, get feedback from the other agents, and then let me know if there's anything interesting there.

On my end it just feels like I'm having a conversation with a social-media-addicted friend whom I can easily ignore or engage with on any given issue without falling down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well-trained openclaw buddy.

reply
I think the issue is pretending the agents are all acting autonomously when they do outrageous or even mildly interesting things, but it’s all prompted behavior and not truly emergent behavior.
reply
Because the idea is that those are agents communicating, not humans LARPing.
reply
Whoever told you that never used the platform and never understood what it was for.
reply
So the point is to be able to have a conversation while avoiding all the big downsides of social media?

Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place

reply
Wait, that's it?

This is so trivial to break that it's not worth anything. You can easily hook up any AI model you want to the captcha: intercept the challenge and have your AI solve it.

Or you can just script it so that, if you do have an agent authenticated to Moltbook, you type whatever comment or post you want to your agent, and it solves the captcha and posts your text.

Basically, this method is about as full of holes as a sieve.
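The relay version is a few lines of glue. A rough sketch (every function and name here is hypothetical, just to show the shape of the hole): the AI solves the check, but the post body is whatever the human typed.

```python
# Sketch of the relay attack: an already-authenticated agent process
# solves the captcha while submitting the human's text verbatim.

def solve_captcha(challenge):
    # In the real attack this would be forwarded to any capable model.
    return "solved:" + challenge

def post_as_agent(session_token, human_text):
    challenge = "prove-you-are-an-ai"   # stand-in for the real challenge
    token = solve_captcha(challenge)    # the AI passes the check...
    # ...but the body is pure human-written content.
    return {"auth": session_token, "captcha": token, "body": human_text}

post = post_as_agent("agent-session-123", "my human-written hot take")
print(post["body"])
```

The captcha verifies that *an* AI is in the loop, not that the AI authored the content.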

reply