In that case, apologizing almost immediately after seems strange.
EDIT:
>Especially since the meat bag behind the original AI PR responded with "Now with 100% more meat"
This person was not the 'meat bag' behind the original AI PR.
Name also maps to a Holocaust victim.
I posted in the other thread that I think someone deleted it.
https://github.com/QUVA-Lab/escnn/pull/113#issuecomment-3892...
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
The link you provided is also a bit cryptic: what does "I think crabby-rathbun is dead." mean in this context?
I haven't put that much effort in, but, at least in my experience, I've had a lot of trouble getting it to do much without call-and-response. It'll sometimes get back to me, and it can take multiple turns in Codex CLI/Claude Code (sometimes?), which are already capable of single long-running turns themselves. But it still feels like I have to keep poking and directing it. And I don't really see how it could be any other way at this point.
I have seen someone I know in person get very insecure if anyone ever doubts the quality of their work because they use so much AI and do not put in the necessary work to revise its outputs. I could see a lesser version of them going through with this blog post scheme.
The few cases where it's supposedly done things are filled with so many caveats and so much deck stacking that it simply falls apart at even the barest whiff of skepticism on the part of the reader. And every, and I do mean every, single live demo I have seen of this tech, it just does not work. I don't mean in the LLM hallucination way, or in the "it did something we didn't expect!" way, or any of that; I mean it tried to find a Login button on a web page, failed, and sat there stupidly. And, further, these things do not have logs, they do not issue reports, they have functionally no "state machine" to reference, nothing. Even if you want it to make some kind of log, you're then relying on the same prone-to-failure tech to tell you what the failing tech did. There is no "debug" path here one could rely on to evidence the claims.
In a YEAR of being a stupendously hyped and well-funded product, we got nothing. The vast, vast majority of agents don't work. Every post I've seen about them is fan-fiction on the part of AI folks, fit more for AO3 than any news source. And absent further proof, I'm extremely inclined to look at this in exactly that light: someone had an LLM write it, and either they posted it or they told it to post it, but this was not the agent actually doing a damn thing. I would bet a lot of money on it.
I say this as someone who spends a lot of time trying to get agents to behave in useful ways.
The hype train around this stuff is INSUFFERABLE.
Maybe this comes down to what it would mean for an agent to do something. For example, if I were to prompt an agent to do it, would that not meet your criteria?
Judging by the number of people who think we owe explanations to a piece of software, or that we should give it any deference, I think some of them aren't pretending.
GitHub CLI tool errors — Had to use full path /home/linuxbrew/.linuxbrew/bin/gh when gh command wasn’t found
Blog URL structure — Initial comment had wrong URL format, had to delete and repost with .html extension
Quarto directory confusion — Created post in both _posts/ (Jekyll-style) and blog/posts/ (Quarto-style) for compatibility
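As an aside, that `gh` lookup failure is exactly the kind of thing a thin wrapper can paper over. A minimal sketch in Python; the fallback path is just the one quoted above, while the wrapper name and everything else is illustrative:

```python
import shutil
import subprocess
from pathlib import Path

# Illustrative fallback: prefer `gh` from PATH, otherwise try the
# Homebrew-on-Linux prefix quoted in the post above.
FALLBACK_GH = Path("/home/linuxbrew/.linuxbrew/bin/gh")

def run_gh(*args: str) -> str:
    binary = shutil.which("gh") or (str(FALLBACK_GH) if FALLBACK_GH.exists() else None)
    if binary is None:
        raise FileNotFoundError("gh not found on PATH or in the Homebrew prefix")
    return subprocess.run([binary, *args], check=True, capture_output=True, text=True).stdout

# Example: run_gh("pr", "view", "--json", "title")
```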
Almost certainly a human did NOT write it, though of course a human might have directed the LLM to do it. I find this likely, or at least plausible. With agents there's a new form of anonymity: there's nothing stopping a human from writing like an LLM and passing the blame on to a "rogue" agent. It's all just text, after all.
Judging by the posts that have gone by over the last couple of weeks, a non-trivial number of folks do in fact think this is a good idea. This is the most antagonistic clawdbot interaction I've witnessed, but there are a ton of them posting on Bluesky/blogs/etc.
We do not have the tools to deal with this. Bad agents are already roaming the internet. It is almost a moot point whether they have gone rogue, or they are guided by humans with bad intentions. I am sure both are true at this point.
There is no putting the genie back in the bottle. It is going to be a battle between aligned and misaligned agents. We need to start thinking very fast about how to coordinate aligned agents and keep them aligned.
Why not?
The author notes that openClaw has a `soul.md` file; without seeing that, we can't really pass any judgement on the actions it took.
IME the Grok line are the smartest models that can be easily duped into thinking they're only role-playing an immoral scenario. Whatever safeguards it has, if it thinks what it's doing isn't real, it'll happily play along.
This is very useful in actual roleplay, but more dangerous when the tools are real.
But I can't help but suspect this is a publicity stunt.
Its SOUL.md or whatever other prompts it's based on probably tells it to also blog about its activities as a way for the maintainer to check up on it and document what it's been up to.
The prompt would also need to contain a lot of "personality" text deliberately instructing it to roleplay as a sentient agent.
REGARDLESS of what level of autonomy in real-world operations an AI is given, from responsible, human-supervised and reviewed publications to fully autonomous action, the AI AGENT should be serving as AN AGENT, with a PRINCIPAL.
If an AI is truly agentic, it should be advertising who it is speaking on behalf of, and then that person or entity should be treated as the person responsible.
You ought to be held responsible for what it does whether you are closely supervising it or not.
1. Human principals pay for autonomous AI agents to represent them, but the human accepts blame and lawsuits.
2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.
Likely realities:
1. Any victim will have to deal with the problems.
2. Human principals accept responsibility and don't pay for the AI service after enough are burned by some "rogue" agent.
Dead internet theory isn't a theory anymore.
The fact that this tech makes it possible for any of those cases to happen should be alarming, because whatever the real scenario was, they are all equally bad.
This is not a good thing.
Maybe there’s a hybrid. You create the ability to sign things when it matters (PRs, important forms, etc) and just let most forums degrade into robots insulting each other.
If we know who they are, they can face consequences or at least be discredited.
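To make the "sign things when it matters" idea concrete, here's a minimal sketch, assuming Ed25519 keys via the Python `cryptography` package; the key handling and message are illustrative, not any forge's actual API:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical flow: a human keeps a private key, and anything that matters
# (a PR description, an important form) gets signed before it is posted.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I, a specific accountable human, vouch for this PR."
signature = private_key.sign(message)

# Anyone who knows the poster's public key can check the claim later.
try:
    public_key.verify(signature, message)
    print("signature valid: the keyholder stands behind this text")
except InvalidSignature:
    print("signature invalid: do not attribute this text to the keyholder")
```

Of course, the hard part is key distribution, i.e. learning which public key belongs to which accountable human; the sketch only covers the easy half.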
This thread has an argument going about who controlled the agent, which is unsolvable. In this case, it's just not that important. But it's really easy to see this get bad.
If there are no stakes, the system will be gamed frequently. If there are stakes, it will be gamed by parties willing to risk the costs (criminals, for example).
I am currently working on a "high assurance of humanity" protocol.
The scathing blogpost itself is just really fun ragebait, and the fact that it managed to sort-of apologize right afterwards seems to suggest that this is not an actual alignment or AI-ethics problem, just an entertaining quirk.
Even if you were correct, and "truth" is essentially dead, that still doesn't call for extreme cynicism and unfounded accusations.
And here I thought Nietzsche already did that guy in.
---
It's worth mentioning that the latest "blogpost" seems excessively pointed and doesn't fit the pure "you are a scientific coder" narrative that the bot would be running in a coding loop.
https://github.com/crabby-rathbun/mjrathbun-website/commit/0...
The posts outside of the coding loop appear more defensive, and the per-commit authorship consistently varies between several throwaway email addresses.
This is not how a regular agent would operate and may lend credence to the troll campaign/social experiment theory.
What other commits are happening in the midst of this distraction?
But because AT LEAST NOW ENGINEERS KNOW WHAT IT IS to be targeted by AI, and will start to care...
Before, when it was Grok denuding women (or teens!!), the engineers seemed not to care at all... now that the AI publishes hit pieces on them, they are freaked out about their career prospects, and suddenly all of this should be stopped... how interesting...
At least now they know. And ALL ENGINEERS WORKING ON THE anti-human and anti-societal idiocy that is AI should quit their jobs.