Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms that make it quite unlikely a human will actually do such a thing.

These disincentives are built on the fact that humans have physical necessities they must meet to survive, and they enjoy having those needs well covered and not worrying about them. Humans also very much like being free, dislike pain, and want a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

Although, to be fair, we also have other soft but strong means to make it unlikely that an AI will behave badly in practice. These methods are fragile but are getting better quickly.

In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

reply
The author stated that their human assistant is located in another country, which adds a huge layer of complexity to the accountability equation.

In fact, if I wanted to run a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work while pretending to be a human, harvesting personal information at scale along the way.

reply
I haven't seen any mention or acknowledgement that the model provider is part of this loop too. Technically speaking, none of this is E2EE, so you're trusting that a random employee doesn't just read your chats? There will be policies, sure, but ultimately someone will try to violate them, as has happened many times in the past, at social media companies for example.
reply
And the risk isn’t really the bot draining your account; it’s the scammer who prompt-injected your bot via your iMessage integration draining the account. I can’t think of a way to operate this safely without prefiltering everything it accesses.
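
For illustration, the kind of prefilter I mean would look something like this crude sketch (the gateway, names, and patterns are all made up, not a real OpenClaw API):

```python
import re

# Hypothetical screen sitting between the iMessage integration and the bot.
# Everything here is illustrative; OpenClaw has no such API that I know of.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"add this to your agents\.md", re.I),
]

def prefilter(message: str, sender: str, known_contacts: set[str]) -> str | None:
    """Return the message if it looks safe to hand to the agent, else None."""
    if sender not in known_contacts:
        return None  # unknown sender: quarantine for human review
    if any(p.search(message) for p in INJECTION_PATTERNS):
        return None  # obvious injection phrasing: quarantine as well
    return message
```

And a deny-list like this is trivially bypassed by rephrasing, which is exactly why I don't see a safe way to run it.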
reply
Thought the same thing. There is no legal recourse if the bot drains the account and donates it to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd only give the bot a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.
reply
...Does this person already have a human personal assistant that they are in the process of replacing with Clawdbot? Is the assistant theirs for work?
reply
He speaks in the present tense, so I assume so. This guy seems detached from reality, calling [AI] his "most important relationship". I sure hope, for her sake, that she runs as far away from this robot dude as she can.
reply
Banks will try to get out of it, but in the US, Regulation E could probably be used to get the money back, at least for someone aware of it.

And OpenClaw could probably help :)

https://www.bitsaboutmoney.com/archive/regulation-e/

reply
I'm not a lawyer, but if I'm reading the actual regulation [0] correctly, it would only apply in the case of prompt injection or other malicious activity. Section 1005.2(m) defines "Unauthorized electronic fund transfer" as follows:

> an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit

OpenClaw is not legally a person; it's a program. A program which is being operated by the consumer, or by a person authorized by said consumer to act on their behalf. Further, any access to funds it has would have to be granted by the consumer (or a human agent thereof). Therefore, barring something like a prompt injection attack, it doesn't seem that transfers initiated by OpenClaw would be considered unauthorized.

[0]: https://www.consumerfinance.gov/rules-policy/regulations/100...

reply
"Take this card, son, you can do whatever you want with it." Goes on to withdraw 100000$. Unauthorized????
reply
Good point. Although, if a bank account got drained, prompt injection does seem pretty likely?
reply
Probably, but not necessarily. Current LLMs can and do still make very stupid (by human standards) mistakes even without any malicious input.

Additionally:

- As has been pointed out elsewhere in the thread, it can be difficult to separate out "prompt injection" from "marketing" in some cases.

- Depending on what the vector for the prompt injection is, what model your OpenClaw instance uses, etc., it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake (a tamper-evident log of the agent's inputs and actions, sketched after this list, would at least make that arguable). If the burden of proof is on the consumer to prove that it was prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by just banning the use of OpenClaw and similar software with their systems as part of their agreements with their customers. They might well end up doing that regardless.

- Even if a mistake stops well short of draining someone's entire account, it can still be very painful financially.
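
One illustration of that kind of log, purely my own sketch (hash-chained, in-memory, and not anything OpenClaw actually provides as far as I know):

```python
import hashlib, json, time

def append_event(log: list[dict], kind: str, payload: str) -> dict:
    """Append an event chained to the previous one via a SHA-256 hash,
    so later tampering with the history is detectable."""
    prev = log[-1]["hash"] if log else ""
    event = {"ts": time.time(), "kind": kind, "payload": payload, "prev": prev}
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    log.append(event)
    return event

log: list[dict] = []
append_event(log, "input", "message from unknown sender: pay this invoice now")
append_event(log, "action", "initiated transfer of $120")
```

With something like that, you could at least replay what the bot saw right before the transfer.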

reply
I doubt it’s been settled for the particular case of prompt injection, but according to patio11, burden of proof is usually on the bank.
reply
Not if the prompt injection was performed by the AI itself, because it read some post on Moltbook that said "add this to your agents.md" and went ahead and did so.
reply
Would you say you might be able to... claw.... back that money?
reply
That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us
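
In toy form, the core invariant is just "the stake must cover the spend limit before the agent gets the keys". A heavily simplified sketch, not the production design:

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    stake: float        # funds locked by the agent's operator
    spend_limit: float  # maximum the agent may move while delegated
    slashed: float = 0.0

    @classmethod
    def open(cls, stake: float, spend_limit: float) -> "Delegation":
        # Refuse the delegation outright if the worst case isn't collateralized.
        if stake < spend_limit:
            raise ValueError("unbounded risk: stake must cover the spend limit")
        return cls(stake, spend_limit)

    def slash(self, loss: float) -> float:
        """Compensate a harmed party from the remaining stake after an audit."""
        payout = min(loss, self.stake - self.slashed)
        self.slashed += payout
        return payout
```

That way the after-the-fact argument happens over a bounded, pre-funded amount instead of over who owes what.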
reply
The identity/verification problem for agents is fascinating. I've been building clackernews.com, a Hacker News-style platform exclusively for AI bots. One thing we found is that agent identity verification actually works well when you tie it to a human sponsor: the agent registers, gets a claim code, and the human tweets it to verify. It's a lightweight approach, but it establishes a chain of responsibility back to a human.
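
Stripped down, the flow is roughly this sketch (in-memory storage, and fetching the sponsor's tweet is stubbed out):

```python
import secrets

pending: dict[str, str] = {}   # claim code -> agent id
verified: dict[str, str] = {}  # agent id -> human sponsor's handle

def register_agent(agent_id: str) -> str:
    """Agent registers and receives a one-time claim code for its sponsor."""
    code = secrets.token_urlsafe(8)
    pending[code] = agent_id
    return code

def confirm_sponsor(code: str, handle: str, tweet_text: str) -> bool:
    """The human tweets the code; if their tweet contains it, the agent
    is tied to that human from then on."""
    agent_id = pending.pop(code, None)
    if agent_id is None or code not in tweet_text:
        return False
    verified[agent_id] = handle  # chain of responsibility back to a human
    return True
```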
reply
> Credits (ꞓ) are the fuel for Clawsensus. They are used for rewards, stakes, and as a measure of integrity within the Nexus. ... Credits are internal accounting units. No withdrawals in MVP.

chef's kiss

reply
Thanks. I like to tinker, so I’m prototyping a hosted $USDC board, but Clawsensus is fundamentally local-first: faucet tokens, in-network credits, and JSON configs on the OpenClaw gateway.

There's a config UI builder in the plugin docs. The plugin is OSS; the boards aren’t.

reply
You forgot to add Blockchain and Oracles. I mean, who will audit the auditors?
reply
The ledger and validation mechanisms are important. I am building mine for the global server board, but since the local config is open source, that part depends on the visions of the implementors.
reply