I agree with the sentiment that there are use cases for web scraping where an agent is preferable to a cron job, but I think your particular example can certainly be achieved with a cron job and a basic parser script. Just have Claude write it.
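For concreteness, the cron side of that is one line; the script path and schedule here are hypothetical placeholders, not anything from the thread:

```shell
# Hypothetical crontab entry: run the page-checker script every morning at 7:00.
# check_school_site.py would be the "basic parser script" mentioned above.
0 7 * * * /usr/bin/python3 /home/me/check_school_site.py
```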
reply
I didn't say it's not doable. I'm not even saying it's hard. But nothing beats telling Claw to do it for me while I'm in the middle of groceries.

Put another way: If it can do it (reliably), why on Earth would I babysit Claude to write it?

The whole point is this: when AI coding became a thing, many folks rediscovered the joy of programming, because they could now use Claude to code up stuff they wouldn't otherwise have bothered to write. The barrier to entry went down. OpenClaw is simply that taken to the next level.

And as an aside, let's just dispense with parsing altogether! If I were writing this as a script, I would simply fetch the text of the page and have the script send it to an LLM instead of parsing it. Why worry about parsing bugs in a one-off script?
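A minimal sketch of that "skip the parser, ask the model" approach, using only the standard library. The question text is a hypothetical placeholder, and the actual LLM call is deliberately left as a stub, since any chat-completion API would do:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect the visible text of a page, skipping <script>/<style> contents."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def page_text(html: str) -> str:
    """Flatten raw HTML to its visible text, no site-specific parsing."""
    extractor = TextExtractor()
    extractor.feed(html)
    return "\n".join(extractor.chunks)


def build_prompt(html: str, question: str) -> str:
    # Instead of parsing for specific elements, hand the whole page text
    # to the model and let it find the answer.
    return f"{question}\n\n---\n{page_text(html)}"


# The LLM call itself is a placeholder; plug in whatever client you use:
# answer = llm_client.complete(build_prompt(fetched_html, "What changed today?"))
```

The point of the sketch: the only "parsing" is a generic text flattener, so there are no site-specific selectors to break when the page layout changes.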

reply
Scripts fail. Agents exfiltrate your data because someone planted a prompt injection on the school's website. Make sure it's a choice, not ignorance of the risks.
reply
> Scripts fail.

Which is totally fine for the majority of tasks.

> Agents exfiltrate your data

They can only exfiltrate the data you give them. What's the worst a prompt injection attack will get them?

reply
Container security is an entire subfield of infosec. For example: https://github.com/advisories/GHSA-w235-x559-36mg

People on both sides are just getting started finding all the ways to abuse these tools' security assumptions, or to protect you from that abuse. RSS is the right tool for this problem, and I would be surprised if their CMS doesn't produce a feed on its own.
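The RSS version is also a few lines of standard library. The feed structure below is the standard RSS 2.0 `item`/`title`/`link` layout; the idea of tracking "seen" links is my addition, not something from the thread:

```python
import xml.etree.ElementTree as ET


def new_items(rss_xml: str, seen_links: set) -> list:
    """Return (title, link) pairs for feed items whose link hasn't been seen yet."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            items.append((title, link))
    return items
```

Run it from cron, persist the seen links between runs, and notify on anything new: no agent, no parsing of HTML, and nothing on the page can inject instructions into it.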

reply
I don't use a container. I use a VM.

I'm not totally naive. I had the VM fairly hardened originally, but it proved to be inconvenient. I relaxed it so that processes on the VM can see other devices on the network.

There's definitely some risk to that.

reply