Computer use (as Anthropic describes it in the article) is an LLM controlling a computer: it watches a video feed of the display and drives the mouse and keyboard.
I guess that's one way to get around robots.txt: claim that you would respect it, but since the bot is not technically a crawler, it doesn't apply. It's also an easier sell not to identify the bot in the user-agent string because, hey, it's not a script, it's using the computer like a human would!
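For context, here's a minimal sketch of what "respecting robots.txt" means mechanically for a bot that does identify itself (the agent name and URLs are placeholders):

```python
# What "respecting robots.txt" means in practice: a bot that declares an
# identity checks the site's policy before fetching. "ExampleAgent" and the
# URLs below are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

# A self-identified crawler can be told "no" here; an agent presenting
# itself as a human-driven browser never asks the question at all.
print(rp.can_fetch("ExampleAgent/1.0", "https://example.com/some/page"))
```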
> hundreds of tasks across real software (Chrome, LibreOffice, VS Code, and more) running on a simulated computer. There are no special APIs or purpose-built connectors; the model sees the computer and interacts with it in much the same way a person would: clicking a (virtual) mouse and typing on a (virtual) keyboard.
> where the model interacts with the GUI (graphical user interface) directly.
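For a concrete picture of that loop, here's a rough sketch of what a single vision-driven step looks like; the model call is a stub and the action schema is illustrative, not Anthropic's actual tool API:

```python
# Rough sketch of one vision-driven "computer use" step. The model call is a
# stub and the action names are illustrative, not Anthropic's actual schema.
import io

import pyautogui  # real library that drives the local mouse and keyboard


def ask_model(screenshot_png: bytes) -> dict:
    """Stub: send the screenshot to an LLM and get back a single action."""
    raise NotImplementedError


def run_step() -> None:
    # 1. Capture what a human would see.
    image = pyautogui.screenshot()
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")

    # 2. Ask the model what to do, given only pixels.
    action = ask_model(buffer.getvalue())

    # 3. Replay the model's decision as mouse/keyboard input.
    if action["type"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["type"] == "type":
        pyautogui.write(action["text"])
```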
If the ultimate goal is having an LLM control a computer, round-tripping through a UX designed for bipedal bags of meat with weird jelly-filled optical sensors is wildly inefficient.
Just stay in the computer! You're already there! Vision-driven computer use is a dead end.
So at this point we're talking about participating in the (very old) arms race between scrapers & content providers.
If enough people want agents, then services should (or will) provide agent-compatible APIs. The video round-trip remains stupid from a whole-system perspective.
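A hedged sketch of what that alternative looks like: the agent calls a documented endpoint (hypothetical URL and fields) and identifies itself, with no pixels involved:

```python
# Hypothetical agent-compatible API call: a structured request/response
# instead of screenshots and clicks. The endpoint and fields are invented.
import json
from urllib import request


def create_event(title: str, start_iso: str) -> dict:
    payload = json.dumps({"title": title, "start": start_iso}).encode()
    req = request.Request(
        "https://calendar.example.com/api/v1/events",  # made-up endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            # The agent identifies itself rather than posing as a human browser.
            "User-Agent": "ExampleAgent/1.0 (+https://example.com/agent)",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```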