upvote
> for normie agents to take off in the way that you expect, you're going to have to grant them with full access

At this point it's a foregone conclusion this is what users will choose. It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.

The threats are real, but it's just a product opportunity to these companies. OpenAI and friends will sell the poison (insecure computing) and the antidote (Mythos et all) and eat from both ends.

Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

I don't want this, I just think it's going down that route.

reply
Their solution will be to push mandatory and nonconsensual updates to your devices which limit your device and your freedom in the name of security. Like Google is doing to Android in September. You will no longer be able to install "unverified" software on anything. To address prompt injection attacks they're probably working on an approach where your data all has to be in the cloud and subject to security scans. That's already basically the model for Google Workspace, Google Drive and Chromebooks.

The model will get full access to your data, but in the name of security, you will only be permitted to have data that is cloud-hosted; local storage will effectively just be cache.

The era of the general computer will end, and the products you purchased from these companies will be nonconsensually altered and limited.

I'm so glad I switched to Linux more than a decade ago. At least on the PC there will still be an open source ecosystem for a long time to come, it may have less features but I'm willing to accept that.

Knowing that they can change what you bought overnight with a single nonconsensual update, think very, very carefully about who you purchase all of your future technology from. Google's upcoming nonconsensual degradation of Android should be a lesson for everybody.

reply
>Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

As a proud neo-luddite, I'm watching the AI hype with grim amusement and I'll tell you hwhat, it doesn't look like a good time. Even putting to one side the planetary scale economic crash that is incoming, all the hypers seem to be on some sort of treadmill that is out of their control and it simply doesn't look like fun.

reply
> It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.

The concerning aspect is how others' content being scanned into systems don't have any knowledge or consent. Having private PII/files/code/emails/etc being read and/or accidentally shared by the agent online.

reply
> Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

Honestly, it's alright.

Just think of what we could do with computers up until this point. We keep all those abilities.

And more, even, because the industry still keeps churning out new local LLMs. So you even gain more capabilities than right now. Just not at the rate of the bleeding edge.

Which is just like the Linux desktop, essentially. It's fine, really. There is no need to consume the bleeding edge. You will be fine.

reply
There was a recent Stanford study which showed that AI enthusiasts and experts and the normies had very different sentiment when it came to AI.

I think most people are going to say they dont want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?

Theres lots of cases of great highly useful LLM tools, but the moment they scale up you get slammed by the risks that stick out all along the long tail of outcomes.

reply
I agree, in general we are going to find that ultimately most employee end users don't want it. Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.

On the other hand, entrepreneurs and managers are going to want it for their employees (and force it on them) for the above reason.

reply
It's interesting how differently people can think.

I couldn't imagine thinking "I'm gonna do this 0.1x as fast as I could, wasting my life away with pointless extra work, to spite my employer"

reply
If everyone becomes 10x more productive it won’t mean the companies cash flow 10x’s. Where value is loose there is competition, so in theory everyone should win. Unless nobody else can compete to capture that loose 10x value, in which case congratulations, you are now a unicorn.

Of course in reality in the short term what happens is companies lay off people to increase margins. Times will be tough for workers, and equity keeps gravitating towards those who already had it.

reply
> I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.

Those are productivity increases that got our standard of living to where it is. Fewer people doing the same amount of work has, historically speaking, freed people from their current job, allowing them to work on something else.

It's that analogy of the horse, they used to be farm animals. Now, fewer of them are 'employed' but they're much nicer jobs. I'm not sure if the same is true for us this time around though as new jobs being created have increasingly been highly skilled which means the majority can't apply.

reply
There was a long and great ravine of suffering between the advent of the Industrial Revolution and our time of bounty.
reply
I dont see companies doing that. it can be business ending. only AI bros buying mac mini in 2026 to setup slop generated Claws would do that but a company doing that will for sure expose customer data.
reply
Big companies are exposing customer data all the time, and they are doing all fine. The more criminal negligence, the richer.
reply
> For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue.

Strongly agreed.

I saw a few people running these things with looser permissions than I do. e.g. one non-technical friend using claude cli, no sandbox, so I set them up with a sandbox etc.

And the people who were using Cowork already were mostly blind approving all requests without reading what it was asking.

The more powerful, the more dangerous, and vice versa.

reply
How many of these threat vectors are just theoretical? Don’t use skills from random sources (just like don’t execute files from unknown sources). Don’t paste from untrusted sites (don’t click links on untrusted sites). Maybe there are fake documentation sites that the agent will search and have a prompt injected - but I haven’t heard of a single case where that happened. For now, the benefits outweigh the risk so much that I am willing to take it - and I think I have an almost complete knowledge of all the attack vectors.
reply
Systems have been caught out that review pull requests, that’s a simple and clear one. The more obvious to me for most people is anything you do that interacts with your email without an explicit approve list of emails to read.
reply
i think you lack creativity. you could create a site that targets a very narrow niche, say an upper income school district. build some credibility, get highly ranked on google due to niche. post lunch menus with hidden embedded text.

the attack surface is so wide idk where to start.

reply
Why would my agent retrieve that lunch menu?
reply
[dead]
reply