https://x.com/rauchg/status/2045995362499076169

> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.

> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.

> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Still no email blast from Vercel alerting users, which is concerning.

reply
> Still no email blast from Vercel alerting users, which is concerning.

On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.

But on the other hand... It's Sunday. Unless you're tuned in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

reply
> On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams

This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.

And for this company specifically, Guillermo is not one to defer to comms or legal.

reply
> the CEO can't just write a mass email without approval from legal or other comms teams.

They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.

reply
Has anyone actually gotten an email from Vercel confirming their secrets were accessed? Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed.
reply
Hope-based security should not be a thing. Did you rotate your secrets? Did you audit your platform for weird access patterns? Don’t sit waiting for that vercel email.
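
For the rotation step, even a crude triage helps: dump your variable names (e.g. with `vercel env ls`) and flag the ones that look like credentials first. A minimal sketch, with made-up variable names and a pattern list you'd extend for your own naming scheme:

```python
import re

# Substrings that usually mark a credential; extend for your own naming scheme.
SENSITIVE = re.compile(r"SECRET|TOKEN|KEY|PASSWORD|PRIVATE|CREDENTIAL", re.I)

def rotation_checklist(names: list[str]) -> list[str]:
    """Return the env var names that look like credentials, sorted.

    This is only a triage aid: after an incident like this, everything
    the compromised integration could read should eventually rotate.
    """
    return sorted(n for n in names if SENSITIVE.search(n))

# Hypothetical variable names, for illustration only.
env = ["DATABASE_URL", "STRIPE_SECRET_KEY", "NEXT_PUBLIC_SITE_NAME", "JWT_SIGNING_KEY"]
print(rotation_checklist(env))  # → ['JWT_SIGNING_KEY', 'STRIPE_SECRET_KEY']
```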
reply
Of course we rotated. But we don't even know when the secrets were stolen versus when we were told, so we're missing a ton of info needed to _fully_ triage.
reply
Nope... I feel you. "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification.

If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.

reply
I'm going down with the ship over on X.com, the Everything App. There's a passel of very important tech people running a playbook where posting to X.com is considered sufficient to be unimpeachable on communication, despite the site's rather beleaguered state and traffic.
reply
The actual app name would be good to have. It's understandable that they don't want to throw anyone under the bus, but not revealing which app/service this was just delays people taking action.
reply
I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name) but it now says "Error 401: invalid_client; The OAuth client was not found." so it was probably deleted by the oauth client owner.
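
For anyone who wants to reproduce the check: the consent-screen URL for a Google OAuth client can be reconstructed from the client ID alone, and opening it in a browser shows either the app's name on the consent screen or that same "Error 401: invalid_client" page if the client has been deleted. A quick sketch (the client ID and redirect URI below are made up):

```python
from urllib.parse import urlencode

# Google's public OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def consent_url(client_id: str) -> str:
    """Build the consent-screen URL for a given OAuth client ID.

    If the client still exists, the page shows the app's registered
    name; if it was deleted, Google serves
    "Error 401: invalid_client: The OAuth client was not found."
    """
    params = {
        "client_id": client_id,             # the ID under investigation
        "redirect_uri": "http://localhost", # placeholder; never reached
        "response_type": "code",
        "scope": "openid",
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Hypothetical client ID, for illustration only.
print(consent_url("1234567890-abc.apps.googleusercontent.com"))
```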
reply
deleted
reply
Makes it even more relevant to have the actual app or vendor name - who’s to say they just removed it to save face and won’t add it later?
reply
I don’t understand why they can’t just directly name the responsible app as it will come out eventually.
reply
deleted
reply
Which itself was the subject of a broader compromise, as far as I can tell.
reply
Maybe legal red tape?
reply
Yes. The OAuth client ID is indisputable, and it seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…

Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.

reply
They might be buying time to sell the relevant stock
reply
It looks like the app has already been deleted
reply
[flagged]
reply
This was a Google OAuth app, and it was phished. So... no.
reply
Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.

This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.

Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

reply
This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnecting them, taken to extremes that were never intended.

We need a different hosting model.

reply
Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

Instead of "programs that do one thing and do it well", "write programs which are designed to be used together" and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.
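
As a toy illustration of "strict on which inputs they accept": Python's `int()` happily parses `" 80 "`, `"+80"`, `"8_0"`, and even non-ASCII decimal digits, while a strict parser takes only plain ASCII digits and range-checks the result. A sketch:

```python
def parse_port(raw: str) -> int:
    """Strictly parse a TCP port number from untrusted input.

    int() alone would accept " 80 ", "+80", "8_0", and non-ASCII
    decimal digits; here only plain ASCII digits in 1-65535 pass.
    """
    if not (raw.isascii() and raw.isdigit()):
        raise ValueError(f"not a port number: {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("443"))  # → 443
# parse_port(" 80 ")      # raises ValueError, although int(" 80 ") == 80
```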

reply
It's not a hosting model, it's a fundamental failure of software design and systems engineering/architecture.

Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.

When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.

reply
> We need a different hosting model.

There really isn't an option here, IMO.

1. Somebody does it

2. You do it

Much happier doing it myself tbh.

reply
In my mind the unix philosophy leads to running your cloud on your own hardware or VPS's, not this.
reply
exactly this: write, not use some sh*t written by some dude from Akron, OH two years ago
reply
That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago.

And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.

And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.

reply
Yeah, definitely no difference between GNU coreutils and some vibe-coded AI tool released last month that wants full OAuth permissions.
reply
I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me the difference while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument, because you know the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security-review it, then vibe code the security fixes, then ask the LLM to review the fixes and the app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc.: stick the LLM in endless loops for every vertical you care about.

Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harnesses/tools/etc. The goalpost is always one foot away.

reply
I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy that you either produce "vulnerable vibe-coded AI slop running on a managed service" or "pure handcrafted code running on a self-hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.

reply
It's such a bad faith argument, they basically make false equivalencies with LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy actually is, it's obvious that it doesn't boil down to "use many small tools" or "use many dependencies"; it's so different that the comparison is not even wrong [0].

In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was later formalised as:

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.

Then there is the argument that "AI is just a higher level compiler". That is akin to saying "AI is just a higher level musical instrument", except it's not, because it functions completely differently from a musical instrument and people operate it in a completely different way. The argument seems to be that since both produce music, in the same way both a compiler and an LLM generate "code", they are equivalent.

The overarching argument is that only outputs matter, except when they don't, because the LLM produces flawed outputs. So really it's just that the outputs are equivalent in the abstract, if you ignore concrete real-world reality. By that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong

reply
So it’s not a binary thing, there’s context and nuance?
reply
Embrace the suck.
reply
TempleOS, is that you?
reply