> A Vercel employee got compromised via the breach of an AI platform customer called Context.ai that he was using.
> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.
> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.
Still no email blast from Vercel alerting users, which is concerning.
On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.
But on the other hand... It's Sunday. Unless you're tuned in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.
This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.
And for this company specifically, Guillermo is not one to defer to comms or legal.
They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.
If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.
Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.
This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.
Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?
We need a different hosting model.
Instead of "programs that do one thing and do it well", "write programs which are designed to be used together", and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.
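To make the "strict about which inputs they accept" idea concrete, here is a minimal sketch. The handler and field names are hypothetical, not from any real API; the point is the posture: reject unknown fields and anything outside an explicit allowlist, rather than tolerantly ignoring whatever you don't recognise.

```python
import re

# Hypothetical deploy-hook handler illustrating strict input handling.
# Everything not explicitly allowed is rejected.
ALLOWED_ACTIONS = {"deploy", "rollback"}
PROJECT_RE = re.compile(r"^[a-z0-9-]{1,64}$")

def handle_hook(payload: dict) -> str:
    # Unknown fields are an error, not something to silently skip.
    unknown = set(payload) - {"action", "project"}
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")

    action = payload.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")

    project = payload.get("project", "")
    # Allowlist the shape of the value, not just its presence.
    if not isinstance(project, str) or not PROJECT_RE.match(project):
        raise ValueError("invalid project name")

    return f"{action}:{project}"
```

The inversion matters: instead of enumerating what is malicious (a losing game), you enumerate what is acceptable and refuse the rest.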
Imagine if cars were developed like websites, with your brakes depending on a live connection to a third-party plugin. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.
When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.
There really isn't an option here, IMO.
1. Somebody does it
2. You do it
Much happier doing it myself tbh.
And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.
And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.
The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. Stick the LLM in endless loops for every vertical you care about.
Pointing to failed experiments like the browser or compiler ones somehow doesn't seem to deter AI maximalists. They would simply claim they needed better models/skills/harnesses/tools/etc. The goalpost is always one foot away.
You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.
And the comment I was replying to said something about not trusting something written in Akron, OH two years ago, which makes no sense and is barely an argument; I was mostly pointing out how silly that comment sounds.
Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that the comparison is not even wrong [0].
In their 1974 Unix paper, Ritchie and Thompson give the following design considerations:
- Make it easy to write, test, and run programs.
- Interactive use instead of batch processing.
- Economy and elegance of design due to size constraints ("salvation through suffering").
- Self-supporting system: all Unix software is maintained under Unix.
In what way does that correspond to "use dependencies" or "use AI tools"? This was later formalised as:
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
This has absolutely nothing in common with pulling in thousands of dependencies or using hundreds of third-party services.
Then there is the argument that "AI is just a higher-level compiler". That is akin to me saying "AI is just a higher-level musical instrument", except it's not, because it functions completely differently from a musical instrument and people operate the two in completely different ways. The argument seems to be that since both of them produce music, in the same way both a compiler and an LLM generate "code", they are equivalent.

The overarching argument is that only outputs matter, except when they don't, because the LLM produces flawed outputs; so really the outputs are only equivalent in the abstract, if you ignore concrete real-world reality. By that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!