Now, as another commenter pointed out, maybe this is just inherent complexity in this space. But more secure defaults could go a long way toward making this safer in practice.
Update: now that I've finished reading the article, my impression is that the complexity is mostly inherent to this problem space. I'd be glad to be proven wrong, though!
Releases go to the release webhook, which should output nothing and should ideally run on a separate machine/VM with firewall rules and DNS blocks that prevent traffic to anywhere not strictly required.
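A minimal sketch of what that egress lockdown could look like as an nftables ruleset. The addresses are placeholders from the documentation IP ranges; the real allowlist depends on your registry and resolver:

```
# /etc/nftables.conf fragment for the release VM (placeholder IPs)
table inet release_egress {
  chain out {
    type filter hook output priority 0; policy drop;
    ip daddr 203.0.113.10 tcp dport 443 accept  # artifact registry only
    ip daddr 192.0.2.53 udp dport 53 accept     # internal DNS only
  }
}
```

With a default-drop output policy, an exfiltration attempt from a compromised release step simply has nowhere to go.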
Things are a lot harder to secure with modern dynamic infrastructure, though. Makes me feel old, but things were simpler when you could say service X has IP Y and add firewall rules around it. Nowadays that service probably has 15 IP addresses that change once a week.
There’s no single repository of curated packages, as is typical in a distribution: instead, actions pull other actions, and they’re basically very complex wrappers around scripts that download binaries from all over the place.
For lots of very simple tasks, instead of installing a distribution package and running a single command, a whole “action” is used, which creates an entire layer of abstraction over that command.
It’s all massive complexity on top of huge abstractions, none of which was designed with security in mind: security was just gradually bolted on over the years.
You build it, I build it, we get the same hash. That allows anyone to prove a published binary is a faithful compilation of the given input source code.
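The property is easy to demonstrate with a toy stand-in for the build step (a real reproducible build additionally needs a pinned toolchain, normalized timestamps, and so on; `gzip -n` is used here only because it is deterministic):

```shell
# Stand-in "build": deterministic transform of the source (-n omits
# the timestamp that would otherwise make each run differ).
build() {
  gzip -n -c "$1"
}
printf 'fn main() {}\n' > src.txt
build src.txt > build_a.bin   # "you build it"
build src.txt > build_b.bin   # "I build it"
sha256sum build_a.bin build_b.bin
cmp -s build_a.bin build_b.bin && echo "same hash: faithful compilation"
```

If the two hashes ever diverge for the same pinned input, you have caught either nondeterminism or tampering.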
As a practical step, one could try using webhooks to integrate their github repo with literally any other CI provider. This would at least give you a single, low-coupling primitive to build your workflows on. It would not, in any way, eliminate the domain's inherent complexity (secrets, 3rd party contributions, trusted publishing, etc.), but it starts out safe because by default it doesn't do anything - it's just an HTTP call that gets fired under certain conditions.
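Concretely, the primitive is a signed HTTP POST: GitHub signs the JSON payload with a shared secret using HMAC-SHA256 and sends the result in the `X-Hub-Signature-256` header, and the receiving CI endpoint recomputes it before trusting the event. A sketch with placeholder values (the payload, secret, and URL are all illustrative):

```shell
# Placeholder release event and webhook secret.
PAYLOAD='{"action":"published","release":{"tag_name":"v1.2.3"}}'
SECRET='replace-with-webhook-secret'

# What GitHub would put in the X-Hub-Signature-256 header:
SENT_SIG="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"

# What the receiving CI endpoint recomputes before acting on the event:
RECV_SIG="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"

[ "$SENT_SIG" = "$RECV_SIG" ] && echo "signature verified"
# The delivery itself is just:
# curl -X POST https://ci.example.com/hook \
#   -H "X-Hub-Signature-256: $SENT_SIG" -d "$PAYLOAD"
```

Everything beyond that one verified call is whatever you choose to build, on whichever machine you choose to build it.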
Not many of them allow for immutable releases. And even if they do, nothing stops you from releasing a patch version that will most likely be pulled in automatically by many, many projects during their builds.
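On the consuming side, at least, mutable tags can be opted out of by pinning to a full commit SHA. A workflow fragment showing the difference (the SHA below is a placeholder, not a real commit):

```
# .github/workflows/release.yml (fragment)
steps:
  # Mutable: whoever controls the repo can re-point the v4 tag
  # at different code at any time.
  - uses: actions/checkout@v4
  # Pinned: a full commit SHA can't be silently swapped out.
  # (placeholder SHA, not a real checkout release)
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4
```

The trade-off is that you now need tooling to notice and review upstream updates, since nothing is pulled in automatically anymore.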
The whole dependency ecosystem is currently broken. That’s why it’s (relatively) easy to attack via the supply chain.
The only way to be really secure is to have your own registry of vetted dependencies pinned to exact versions, and to maintain your own upgrade pipeline.
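Stripped to its essentials, such a registry is a store of reviewed artifacts plus a hash manifest that only the review pipeline may write, and that the build verifies against. A toy sketch with illustrative names and contents:

```shell
# Vetted artifacts live in a directory under the team's control.
mkdir -p registry
printf 'vetted dependency contents\n' > registry/libfoo-1.2.3.tar

# Written once, by the review/upgrade pipeline, at vetting time:
( cd registry && sha256sum libfoo-1.2.3.tar > MANIFEST.sha256 )

# Run at build time: fail closed on anything that wasn't vetted.
( cd registry && sha256sum -c MANIFEST.sha256 ) || { echo "unvetted dependency"; exit 1; }
```

The expensive part is exactly what the comment says: keeping the manifest current means a human re-reviews every upgrade before it lands.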
NO ONE (besides Google) is going to do that. It’s too costly: you need two big teams just to handle that one part.
And yet my team and I at stagex are building a decentralized code review system to handle this anyway. Not waiting around with our fingers crossed for the corpos to solve supply-chain security for us. It has to be a community-led effort.