The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We're already seeing this in a few solutions getting attention from AI-driven security research, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is likely to become problematic. Especially if attackers start exploiting faster and botnets pick up new vulnerabilities more quickly.

But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base - changesets and all - plus the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out the upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.
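To make that concrete, here's a minimal sketch of the "dump everything into a folder" step. All paths, filenames, and the directory layout are hypothetical placeholders - the point is just that the bundle is a plain folder an agent can read:

```python
# Hedged sketch: bundle upgrade context into one folder for an AI agent to review.
# Every path and filename below is a hypothetical placeholder.
import shutil
import subprocess
from pathlib import Path

def collect_upgrade_context(repo: Path, out: Path) -> Path:
    """Copy changesets, config, domain notes, and past failures into `out`."""
    out.mkdir(parents=True, exist_ok=True)
    # Changesets: full history with patches, so the agent sees what actually changed.
    log = subprocess.run(
        ["git", "-C", str(repo), "log", "--patch"],
        capture_output=True, text=True,
    ).stdout
    (out / "changesets.txt").write_text(log)
    # Dependent configuration, internal domain knowledge, previous upgrade failures.
    for src in ["config", "docs/domain-notes.md", "docs/postmortems"]:
        p = repo / src
        if p.is_dir():
            shutil.copytree(p, out / p.name, dirs_exist_ok=True)
        elif p.is_file():
            shutil.copy(p, out / p.name)
    return out
```

From there it's "point the agent at `out` and ask for an upgrade risk assessment" - the value is mostly in what you put in the folder, not the script.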

It won't be perfect, but combine it with a good tiered rollout and a much higher rollout velocity becomes entirely possible.
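The tiered-rollout half can be as simple as a gate between stages. A minimal sketch, where the tier fractions, the 1.5x threshold, and the function names are all illustrative assumptions, not a real system:

```python
# Hedged sketch of a tiered rollout gate; tiers and thresholds are illustrative.
ROLLOUT_TIERS = [0.01, 0.05, 0.25, 1.0]  # fraction of the fleet per stage

def next_tier(current: float, error_rate: float, baseline: float) -> float:
    """Advance one tier if errors stay near baseline; otherwise roll back to 0."""
    if error_rate > baseline * 1.5:  # regression detected: abort the rollout
        return 0.0
    idx = ROLLOUT_TIERS.index(current)
    return ROLLOUT_TIERS[min(idx + 1, len(ROLLOUT_TIERS) - 1)]
```

An imperfect AI-generated fix is a lot less scary when it can only ever hit 1% of the fleet before the metrics have to vouch for it.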

It's kinda funny to me -- a lot of the agentic hype ends up hugely rewarding good practices: cooperation, documentation, unit testing, integration testing, local test setups.
