But the implementation is comically awful.
Sure, you can "just write natural language" instructions and hope for the best.
But they couldn't fully get away from their old demons and you still have to pay the YAML tax to set the necessary guardrails.
I can't help but laugh at their example: https://github.com/github/gh-aw?tab=readme-ov-file#how-it-wo...
They wrote 16 words in Markdown and... 19 in YAML.
Because you can't trust the agent, you still have to write tons of gibberish YAML.
I'm trying to make sense of it: first you declare permissions, which in their example are read-only.
Then you separately declare output permissions, which are in fact write permissions, just scoped more narrowly than the first set.
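For context, a gh-aw workflow is a Markdown file with YAML frontmatter on top of the natural-language instructions. A minimal sketch of that split might look roughly like this — illustrative only, the field names (e.g. `safe-outputs`, `add-comment`) are from my reading of the README and may not match the current schema exactly:

```yaml
# Hypothetical gh-aw workflow file, e.g. .github/workflows/triage.md
# (field names are illustrative; check the gh-aw docs for the real schema)
---
on:
  issues:
    types: [opened]
permissions:
  contents: read      # broad grant: read-only access to the repository
safe-outputs:
  add-comment: {}     # narrow write capability: the agent may only post a comment
---

Read the newly opened issue and post a short comment
suggesting appropriate labels.
```

The point of the two sections is exactly the asymmetry described above: the `permissions` block caps what the agent can read, while the output block enumerates the few write actions it is allowed to perform.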
Obviously they also absolve themselves of anything that could go wrong by telling users to be careful.
And they also suggest setting up an egress firewall to keep the agents from roaming too freely: https://github.com/github/gh-aw-firewall
Why set up an actual workflow engine on infrastructure managed by IT, with actual security tooling, when you can just glue together a few bits of YAML and Markdown on GitHub, right?
We've fixed the example in the README to be a link; it should be clearer now what's going on.
because helping you isn't the goal
the goal is to generate revenue by consuming tokens
and a never-ending swarm of "AI" "agents" is a fantastic way to do that
If I had a nice CI/CD workflow built into GitHub, rather than rolling my own that runs locally, that might make things a little more automatic and a little easier.
The sensible case for this is for delivering human-facing project documentation, not actual code. (E.g. ask the AI agent to write its own "code review" report after looking at recent commits.) It's implemented using CI/CD solutions under the hood, but not real CI/CD.