It'd be great if the Wrangler CLI could display the required API token permissions upfront during local dev, so you know exactly what to provision before deploying. Even better if there were something like a `cf permissions check` command that tells you which permissions are missing or unneeded for a given API key.
So I prefer to dig through the source of truth, which also helps me build my mental model around the problem.
But I’ve moved to using https://aep.dev style APIs as much as possible (sometimes written with TypeSpec), because the consistency lets you use the prebaked aepcli, or very easily write your own, since everything behaves like known “resources” with a consistent pattern.
Also, Terraform works out of the box, without needing to write a provider.
Currently you can only enforce zone-based (domain-based) permissions, but plenty of resources, such as Workers, don’t belong to zones, so essentially their code can be replaced or deleted with the lowest-level permission. And there’s no way to block it…
Alternatively if you could please allow us to create multiple accounts that share a single super account (for SSO and such), similar to GitHub Enterprise which has Enterprises and Organisations. Then we could have ACME Corp. and ACME Corp (Prod) and segregate the two and resource groups wouldn’t be strictly required.
The only reason we can’t right now is because SSO can’t exist for multiple accounts at once.
The cf permissions check idea from the top comment is great. One thing I've found is that agents are surprisingly good at using CLIs but terrible at diagnosing why a command failed. Clear error messages with the exact fix ("missing scope X, run cf token add --scope X") matter way more for agent usability than the happy path.
A couple of obvious questions: Is it open source (the npmjs page doesn't point to a repo)? And in general, will it be available as a single binary instead of requiring Node.js tooling to install/use? If so, using the recently-acquired Bun or another product/approach?
No long lived tokens, or at least a very straightforward configuration to avoid them.
One option: an easy tool to make narrowly scoped, very short lived tokens, in a file, and maybe even a way to live-update the file (so you can bind mount it).
Another option: a proxy mode that can narrow the scope. So I set it up on a host, then if I want to give a container access to one domain or one bucket or whatever, I ask the host CLI to become a proxy that gives the relevant subset of its permissions to the proxied client, and I connect the container to it.
For example, we're dragging our feet on GitHub config as Terraform at our org, so in the meantime I've been using Claude + the gh cli to deploy changes across repos. I don't need to know / remember the gh cli command to pull or push a ruleset, or script a loop in Bash; I just have to say
> Claude pull the ruleset from <known good repo> and push it to <repo 1>, <repo 2>, <repo 3>
The CLI is also nice because it abstracts away authentication. I have another flow which doesn't have a CLI and Claude is more than happy to interpolate the API key that it read from a config file into the chat history (horrifying).
Or I just go, "I just created this Cloudflare domain. Deploy the site via GH Actions to Cloudflare Pages whenever I push to main. Here are my credentials; put them in GitHub secrets." Or something similarly high level.
The clever thing here is not doing things manually, but making sure you generate automation and scripts if you're going to do the same things repeatedly, along with skill files detailing how and when to call them.
> Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy applications, agents, and platforms to Cloudflare, configure their account, and query our APIs for analytics and logs.
> We want to make every Cloudflare product available in all of the ways agents need.
I'm confused though: why isn't that tool/framework being shown here? What is it and how does it work? Is it similar to the TypeSpec tool someone else posted?
Initial impression:
-h and --help should follow the short / long standard of providing less / more info. Currently both -h and --help show command lists and point at a --help-full flag, and the --help-full output seems to give what I'd expect from -h. This needs to be much better: help should give enough information that a user / coding agent doesn't have to read websites / docs to understand how the feature works.
Completions are broken by default compared to the actual list of commands - i.e. dns didn't show up in the list.
When I ran cf start -h it prompted to install completions (this was odd because completions were already installed / detected). Either way, -h should never do anything interactive.
Some parts of the cli seem very different to the others (e.g. cf domains -h is very different to cf dns -h). Color / lack of color, options, etc.
The short version is for typing on the fly and the long version is for scripts; they should have identical output.
The full thorough documentation should be in man, and/or info.
I agree with your point that most flags should generally treat short versions as exact aliases to long flags, but I just think that a convention that treats -h and --help as concise vs long is 100% reasonable. The distinction is often breadth vs depth.
Having them be different could cause someone to look at -h, and not even know about --help. Or if someone writes a script parsing the output of -h for some reason, someone else might come along and change it to --help expecting it to be the same thing.
This convention existed before clap came into being, but I don't recall when I first saw it. I have been using the command line for just shy of 40 years across various operating systems.
You can just make a `--help-all` (or whatever word you want to use); imo the `--help-all` flag doesn't need a short equivalent because it's not something you'd use frequently.
It's not cosmetic. Uniform help is a way to not let agents hallucinate. Otherwise you end up with invalid commands, or worse, silent ones that go through without doing anything at all, or go totally wrong.
I'd like the ability to create scoped, short-lived tokens from the CLI itself. There's an open GitHub issue (13042) for this.
But there needs to be a twist: tokens should be scopable not just by resource type, but by specific resource ID and action.
One thing I'm curious about: Cloudflare uses TypeScript for Workers and now this CLI, but Rust for the actual edge runtime. Is there a rough heuristic the team uses internally for when TS wins vs when you reach for something else?
https://github.com/danielgtaylor/huma https://github.com/go-fuego/fuego
The restish tool by the author of Huma is functionally correct, but I'm finding the models are not doing a great job at inferring the syntax. Admittedly I am having a hard time following the syntax too.
https://github.com/rest-sh/restish
I need to do proper evals, but it makes me wonder if `curl` or a CLI with more standard args / opts parsing will work better.
Thanks to Cloudflare for sharing their notes. Anyone else figure this out?
A very welcome development - much better for machines to use the APIs - but it would have been welcome even without AI.
I have a few domains on Cloudflare, and when making changes I wish there were a way to apply the same change to multiple domains for consistency.
The CLI preview covering UI actions will make this possible.
This is only partly about the CLI and mostly about the API itself, but a straightforward and consistent way to manage environments would be nice.
I have a project using CF workers and Astro, with an ugly but working wrangler.toml defining multiple environments. When Cloudflare acquired Astro, I assumed that would be a good thing, but the latest version of the Cloudflare plugin (mandatory if you want to use the latest Astro) seems to manage environments in its own special incompatible way.
Previous-co could never get Argo billing to match Argo analytics, and with no support from CF over months, we backed away from CF completely for fear that scaling up would present new surprise unknown/untraceable costs.
Previous-previous-co is probably the largest user of web worker
Cloudflare is quietly rebuilding their entire developer surface with agents as the primary consumer, not humans?
If you like Rust so much, I think you should just completely refactor it.
it was magical
Tools should be tested and quality assured. Something that was utterly missing from Cloudflare's unusable v5 Terraform provider. Quality over quantity, with a UX that has humans in mind!
No, the customers never mattered, but the mythical "LLM agent" is vitally important to cater to.
Clearly everything is retro
Nobody else here ever spent years begging in pull requests for some basic functionality or bug to be fixed, and it never could be, because someone in the company decided they didn't have the time, or didn't think your feature was needed, or decided it wasn't a bug?
How about, has anyone ever had to pin multiple versions of a tool and run the different versions to get around something broken, get back something obsoleted, or fix a backwards-incompatibility?
> you can install it globally by running npm install -g cf
...I'm gonna vibe-code my own version as independent CLI tools in Go, I hope y'all realize. Besides the security issues, besides the complexity, besides the slowness, I just don't want to be smothered by the weight of a monolith that isn't controlled by a community of users. Please keep a stable/backward-compatible HTTP API so this is less difficult. And if Terraform providers have taught us anything, it's that we need copious technical and service documentation to cover the trillion little edge cases that break things. If you can expose this documentation over your API, that will solve a ton of issues.
Please call it flare.
https://github.com/cloudflare/cloudflare-go/tree/v0/cmd/flar...
Seems odd to me. I guess we all live in our bubbles.
If there is some fancy tool out there, "does it have bindings for language X"? X seems to be much more commonly Python than TypeScript.
I wish we would stop building CLIs and instead use something like this:
Node, Python etc. allow arbitrary footgun tech to lose all local data. You have to use better tech.
Am I the only one put off by such language? They talk as if they invented compilers or assembly or Newton's law of gravity.
Why didn't they vibe code support for more? With this on the heels of EmDash, and this being a technical preview, it feels inconsistent.
This scares me more than I'm able to admit. TypeScript sucks, and in my opinion it's way worse than the more commonly used lingua franca of computing, which I would attribute to C. At least C can be used to create shared objects, I guess?
I used to dislike JavaScript a lot after learning it and PHP, then using languages like C#. Then TypeScript came along, making JS much easier to live with, and it has actually become quite nice in some ways.
If you use Deno as your default runtime, it's almost Go-like in its simplicity when you don't need much. Simple scripts, piping commands into the REPL, built-in linting, testing, etc. It's not that bad!
Of course you're welcome to your opinion and we'd likely agree about a lot of what's wrong with it, but I guess I feel a bit more optimistic about TS lately. The runtime is improving, they've got great plans for it, it's actually happening, and LLMs aren't bad at using it either. It's a decent default for me.
Coming from typing systems that are opinionated, first class citizens of their languages, it doesn’t stand up.
You look at libraries like Effect, and it's genuinely incredible work, but you can't help feeling like... Man, so many languages partially address these problems with first-class primitives and control flow tooling.
I'm grateful for their work and it's an awesome project, but it's a clear reflection of the deficiencies in the language and runtime.
With Go you can compile binaries with bindings for other binaries, like duckdb or sqlite and so on. With Deno or Bun, you're out of luck. It's such a drag. Regardless, it's been quite useful at my work to be able to send CLI utilities around and know they'll 'just work'. I maintain a few for scientific data processing and gardening (parsing, analysis, cleaning, etc.), which is why the lack of duckdb bundling is such a thorn. I do wish I could use Go instead and pack everything directly into the binary.
I think the binaries wind up being somewhere around 70mb. That's insane, but these are disposable tools and the cost is negligible in practice.
I'm not sure why; I guess it's because the web itself is already really flexible that I find that the types don't really buy me a lot since I have to encode that dynamism into it.
To be clear, before I get a lecture on type safety and how wonderful you think types are and how they should be in everything: I know. I like types in most languages. I didn't finish but I was doing a PhD in formal methods, and specifically in techniques to apply type safety to temporal logic. I assure you that I have heard all your reasoning for types before.
There's so much momentum behind it from the front-end community alone it's not going anywhere.
IMO using TypeScript sucks because of the Node ecosystem/npm. The language itself is passable.
The performance is so bad that the TypeScript developers are rewriting the compiler itself in Go. [0]
Tells me everything I need to know about how bad TypeScript is from a performance standpoint.
[0] https://devblogs.microsoft.com/typescript/typescript-native-...
Even with Bun, the speed comes from Zig, not TypeScript, and that only proves my point even more.
why does a CLI tool that just wraps APIs need this native performance?