But in practice, it's basically impossible to use that way in conjunction with Workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.
But even without the network issues that have plagued it, I would hesitate to build anything for production on it, because it can't even do transactions, and the product manager for D1 openly stated they won't implement them [0]. Your only way to ensure data consistency is to use a Durable Object, which comes with its own costs and tradeoffs.
https://github.com/cloudflare/workers-sdk/issues/2733#issuec...
The basic idea of D1 is great. I just don't trust the implementation.
For a hobby project it's a neat product for sure.
How did you work around this problem? As in, how do you monitor for hung queries and cancel them?
> D1 reliability has been bad in our experience.
What about reads? We use D1 in prod, and our traffic pattern may not be similar to yours (our workload is async queue-driven, so retries can last on the order of weeks), but we haven't really observed D1 erroring out frequently or for extended periods.
No-downtime snapshots would be the best, but I'd be quite happy with a blocking backup on a set schedule that can be configured from the GUI, the CLI, or a config file. It's a huge PITA having to play 'trust me bro' with clients and their admins using custom workers and backups.
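For what it's worth, the closest thing to a scheduled backup today is wiring it up yourself: a Cron Trigger on a Worker that has both the D1 database and an R2 bucket bound. A minimal wrangler.toml sketch (all names and IDs here are placeholders, not a real config):

```toml
# Hypothetical backup Worker: cron trigger + D1 and R2 bindings
name = "d1-backup"
main = "src/index.ts"

[triggers]
crons = ["0 3 * * *"]  # run daily at 03:00 UTC

[[d1_databases]]
binding = "DB"
database_name = "my-database"       # placeholder
database_id = "<your-database-id>"  # placeholder

[[r2_buckets]]
binding = "BACKUPS"
bucket_name = "my-backups"          # placeholder
```

That still leaves you writing the export logic in the Worker yourself, which is exactly the 'trust me bro' problem: the schedule lives in your deploy config, not anywhere a client admin can see or verify.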
I currently stream it: D1 dump -> worker (encrypt w/ key wrapping) -> R2 on a schedule, then have a container spin up once a day and create changesets from the dumps. An external tool pulls the dumps and changesets.
Cloudflare seems to be building for lock-in and I don't love it. I especially don't understand how you build an OpenRouter and only have bindings for your custom runtime at launch.
Yes, you can see the same "hosted" ones on there, but when you look at the models endpoint, there are far fewer options under the "workers-ai/*" namespace. Is that intentional?
Thanks for the feedback, and good catch. It looks like that endpoint is pulling from a slightly out-of-date data source. The docs [1] and dashboard are currently the best resources for the full catalog, but we'll update that API to match.
[1] https://developers.cloudflare.com/ai/models/
[2] https://developers.cloudflare.com/ai-gateway/features/unifie...
We'll be adding prices to the docs and the model catalog in the dashboard shortly.
In short: the pricing currently matches whatever the provider charges. You can also buy unified billing credits [1], which carry a small processing fee.
> Finally, would be great if this could return OpenAI AND Anthropic style completions.
Agreed! This will be coming shortly. Currently we'll match the provider themselves, but we plan to make it possible to specify an API format when using LLMs.
[1]: https://developers.cloudflare.com/ai-gateway/features/unifie...
I love everything about OpenRouter, so I'm kind of a fanboy.
Rant aside, they are greatly positioned network-wise to offer this service. I do wonder about their pricing and any potential markup on top of token usage?
I presume they won't let you “manage all your AI spend in one place” for free.
Of course they will. In return they get to control who they're routing requests to. I wouldn't be surprised if this turns into the LLM equivalent of “paying for order flow”.
edit: Why the downvotes? It's correct, and it's a risk that competitors handle better, including for their CDN products (compare Bunny CDN). Maybe you are just used to the risk and haven't felt the burn yourself yet. Or you have the mistaken notion that no amount of savings could ever make temporary downtime worthwhile.
Speaking of:
https://news.ycombinator.com/item?id=47787042
I really hope that person gets a resolution from Cloudflare that doesn't financially ruin them.
I immediately pulled all my sites off of Cloudflare and I will never use that godawful nightmare of a company for anything ever again. If they can't even host a generic help bot without screwing it up that badly, why would I ever use them for anything at all, never mind an AI platform?