This is cool. I think for our use case this wouldn’t work. We’re dealing with billions of rows for some tenants.

We’re about to introduce alerts where users can write their own TRQL queries and then define alerts from them. That requires evaluating them regularly, so the data effectively needs to be continuously up to date.

reply
Billions still seems crunchable for DuckDB. It’s a matter of how much you can fit into RAM, no? Billions of rows can still fit in consumer-grade machine RAM, depending on the data. At trillions I would start to worry. But you could have a super-fat spot instance where the crunching happens and expose a light client on top of it, no?

Quadrillions? Yeah, go find yourself a Trino or Spark pipeline.

reply
deleted
reply
Is that why it’s called DuckDb? Because data ponds?
reply
The DuckDB website has the following to say about the name:

> Why call it DuckDB?

> Ducks are amazing animals. They can fly, walk and swim. They can also live off pretty much everything. They are quite resilient to environmental challenges. A duck's song will bring people back from the dead and inspires database research. They are thus the perfect mascot for a versatile and resilient data management system.

https://duckdb.org/faq#why-call-it-duckdb

reply
Idk but I named everything in the related code "duckpond" :)
reply
Hannes (one of the creators) had a pet duck
reply
How large are these data volumes? How long does it take to prepare the data when a customer request comes in?
reply
Small. We're dealing with financial accounts, holdings, and transactions. So a user might have 10 accounts, thousands of holdings, and tens of thousands of transactions, plus a handful of supplemental data tables. Then there is market data that is shared across tenants and updated on an interval; this data is maybe 10-20M rows.

Just to clarify, the data is prepared when the user (agent) analytics session starts. Right now it takes 5-10s, which means it's typically ready well before the agent has actually determined it needs to run any queries. I think for larger volumes, pg_duckdb would allow this to scale to tens of millions of rows pretty efficiently.

reply
How do you copy all the relevant data? Doesn't this create unnecessary load on your source DB?
reply
We have various data sources (which is another benefit of this approach). Data from the application DB is currently pulled via the FE APIs, which handle tenant isolation and let the application database deal with the load. I think pg_duckdb could be a good solution here as well, but I haven't gotten around to testing it. Other data comes from an analytics DB; most of it is landed on an interval via pipeline scripts.
reply