Being able to ask an LLM copilot about patterns across several billion rows of time series data (about 140 GB), such as how the market reacts 15 minutes, 1 day, and 1 week after Federal Reserve press releases, is going to be game changing. The copilot generates queries using time_bucket() and other aggregation functions against compressed chunks, then runs them fast. I can also ask it to write a script that scrapes the press releases, runs them through another agent that classifies each one (hawkish or dovish), and feeds those labels into the time series analysis.
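The event-study logic described above can be sketched in plain Python. Everything here is illustrative and assumed, not from a real dataset: the `reaction` function, the horizon names, and the synthetic price series are all hypothetical. In practice the copilot would generate the equivalent TimescaleDB SQL with time_bucket() instead of looping in Python.

```python
from datetime import datetime, timedelta

# Horizons over which to measure the market's reaction to a press release.
HORIZONS = {
    "15min": timedelta(minutes=15),
    "1day": timedelta(days=1),
    "1week": timedelta(weeks=1),
}

def reaction(prices, event_time):
    """Return {horizon: fractional return} relative to the price at event_time.

    `prices` maps datetime -> price. We take the latest observation at or
    before each target time, so gaps in the series are tolerated.
    """
    def price_at(t):
        eligible = [ts for ts in prices if ts <= t]
        return prices[max(eligible)] if eligible else None

    base = price_at(event_time)
    out = {}
    for name, delta in HORIZONS.items():
        later = price_at(event_time + delta)
        if base and later:
            out[name] = (later - base) / base
    return out

# Synthetic minute-level series: price drifts up 1% per day from the event.
event = datetime(2024, 3, 20, 14, 0)
prices = {event + timedelta(minutes=m): 100.0 * (1 + 0.01 * m / 1440)
          for m in range(0, 8 * 1440, 15)}

print(reaction(prices, event))
```

With real data, the same three numbers per event, grouped by the hawk/dove label, would be the input to the statistical comparison.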
That exploration has not produced any actionable, tradable results for me, which is exactly why I am comfortable sharing it. Even so, I expect TimescaleDB will eventually become one of the most important tools in fields like medical research: a researcher will be able to ask in natural language for statistically significant patterns in the data, and LLM agents will query it quickly while taking full advantage of TimescaleDB's optimizations.