We ran some internal tests. The quality isn't bad; it works quite well. But it's essentially on the same level as an ARIMA model trained on the same data, just much bigger and slower.

So in my opinion it currently falls into a kind of void. If your use case is worth forecasting and you put a data scientist on it, you're better off just training cheaper ARIMA models.
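To make "cheaper ARIMA models" concrete, here is a minimal sketch of the kind of classical baseline being compared against: an AR(1) fit by least squares in pure stdlib Python. This is my own toy illustration, not the commenter's test setup; a real pipeline would use a library such as statsmodels' ARIMA.

```python
import random

def fit_ar1(series):
    """Fit x[t] - mu = phi * (x[t-1] - mu) by least squares."""
    mu = sum(series) / len(series)
    x = [v - mu for v in series]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return mu, num / den

def forecast_ar1(mu, phi, last_value, steps):
    """Iterate the fitted recurrence forward from the last observation."""
    out, x = [], last_value - mu
    for _ in range(steps):
        x = phi * x
        out.append(mu + x)
    return out

# Synthetic AR(1) data: x[t] = 0.8 * x[t-1] + noise.
random.seed(0)
x, xs = 0.0, []
for _ in range(500):
    x = 0.8 * x + random.gauss(0, 1)
    xs.append(x)

mu, phi = fit_ar1(xs)
print(phi)  # estimated coefficient, close to the true 0.8
print(forecast_ar1(mu, phi, xs[-1], 3))
```

The point of the comparison: this fits in microseconds on a laptop, which is the bar a billion-parameter forecasting model has to clear.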

reply
That is disappointing. One would think that with all that budget and compute, Google would be able to create something that beats methods from the 70s. Maybe we are hitting some hard limits.

Maybe it would be better to train an LLM with various tuning methodologies and make a dedicated ARIMA agent. You throw in data, some metadata, and a requested forecast window. Out come the parameters for an "optimal" conventional model.
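For what it's worth, the non-LLM version of that agent already exists as plain model selection. A toy sketch, in pure stdlib Python: fit AR(p) by OLS for several orders and return the one minimizing AIC. (A real version would search the full (p, d, q) ARIMA space with something like statsmodels or pmdarima; this is just to show the shape of "data in, parameters out".)

```python
import math
import random

def fit_ar(series, p):
    """OLS fit of x[t] = c + sum(phi[i] * x[t-1-i]); returns (coeffs, rss, n)."""
    rows = [[1.0] + [series[t - 1 - i] for i in range(p)]
            for t in range(p, len(series))]
    y = series[p:]
    k = p + 1
    # Normal equations A @ beta = b, solved by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * v for r, v in zip(rows, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    rss = sum((v - sum(c0 * f for c0, f in zip(beta, row))) ** 2
              for row, v in zip(rows, y))
    return beta, rss, len(y)

def pick_order(series, max_p=4):
    """Return the AR order minimizing AIC = n*ln(rss/n) + 2*(p+1)."""
    best = None
    for p in range(1, max_p + 1):
        _, rss, n = fit_ar(series, p)
        aic = n * math.log(rss / n) + 2 * (p + 1)
        if best is None or aic < best[0]:
            best = (aic, p)
    return best[1]

# Data generated by an AR(2) process; the search should land at or near p=2.
random.seed(1)
x1, x2, xs = 0.0, 0.0, []
for _ in range(600):
    x0 = 0.6 * x1 + 0.3 * x2 + random.gauss(0, 1)
    xs.append(x0)
    x1, x2 = x0, x1
print(pick_order(xs))
```

So the open question is whether an LLM reading the metadata would beat a dumb information-criterion search like this by enough to justify itself.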

reply
I think this could be an interesting read for you, I read it last week and it kind of argues the same points: https://shakoist.substack.com/p/against-time-series-foundati...
reply
Thanks for sharing.

I met an associate working for a particular VC, and they were really into time-series foundation models. I argued most of the same "why real forecasting problems break the whole frame" points as reasons they were wasting their time.

She was totally convinced I was wrong, because she was discussing an investment with some top, well-respected researchers who were really pushing this and wanted to build a startup around it.

I was, and still am, confused by all the wishful thinking. Then again, sometimes the best time to sell an idea is right before you think it is possible.

reply