
The test data is 300k different time series. There’s no way to fit an ARIMA in a reasonable amount of time and/or money on that volume of data.


Eh it's not as if you could just project down the 300k time series to something lower dimensional for forecasting. The TimeGPT would have to do something similar to avoid the same problem.

Though I can't quite figure out how the prediction works exactly — they have a lot of test series, but do they input all of them simultaneously?


Really? And they could do LSTMs?

Even if true, they could take a random subset of size 100 out of the 300k and compare on those.
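A minimal sketch of that subset-comparison idea. All sizes and names here are illustrative, and a quick AR(1) least-squares fit stands in for a full ARIMA fit just to show the shape of the experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the benchmark: many short series
# (only 1,000 here, not 300k, to keep the sketch fast).
n_series, length = 1_000, 50
series = rng.standard_normal((n_series, length)).cumsum(axis=1)

# Draw a random subset of 100 series, as suggested above.
subset_idx = rng.choice(n_series, size=100, replace=False)
subset = series[subset_idx]

def fit_ar1(y):
    """Fit y[t] = c + phi * y[t-1] by least squares —
    a cheap proxy for a proper ARIMA fit."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    (c, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return c, phi

# One-step-ahead forecast for each sampled series.
forecasts = []
for y in subset:
    c, phi = fit_ar1(y)
    forecasts.append(c + phi * y[-1])
forecasts = np.array(forecasts)

print(forecasts.shape)  # (100,)
```

Fitting 100 classical models like this is trivial on a laptop, which is the point: even if 300k fits are impractical, a sampled comparison isn't.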


ARIMA is very, very slow and computationally expensive.

>Even if true, they could take a random subset of size 100 out of the 300k and compare on those.

Sure...but there's a chance that ARIMA won't even finish training on that subset either.


It doesn’t matter.

If you write a paper and exclude comparisons to the state of the art, this is what happens.

They could have done something, and didn’t.

“It’s hard so we didn’t” isn’t an excuse; it’s just a lack of rigor.



