8 Comments
Jun 15

Hi, thank you very much for sharing such an interesting resource. Have you guys provided the TTM model details? I didn't see them in your GitHub. Curious about that.

Author · Jun 15 (edited)

Hello Jackson, thank you for your feedback. I am not an author of this model; I just write in-depth articles on cool time-series models.

You can find the code of TTM here: https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer

I also have a hands-on project on TTM in the AI Projects folder: https://aihorizonforecast.substack.com/p/ai-projects (Project #4)
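
If it helps, here is a minimal zero-shot sketch using the tsfm_public package from that repo. The checkpoint id and the output field name are my assumptions here (they follow the usual Hugging Face conventions), so double-check them against the repo's README before running:

```python
# Minimal zero-shot forecasting sketch with TTM (TinyTimeMixer).
# Assumes the granite-tsfm package is installed; the checkpoint id and the
# output attribute name are assumptions, so verify them against the repo.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-v1"  # assumed checkpoint name
)
model.eval()

# You pass a raw context window; the model handles patching internally.
# Shape: (batch, context_length, num_channels), here 1 series, 512 steps, 1 channel.
past_values = torch.randn(1, 512, 1)

with torch.no_grad():
    outputs = model(past_values=past_values)

# prediction_outputs holds the forecast, e.g. (1, 96, 1) for a 96-step horizon.
print(outputs.prediction_outputs.shape)
```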

Jun 9

Very interesting. Is the pretraining time-series dataset related in any way to the evaluation datasets, for zero-shot or other settings?

Author

No. And thanks for the question; I forgot to mention it in the article.

Jun 10

Very interesting. Thanks for the reply. So does this suggest that pretraining on a dataset for something like weather can be leveraged for other types of time series, like financial data? How unrelated can the datasets be, I wonder.

Author · Jun 10 (edited)

True, this seems counterintuitive at first, but it works in practice. The model was pretrained on 1 billion datapoints from diverse datasets, including financial data, so it has learned to recognize general patterns.

The key detail here is that the model processes data not as individual datapoints but as patches (windows), thereby preserving local semantic information across longer histories.

Additionally, this model can accommodate longer context lengths (up to 1536), making it ideal if you are performing high-frequency trading (or working with high-frequency data in general).

If you are not familiar with patching, I explain it in more detail in this article: https://aihorizonforecast.substack.com/p/timesfm-googles-foundation-model
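
To make the patching idea concrete, here is a toy NumPy sketch. The patch length and stride below are illustrative numbers, not TTM's actual configuration:

```python
# Quick illustration of patching: split a univariate series into fixed-size
# windows (patches) instead of feeding individual datapoints to the model.
import numpy as np

series = np.arange(64, dtype=np.float32)  # a toy time series of 64 points
patch_length, stride = 16, 8              # illustrative values only

patches = np.stack([
    series[start:start + patch_length]
    for start in range(0, len(series) - patch_length + 1, stride)
])

print(patches.shape)  # (7, 16): 7 overlapping patches of 16 points each
```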

Jun 11

Thank you so much for the detailed explanation. Indeed, it sounds counterintuitive. Perhaps the key is the diversity of the pre-training set, as you mentioned. I am a new subscriber, so I haven't checked your other articles yet, but I have liked this one so far. I will have a look at the patching article you shared, thanks!

Author

You're welcome!
