Bridging known and unknown dynamics by transformer-based machine-learning inference from sparse observations

Aug 28, 2025. Bridging known and unknown dynamics by transformer-based machine-learning inference from sparse observations has been published in Nature Communications!

This repo accompanies our preprint Reconstructing dynamics from sparse observations with no training on target system, in which dynamics are faithfully reconstructed from limited observations without any training data from the target system. The framework provides a paradigm for reconstructing complex, nonlinear dynamics in the extreme situation where no training data exist and the observations are random and sparse.

We address this challenge by developing a hybrid transformer and reservoir-computing machine-learning scheme. For a complex, nonlinear target system, the transformer is trained not on data from the target system but on essentially unlimited synthetic data from known chaotic systems. The trained transformer is then tested on the sparse data from the target system, and its output is fed into a reservoir computer to predict the long-term dynamics, or the attractor, of the target system.
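The two-stage scheme can be sketched as follows. Both stage implementations here are trivial stand-ins (linear interpolation and a persistence forecast), not the repo's trained transformer or reservoir computer; they only illustrate the data flow:

```python
import numpy as np

def stage1_reconstruct(sparse, mask):
    """Stand-in for the trained transformer: fill gaps by linear interpolation."""
    t = np.arange(len(sparse))
    return np.stack(
        [np.interp(t, t[mask], sparse[mask, d]) for d in range(sparse.shape[1])],
        axis=1,
    )

def stage2_predict(dense, n_steps):
    """Stand-in for the reservoir computer: a trivial persistence forecast."""
    return np.tile(dense[-1], (n_steps, 1))

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200))[:, None] * np.ones((1, 3))
mask = rng.random(200) >= 0.8                    # keep ~20% of the time steps
sparse = np.where(mask[:, None], series, 0.0)    # unobserved entries zero-filled
dense = stage1_reconstruct(sparse, mask)         # stage 1: sparse -> dense
future = stage2_predict(dense, n_steps=50)       # stage 2: continue the dynamics
```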

Simulation guidance

Download the time-series data of all chaotic systems from Zenodo and move them to the `chaos_data` folder, or generate the data yourself by running `save_chaos.py`. Either way, the machine-learning code expects the data to be in the `chaos_data` folder.
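As an illustration of what the synthetic training data look like, the sketch below integrates the classic Lorenz-63 system with fourth-order Runge-Kutta; `save_chaos.py` covers many more systems, and the parameter values here are just the standard chaotic ones:

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system at the classic chaotic parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(rhs, x0, dt=0.01, n_steps=20000):
    """Integrate an ODE with the classic fourth-order Runge-Kutta scheme."""
    traj = np.empty((n_steps, len(x0)))
    x = np.asarray(x0, dtype=float)
    for i in range(n_steps):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = x
    return traj

traj = simulate(lorenz_rhs, x0=[1.0, 1.0, 1.0])
# e.g. np.save("chaos_data/lorenz.npy", traj)  # hypothetical file layout
```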

Run `chaos_transformer_read_and_test.py` to evaluate the trained model on testing (target) systems, with sequence length $L_s=2000$ and sparsity $S_m=0.8$. Note that the model never encounters the testing systems during training. An example of a reconstructed chaotic food-chain system is shown below.
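The sparsity level can be read as follows: at $S_m = 0.8$, roughly 80% of the time steps are removed at random and the transformer must recover the full series from the remaining 20%. A minimal masking sketch (the repo's exact masking convention may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
L_s, dim = 2000, 3
series = rng.standard_normal((L_s, dim))   # stands in for a chaotic trajectory

s_m = 0.8                                  # fraction of time steps removed
mask = rng.random(L_s) >= s_m              # True where a point is observed
sparse = np.where(mask[:, None], series, 0.0)  # unobserved entries zero-filled
```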

Afterward, run `rc_prediction.py` to use the transformer-reconstructed data for both short-term and long-term predictions.
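For readers unfamiliar with reservoir computing, the following is a minimal echo-state-network sketch of this stage: a linear readout is trained by ridge regression for one-step prediction on the reconstructed series, then the reservoir runs in closed loop for autonomous long-term prediction. All hyperparameters here are illustrative, not the tuned values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_res = 3, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, dim))      # input coupling
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9

def run_reservoir(series):
    """Drive the reservoir with an input series; return all states and the last one."""
    r = np.zeros(n_res)
    states = []
    for u in series:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states), r

series = rng.standard_normal((500, dim))  # stands in for reconstructed data
states, r = run_reservoir(series[:-1])
targets = series[1:]                      # one-step-ahead training targets

beta = 1e-4                               # ridge regularization strength
W_out = targets.T @ states @ np.linalg.inv(states.T @ states + beta * np.eye(n_res))

# Closed-loop (autonomous) prediction: feed the output back as the next input.
preds = []
u = series[-1]
for _ in range(100):
    r = np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    preds.append(u)
preds = np.array(preds)
```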

In addition to using our pre-trained model, we provide the script `chaos_transformer_train.py` for readers who wish to train the transformer themselves. After training, update the `save_file_name` variable in `chaos_transformer_read_and_test.py` to match your saved model file name, so that your own trained model is used during testing. A Jupyter notebook in the `examples` folder helps readers understand and reproduce the code workflow.

Furthermore, readers may generate more diverse synthetic systems for training, which can enhance reconstruction performance on previously unseen target systems during testing. Performance increases with the diversity of training systems $k$.

More information

  • More information about reservoir hyperparameter optimization can be found on my GitHub page.

If you have any questions or suggestions, feel free to reach out.

Cite our work

@article{zhai2025bridging,
  title={Bridging known and unknown dynamics by transformer-based machine-learning inference from sparse observations},
  author={Zhai, Zheng-Meng and Stern, Benjamin D and Lai, Ying-Cheng},
  journal={Nature Communications},
  volume={16},
  number={1},
  pages={8053},
  year={2025},
  publisher={Nature Publishing Group UK London}
}
