
News

  • (July 2025) We released the ImagenFew code: https://github.com/azencot-group/ImagenFew
  • (May 2025) We announced our new model, ImagenFew. Technical report TL;DR: ImagenFew is a unified diffusion-based generative framework that synthesizes high-fidelity time series across diverse domains from just a few examples.
  • (November 2024) Conditional benchmarking is now available for all datasets.
  • (September 2024) We are happy to announce that the paper has been accepted to NeurIPS 2024.

Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series (ImagenTime)

[Figure: TS2IMG samples]

ℹ️ Overview

This project presents a novel approach to generative modeling of time series data by transforming sequences into images. Our method effectively handles both short and long sequences and supports various tasks, including unconditional generation, interpolation, and extrapolation. By leveraging invertible transforms such as delay embedding and the short-time Fourier transform, we create a unified framework that processes varying-length time series with high efficiency and accuracy.
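To make the idea concrete, here is a minimal, self-contained sketch of a delay-embedding-style transform that folds a 1-D series into a 2-D array and inverts it exactly. This is an illustrative example only, not the project's actual implementation: the function names, the window length n_delay, and the non-overlapping column layout are assumptions made for the sketch.

import numpy as np

def delay_embed(x, n_delay):
    # Stack consecutive non-overlapping windows of length n_delay as columns,
    # turning a 1-D series of length L into an (n_delay, L // n_delay) "image".
    n_cols = len(x) // n_delay
    return x[:n_delay * n_cols].reshape(n_cols, n_delay).T

def inverse_delay_embed(img):
    # Read the columns back in order to recover the original series exactly.
    return img.T.reshape(-1)

# Round-trip check on a toy sine series
x = np.sin(np.linspace(0, 8 * np.pi, 96))
img = delay_embed(x, n_delay=8)                 # shape (8, 12)
assert np.allclose(inverse_delay_embed(img), x)

In the full framework, such an invertible transform (delay embedding or the short-time Fourier transform) maps a time series to an image, a diffusion model operates in the image domain, and the inverse transform maps generated images back to time series.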

We welcome you to use our code and benchmark to develop new methods and applications for time series data. Our model can serve as a strong baseline for comparison and evaluation of new models.

Setup

Download and set up the repository:

git clone https://github.com/azencot-group/ImagenTime.git
cd ImagenTime

We provide a requirements.yaml file to easily create a Conda environment configured to run the model:

conda env create -f requirements.yaml
conda activate ImagenTime

📊 Data

For your convenience, we provide the data, along with the code needed to load it, in a single zip file. Please download the zip file from the link below:

https://drive.google.com/drive/folders/11PXAj0RYei5MyXJVasikmYnEDK6V8awt?usp=share_link

Then, unzip the file into the project's empty /data folder. That's it! All unzipped datasets are already preprocessed according to the specified protocols.

  • Short datasets:
    • Unconditional generation: Energy, MuJoCo, Stocks, Sine.
    • Conditional generation: ETTh1, ETTh2, ETTm1, ETTm2.
  • Long datasets:
    • Unconditional generation: FRED-MD, NN5 Daily, Temp Rain.
    • Conditional generation: Physionet, USHCN.
  • Ultra-long datasets:
    • Unconditional generation: Traffic, KDD-Cup.
    • Conditional generation: Traffic, KDD-Cup.

If you use these datasets, please cite the sources as referenced in our paper.

🚀 Usage

We include three main scripts to perform different tasks:

💡 For convenience, we provide configuration files for each task and dataset under the ./configs directory.

For Training and Evaluation of Unconditional Generation:

python run_unconditional.py --config ./configs/unconditional/<desired_dataset>.yaml

For Training and Evaluation of Conditional Generation:

python run_conditional.py --config ./configs/conditional/<interpolation or extrapolation>/<desired_dataset>.yaml

Visualization Metrics (t-SNE, PCA, etc.):

Note that the visualization script expects a trained model, so you must run the training scripts first.

python run_visualization.py --config ./configs/unconditional/<desired_dataset>.yaml

BibTeX

@article{naiman2024utilizing,
  title={Utilizing image transforms and diffusion models for generative modeling of short and long time series},
  author={Naiman, Ilan and Berman, Nimrod and Pemper, Itai and Arbiv, Idan and Fadlon, Gal and Azencot, Omri},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={121699--121730},
  year={2024}
}
