Freezing-of-Gait (FoG) is one of the most disabling symptoms of Parkinson’s disease.
LNN-FoGNet investigates whether Liquid Neural Networks (LNNs), recurrent models whose neurons adapt their own time-constants to the input, can match or exceed the accuracy of LSTM and continuous-time RNN (CTRNN) baselines while remaining smaller, faster, and more energy-efficient for round-the-clock wearables.
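For intuition, here is a minimal NumPy sketch of a liquid time-constant update in the style of the LTC literature (Hasani et al.); the parameter names (`W`, `b`, `tau`, `A`) and the fused semi-implicit Euler step are illustrative, not necessarily identical to the cell in `ltc_model.py`:

```python
import numpy as np

def ltc_step(x, I, dt, W, b, tau, A):
    """One fused semi-implicit Euler step of a liquid time-constant cell.

    The effective time-constant 1/tau_eff = 1/tau + f(x, I) depends on the
    current input, which is what lets each neuron adapt its own dynamics.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, I]) + b)))  # sigmoid gate
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 4 hidden neurons driven by 9 accelerometer channels at 64 Hz.
rng = np.random.default_rng(0)
n_hidden, n_in = 4, 9
x = np.zeros(n_hidden)
W = rng.normal(size=(n_hidden, n_hidden + n_in))
b, tau, A = np.zeros(n_hidden), np.ones(n_hidden), np.ones(n_hidden)
x = ltc_step(x, rng.normal(size=n_in), dt=1 / 64, W=W, b=b, tau=tau, A=A)
```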
- Dataset: Daphnet FoG
- Models compared: Liquid Neural Network (LNN / LTC) | Long Short-Term Memory (LSTM) | Continuous-Time RNN (CTRNN).
- Key results:
  - Mean F1 ≈ 0.95 under 5-fold subject-wise cross-validation (CV).
  - The LNN converges in about half the epochs and a tenth of the training time of the LSTM.
  - Per-step inference latency is tens of times lower than the LSTM's on the same hardware.
```
├── fog.py               # Main training / evaluation script
├── ltc_model.py         # LNN / Liquid-Time-Constant cell
├── ctrnn_model.py       # CTRNN cell and helpers
├── vis.py               # Training & ROC/PR visualisations
├── vis_fog_events.py    # Optional: example raw-signal plots
├── fog_data.zip         # Unzip to ./fog_data/ before training
└── results/             # Created automatically
    ├── fog/             # Metrics, pickles, CSVs
    └── figures/         # Plots generated by vis.py
```
| Package | Tested version |
|---|---|
| Python | 3.9.13 |
| TensorFlow / Keras | 2.18 / 3.6 |
| numpy, scikit-learn, matplotlib | recent |
| seaborn, tqdm (optional) | recent |
GPU: Any recent NVIDIA card with CUDA 11+ gives a big speed-up, but the code also runs on CPU.
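A quick way to confirm that TensorFlow actually sees the GPU (an empty list means training falls back to CPU):

```python
import tensorflow as tf

# Lists CUDA-capable devices visible to TensorFlow; empty means CPU-only.
print(tf.config.list_physical_devices("GPU"))
```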
```bash
git clone https://github.com/Jonadler1/LNN-FoGNet.git
cd LNN-FoGNet

# Optional: create an isolated env
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate

# Install core deps
pip install "tensorflow~=2.18" keras==3.6 numpy scikit-learn matplotlib seaborn tqdm
```

Inside the repo root, unzip the data archive:

```bash
unzip fog_data.zip -d fog_data
```
`fog.py` expects `./fog_data/` to contain the raw `.txt` files exactly as provided by Daphnet.
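For reference, a minimal loading sketch assuming the standard Daphnet format (one whitespace-separated row per 64 Hz sample: a timestamp in ms, nine accelerometer channels, and an annotation where 0 = outside the experiment, 1 = no freeze, 2 = freeze); `fog.py` ships its own loader, so this is purely illustrative:

```python
import numpy as np
from pathlib import Path

def load_daphnet_file(path):
    """Load one Daphnet recording: 9 accelerometer channels + binary FoG label."""
    data = np.loadtxt(path)                 # shape: (samples, 11)
    signals = data[:, 1:10]                 # ankle / thigh / trunk, 3 axes each
    labels = data[:, 10].astype(int)        # 0 = outside experiment, 1 = no FoG, 2 = FoG
    mask = labels != 0                      # drop samples outside the experiment
    return signals[mask], labels[mask] - 1  # relabel: 0 = no FoG, 1 = FoG

recordings = [load_daphnet_file(p) for p in sorted(Path("fog_data").glob("*.txt"))]
```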
The `fog.py` driver covers the entire pipeline: normalization → windowing (+ micro-segmentation) → k-fold subject-wise CV → metrics aggregation.
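To make the windowing and the subject-wise split concrete, here is a hedged sketch; the window length, hop size, and any-sample labelling rule are assumptions, not necessarily `fog.py`'s exact settings:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def make_windows(signals, labels, win=256, hop=32):
    """Slice a (samples, channels) signal into overlapping windows.

    A window is labelled FoG if any sample inside it is annotated as
    freezing (an assumed rule; fog.py may use a different threshold).
    """
    starts = np.arange(0, len(signals) - win + 1, hop)
    X = np.stack([signals[s:s + win] for s in starts])
    y = np.array([labels[s:s + win].max() for s in starts])
    return X, y

def subject_wise_folds(X, y, subject_ids, k=5):
    """Yield train/test folds where no subject appears on both sides."""
    for tr, te in GroupKFold(n_splits=k).split(X, y, groups=subject_ids):
        yield X[tr], y[tr], X[te], y[te]
```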
For example, to train an LNN with 32 hidden units for 20 epochs under 5-fold CV:

```bash
python fog.py --model ltc --size 32 --epochs 20 --k 5
```
Other options:

```
--model    lstm | ctrnn | ltc
--size     hidden units (default 32)
--epochs   max epochs (default 50)
--k        CV folds (default 5)
```
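A hypothetical `argparse` wiring consistent with these flags; `fog.py`'s actual parser may differ in naming and defaults:

```python
import argparse

# Hypothetical parser matching the flags documented above; fog.py's
# real implementation may differ.
parser = argparse.ArgumentParser(description="Train FoG detectors on Daphnet")
parser.add_argument("--model", choices=["lstm", "ctrnn", "ltc"], help="model type")
parser.add_argument("--size", type=int, default=32, help="hidden units")
parser.add_argument("--epochs", type=int, default=50, help="max epochs")
parser.add_argument("--k", type=int, default=5, help="CV folds")
args = parser.parse_args()
```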
Outputs (per fold and averaged) are written to `results/fog/`.
Generate publication-quality plots once training is complete:

```bash
python vis.py
```

Figures are saved under `results/figures/`.
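As a taste of the ROC / PR figures `vis.py` produces, here is a self-contained scikit-learn sketch; the synthetic `y_true` / `y_score` stand in for real per-window labels and model scores:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay, PrecisionRecallDisplay

# Synthetic stand-ins for per-window FoG labels and predicted scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=500), 0.0, 1.0)

fig, (ax_roc, ax_pr) = plt.subplots(1, 2, figsize=(10, 4))
RocCurveDisplay.from_predictions(y_true, y_score, ax=ax_roc)
PrecisionRecallDisplay.from_predictions(y_true, y_score, ax=ax_pr)
fig.tight_layout()
fig.savefig("roc_pr_demo.png", dpi=300)
```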
Optionally, `vis_fog_events.py` displays raw sensor traces of FoG vs. non-FoG windows.
Released under the MIT License.