
[pull] develop from freqtrade:develop#1699

Merged
pull[bot] merged 3 commits into Uncodedtech:develop from freqtrade:develop on Mar 25, 2026

Conversation


@pull pull bot commented Mar 25, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

Briarion and others added 3 commits March 25, 2026 15:24
Add optional early stopping to prevent overfitting in PyTorch-based
FreqAI models. When `early_stopping_patience` is set in
model_training_parameters, training will stop if validation loss
does not improve for the specified number of epochs.

Changes:
- Add `early_stopping_patience` parameter (default 0 = disabled)
- `estimate_loss()` now returns average loss (float | None) instead
  of None, enabling downstream use for schedulers and early stopping
- Track best validation loss and patience counter across epochs

Usage in config:
```json
{
  "model_training_parameters": {
    "n_epochs": 100,
    "early_stopping_patience": 10
  }
}
```

The change is fully backward compatible - early stopping is disabled
by default, and the return value of estimate_loss() can be safely
ignored by existing subclasses.
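The backward-compatible return contract can be illustrated as follows. The helper below is a hypothetical stand-in for `estimate_loss()`, assuming it averages per-batch validation losses and returns `None` when there is no validation data; it is not the real FreqAI implementation.

```python
from typing import Optional

def estimate_loss(batch_losses: list) -> Optional[float]:
    """Return the average validation loss, or None if there is no
    validation data to evaluate."""
    if not batch_losses:
        return None  # e.g. no validation split configured
    return sum(batch_losses) / len(batch_losses)

# Existing subclasses may keep calling it as a bare statement and
# ignore the return value, which is why the change is non-breaking:
estimate_loss([0.3, 0.1])        # return value unused, still valid
avg = estimate_loss([0.3, 0.1])  # new callers consume it for
                                 # schedulers or early stopping
```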

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Document the new early_stopping_patience trainer_kwargs parameter
in the FreqAI parameter table, including description, datatype,
default value, and usage notes.
feat: add early stopping support to PyTorchModelTrainer
@pull pull bot locked and limited conversation to collaborators Mar 25, 2026
@pull pull bot added the ⤵️ pull label Mar 25, 2026
@pull pull bot merged commit a203f41 into Uncodedtech:develop Mar 25, 2026
