-
Notifications
You must be signed in to change notification settings - Fork 10
docs: #993 - [M4-P11] Jupytext Migration: Simulations - Cloud Chamber (2 long-running notebooks) #1010
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
Convert 2 Cloud Chamber simulation notebooks to .py:percent format for Jupytext paired sync workflow. Apply linting fixes and execute with outputs. - NEW: Cloud_Chamber_Single_Cycle.py - Jupytext paired sync file - NEW: Cloud_Chamber_Multi_Cycle.py - Jupytext paired sync file - MODIFIED: Both .ipynb files executed with outputs - MODIFIED: pyproject.toml - Added E402 and S101 per-file ignores for docs/Examples/**/*.py Execution times: - Cloud_Chamber_Single_Cycle: 290s (60-300s tier → CI only) - Cloud_Chamber_Multi_Cycle: 500s (>300s tier → nightly CI) Linting fixes applied: - Converted assert statements to if/raise pattern (S101) Closes uncscode#993 ADW-ID: 67e25a87
Validation fixes: - Synced both notebooks to align .py and .ipynb content - Executed Cloud_Chamber_Single_Cycle.ipynb (282s execution time) - Executed Cloud_Chamber_Multi_Cycle.ipynb (482s execution time) - Both notebooks run successfully with no errors Execution times indicate these should be excluded from pre-commit and CI (>300s threshold), suitable for nightly CI only. ADW-ID: 67e25a87
Update documentation to reflect completion of M4-P11 Jupytext migration phase for Cloud Chamber notebooks. Changes to M4-jupytext-full-migration.md: - Mark M4-P11 phase as Completed (2026-01-31) - Update all task checkboxes to complete - Add completion notes with LOC counts and recommendations - Update execution time table (both notebooks >300s, nightly CI only) - Update phase summary table status - Add changelog entry Changes to documentation_guide.md: - Expand Jupytext paired coverage section - Add comprehensive list of migrated categories - Include Simulations/Notebooks in paired directories - Add bash command to list all paired .py files Closes uncscode#993 ADW-ID: 67e25a87
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
This PR converts two long-running Cloud Chamber simulation notebooks from the Simulations directory to Jupytext paired sync format (.py:percent) as part of the M4-P11 maintenance plan. The notebooks demonstrate cloud activation-deactivation cycles using kappa-Köhler theory for hygroscopic growth, with execution times profiled to determine appropriate CI handling strategies.
Changes:
- Added Jupytext paired sync
.pyfiles for two Cloud Chamber notebooks (single-cycle: 465 LOC, multi-cycle: 1804 LOC) - Updated pyproject.toml to add E402 and S101 linter exceptions for notebook files
- Updated documentation guide and M4 maintenance plan to reflect completion of Phase 11
Reviewed changes
Copilot reviewed 5 out of 7 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| pyproject.toml | Added E402 (imports after non-code) and S101 (assert usage) exceptions for docs/Examples notebooks |
| Cloud_Chamber_Single_Cycle.py | New Jupytext paired file for single activation-deactivation cycle tutorial (~92s execution) |
| Cloud_Chamber_Multi_Cycle.py | New Jupytext paired file for multi-cycle comparison across 3 seed scenarios (~545s execution) |
| adw-docs/documentation_guide.md | Added Simulations/Notebooks to the list of paired notebook categories |
| adw-docs/dev-plans/maintenance/M4-jupytext-full-migration.md | Updated M4-P11 status to completed with execution times and handling strategy |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| |----------|----------------|----------| | ||
| | Cloud_Chamber_Single_Cycle.ipynb | TBD (P11) | | | ||
| | Cloud_Chamber_Multi_Cycle.ipynb | TBD (P11) | | | ||
| | Cloud_Chamber_Single_Cycle.ipynb | >300s (P11) | Nightly CI only | |
Copilot
AI
Jan 31, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Inconsistent execution time categorization for Cloud_Chamber_Single_Cycle.ipynb. According to the PR description, the execution time is approximately 92 seconds, which falls in the 60-300 seconds range. This should be handled as "Exclude from pre-commit, include in CI", not "Nightly CI only" (which is for >300s notebooks). Update the execution time table to show "~92s (P11)" with handling "CI only" instead of ">300s (P11)" with "Nightly CI only".
| # Verify trend: larger particles (higher index) activate earlier or at same time | ||
| # Allow 5s tolerance for simulation artifacts | ||
| for i in range(len(as_act_times) - 1): | ||
| if as_act_times[i] < float("inf") and as_act_times[i + 1] < float("inf"): | ||
| assert as_act_times[i] >= as_act_times[i + 1] - 5, ( | ||
| f"Size-dependent activation violated: particle {i} " | ||
| f"(d={as_initial_diams[i] * 1e9:.0f}nm) " | ||
| f"activated at {as_act_times[i]}s but particle {i + 1} " | ||
| f"(d={as_initial_diams[i + 1] * 1e9:.0f}nm) " | ||
| f"activated at {as_act_times[i + 1]}s" |
Copilot
AI
Jan 31, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Incorrect assumption in validation logic. The code assumes particles are ordered by size ("larger particles (higher index)"), but particles are sampled from a lognormal distribution (line 323) without any sorting, so there's no guarantee of size ordering. The validation loop checks that particle i activates before or at the same time as particle i+1, but this only makes sense if particles are sorted by size. Either sort the particles by initial diameter before validation, or change the validation to check that correlation between size and activation time is negative (larger particles activate earlier) without assuming ordering.
| # Verify trend: larger particles (higher index) activate earlier or at same time | |
| # Allow 5s tolerance for simulation artifacts | |
| for i in range(len(as_act_times) - 1): | |
| if as_act_times[i] < float("inf") and as_act_times[i + 1] < float("inf"): | |
| assert as_act_times[i] >= as_act_times[i + 1] - 5, ( | |
| f"Size-dependent activation violated: particle {i} " | |
| f"(d={as_initial_diams[i] * 1e9:.0f}nm) " | |
| f"activated at {as_act_times[i]}s but particle {i + 1} " | |
| f"(d={as_initial_diams[i + 1] * 1e9:.0f}nm) " | |
| f"activated at {as_act_times[i + 1]}s" | |
| # Verify trend: when sorted by size, larger particles activate earlier or at same time | |
| # Allow 5s tolerance for simulation artifacts | |
| size_order = np.argsort(as_initial_diams) # ascending: smaller -> larger | |
| for smaller_idx, larger_idx in zip(size_order[:-1], size_order[1:]): | |
| if ( | |
| as_act_times[smaller_idx] < float("inf") | |
| and as_act_times[larger_idx] < float("inf") | |
| ): | |
| assert as_act_times[smaller_idx] >= as_act_times[larger_idx] - 5, ( | |
| f"Size-dependent activation violated: particle {smaller_idx} " | |
| f"(d={as_initial_diams[smaller_idx] * 1e9:.0f}nm) " | |
| f"activated at {as_act_times[smaller_idx]}s but larger particle {larger_idx} " | |
| f"(d={as_initial_diams[larger_idx] * 1e9:.0f}nm) " | |
| f"activated at {as_act_times[larger_idx]}s" |
| ) | ||
|
|
||
| # Build particle representation with lognormal-sampled particles | ||
| chamber_volume = 0.25 # m^3 (from chamber_dims) |
Copilot
AI
Jan 31, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Inconsistent chamber volume definition. The chamber dimensions are defined as (2.0, 1, 1) on line 84, which calculates to a volume of 2.0 m³ (as shown on line 85-86). However, on line 200, the chamber_volume is redefined as 0.25 m³ with a comment claiming it's "from chamber_dims". This creates an inconsistency where the wall loss strategy uses a 2.0 m³ chamber, but the particle representation uses a 0.25 m³ volume. Either use the calculated volume from line 85 or update the chamber_dims to (1.0, 0.5, 0.5) to match the 0.25 m³ volume used in the multi-cycle notebook.
| chamber_volume = 0.25 # m^3 (from chamber_dims) |
|
|
||
| # RH targets (used to set initial gas concentrations at phase boundaries) | ||
| rh_activation_target = 1.004 # 100.4% RH - supersaturated | ||
| rh_deactivation_target = 0.85 # 85% RH - subsaturated |
Copilot
AI
Jan 31, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Discrepancy between documentation and code for deactivation RH. The markdown on line 38 states "one deactivation (65% RH)", but the code on line 257 uses "rh_deactivation_target = 0.85" which is 85% RH. Update either the documentation to match the code (85% RH) or the code to match the documentation (0.65 for 65% RH).
| # Apply dilution to particle masses | ||
| # Note: dilution effect is minimal and condensation dominates, | ||
| # so we skip applying diluted mass back to aerosol. | ||
| # The apply_particle_dilution() function is validated in Section 18.1. |
Copilot
AI
Jan 31, 2026
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Dead code: the dilution_coefficient parameter is accepted but never used. The apply_particle_dilution function is defined (line 123) and validated (line 1214), but the comment on lines 215-217 explicitly states "dilution effect is minimal and condensation dominates, so we skip applying diluted mass back to aerosol." This creates confusion because the parameter is passed through multiple function calls but never actually used. Either remove the dilution_coefficient parameter and the unused apply_particle_dilution function, or actually apply the dilution in the simulation loop. If dilution is truly negligible, consider removing it entirely to simplify the code.
| # Apply dilution to particle masses | |
| # Note: dilution effect is minimal and condensation dominates, | |
| # so we skip applying diluted mass back to aerosol. | |
| # The apply_particle_dilution() function is validated in Section 18.1. | |
| # Apply dilution to particle masses before other dry-phase processes. | |
| # Note: dilution effect is typically small compared to condensation, | |
| # but we include it here for completeness and consistency with | |
| # apply_particle_dilution() as validated in Section 18.1. | |
| aerosol = apply_particle_dilution(aerosol, dilution_coefficient, dt) |
Target Branch:
mainFixes #993 | Workflow:
67e25a87Summary
Converts 2 long-running Cloud Chamber simulation notebooks from the Simulations directory to Jupytext paired sync format (
.py:percent). These notebooks simulate cloud activation-deactivation cycles using kappa-Köhler theory for hygroscopic growth. Execution times were profiled to determine the appropriate handling strategy for CI.What Changed
New Components
docs/Examples/Simulations/Notebooks/Cloud_Chamber_Single_Cycle.py(~465 LOC) - Jupytext paired sync file for single activation-deactivation cycle tutorialdocs/Examples/Simulations/Notebooks/Cloud_Chamber_Multi_Cycle.py(~1804 LOC) - Jupytext paired sync file for multi-cycle comparison across 3 seed scenarios (AS, Sucrose, Mixed)Modified Components
docs/Examples/Simulations/Notebooks/Cloud_Chamber_Single_Cycle.ipynb- Synced with .py file, executed to generate outputsdocs/Examples/Simulations/Notebooks/Cloud_Chamber_Multi_Cycle.ipynb- Synced with .py file, executed to generate outputsadw-docs/dev-plans/maintenance/M4-jupytext-full-migration.md- Updated M4-P11 status to completed, added execution timesadw-docs/documentation_guide.md- Added Cloud Chamber notebooks to long-running notebooks sectionpyproject.toml- Updated version or configurationHow It Works
The Jupytext paired sync workflow enables editing notebooks as plain Python files:
Notebook Content Overview
Single Cycle Notebook:
Multi Cycle Notebook:
Execution Time Profiling
Handling thresholds applied:
< 60s: Include in pre-commit hook60-300s: Exclude from pre-commit, include in CI> 300s: Nightly CI job only (exclude from PR CI)Implementation Notes
np.random.seed(100)for deterministic resultsTesting
--check-syncpasses for both notebooks (files are in sync)ruff checkpasses with no errors on both .py filesReferences
adw-docs/dev-plans/maintenance/M4-jupytext-full-migration.mdadw-docs/documentation_guide.md(Jupytext Paired Sync Workflow)