Releases: BerkeleyLab/fiats

Improved documentation, statistics, and reporting

06 Nov 23:13
b0c8cd2

New in this release

  • Updated documentation
  • Improved runtime correctness-checking via Julienne assertions
  • Multi-file aggregation of data in the tensor-statistics demonstration application
  • More portable demonstration-projects setup script
  • Improved formatting of reported execution times in demonstration applications

Merged pull requests

  • Documentation: add dependencies section to README.md by @rouson in #275
  • [JOSS review]: Fix bullet and link rendering in README by @jwallwork23 in #271
  • Add CONTRIBUTING.md first draft by @ktras in #274
  • Improved assertions: update demo to Julienne 3.2.1 assertions by @rouson in #278
  • Demo: multi-file histograms by @rouson in #279
  • Portable demo setup by @rouson in #281
  • Feature: aggregate tensor statistics across multiple training-data files by @rouson in #280
  • chore(demo): update training_data_files.json by @rouson in #282
  • doc(README.md): update compiler flags by @rouson in #283
  • Improve timer output by @rouson in #284

New Contributors

  • @jwallwork23 made their first contribution in #271

Full Changelog: 1.1.0...1.2.0

Expanded compiler support and richer test diagnostics

08 Aug 17:04
5d18e4c

New in this release

  • Support for building with the Numerical Algorithms Group (nagfor) compiler Build 7235 or later 🏆
  • Support for building with the Intel OneAPI compiler (ifx), builds newer than 2025.1 (known issue: one JSON file I/O test failure) 🚧
  • Richer diagnostic information when tests fail. 💰

Pull Requests Log

  • fix(example): correct usage output by @rouson in #266
  • Fix: make learn generic binding public by @rouson in #267 📖
  • Update compiler support documentation by @rouson in #268
  • Update to Julienne 2.4.3 and Assert 3.0.2 by @rouson in #269 🥕
  • Update to Julienne 2.4.3 assert 3.0.2 by @rouson in #270 🧅

Full Changelog: 1.0.0...1.1.0

Demo app fix & documentation updates

02 Jul 01:57
b193b2e

Full Changelog: 0.16.0...1.0.0

Make assertions removable and minor improvements

12 Mar 23:59
c256784

Highlights

  • Compile correctness-checking assertions only if the macro ASSERTIONS is defined (see the sketch after this list).
  • Improve example/concurrent-inferences
    • Specify OpenMP thread sharing
    • Support command-line flags that set the number of trials and select which versions of inference to run (see the example invocation after this list). New flags:
      • --trials <number>, where angle brackets <> denote user-provided values
      • --do-concurrent
      • --elemental
      • --openmp
      • --double-precision
  • Fixes that move Fiats closer to supporting the Intel ifx and NAG nagfor compilers.
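
For illustration, a hypothetical invocation combining the new flags (the flag names come from the list above; the example name and values are assumptions):

  fpm run --example concurrent-inferences -- --trials 100 --do-concurrent --double-precision

And a minimal sketch of the removable-assertion pattern, assuming the conventional preprocessor idiom; Fiats's actual checks use Julienne's assertion macros, so the guarded check below is illustrative only:

#ifdef ASSERTIONS
  ! Compiled only when the preprocessor macro ASSERTIONS is defined,
  ! e.g., by passing -DASSERTIONS to a preprocessing compiler.
  if (.not. allocated(self%weights_)) error stop "infer: unallocated weights"
#endif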

What's Changed

  • Fix: use a feature-based macro in test/main.F90 by @rouson in #222
  • Update single-file concatenation script by @rouson in #223
  • Automate file searches in scripts/create-single-source-file-programs.sh by @rouson in #224
  • feat(example): openmp inferences by @rouson in #225
  • chore: rm tmp file by @rouson in #226
  • Fix(example): specify add clauses to OpenMP directive in concurrent-inferences.f90 by @rouson in #228
  • Feature: make assertions removable by @rouson in #229
  • Invoke assert_{diagnose, describe} macros by @rouson in #230
  • Fix OpenMP directives in example/concurrent-inferences.f90 by @rouson in #231
  • Disambiguate kind parameters by @rouson in #232
  • Features(demo): enable assertion removal and selective testing by @rouson in #233
  • Fix demo by @rouson in #234
  • Feature: automate concurrent-inferences example trials & print stats by @rouson in #235
  • fix(neural_net): rm block name to elim name clash by @rouson in #236
  • Fixes for building with ifx by @aury6623 in #237
  • chore(neural_network_t): make tensor_map_t components private by @rouson in #238
  • fix(trainable_network): work around nagfor bug by @rouson in #239
  • Feature: read/write/test ICAR time step data by @rouson in #240
  • fix(example): better usage info and argument handling by @rouson in #243

Full Changelog: 0.15.0...0.16.0

Flexible tensor reads and optional double-precision inference

29 Oct 23:35
9a277f2

This release offers

  • A new version of demo/app/train-cloud-microphysics.f90 that reads tensor component names from demo/training_configuration.json and then reads the named variables from training data files written by Berkeley Lab's ICAR fork (see the sketch after this list). ☁️
  • A switch to LLVM flang as the supported compiler. (We have submitted bug reports to other compiler vendors.) 🪃
  • Optional double precision inference as demonstrated in demo/app/infer-aerosol.f90. 🔢
  • Non_overridable inference and training procedures (see the fragment after this list). We are collaborating with LLVM flang developers at AMD on leveraging this feature to automatically offload parallel inference and training to graphics processing units (GPUs). 📈
  • A global renaming of this software from Inference-Engine to Fiats in all source code and documentation. 🌐
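
For illustration, demo/training_configuration.json might contain an entry naming the tensor components; the key and variable names below are assumptions, not Fiats's actual schema:

  {
    "tensor names": {
      "inputs"  : ["pressure", "potential_temperature", "temperature"],
      "outputs" : ["potential_temperature_delta"]
    }
  }

And a minimal Fortran fragment showing the non_overridable binding attribute; the type and procedure names are illustrative:

  type, public :: neural_network_t
  contains
    ! non_overridable permits static dispatch, which helps compilers
    ! offload parallel inference and training to GPUs.
    procedure, non_overridable :: infer
  end type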

What's Changed

  • Replace gfortran in CI with flang-new by @ktras in #200
  • Support building cloud-microphysics with LLVM Flang by @ktras in #203
  • Update filenames being read in by infer_aerosol by @ktras in #204
  • Remove unallowed whitespace from project name in demo/fpm.toml by @ktras in #209
  • Fix GELU & sigmoid activation precision by @rouson in #214
  • Make all procedures involved in inference and training non_overridable by @rouson in #215
  • Simplify class relationships by @rouson in #217
  • Rename Inference-Engine to Fiats by @rouson in #218
  • Merge multi-precision support into main by @rouson in #213
  • Generalize train cloud microphysics by @rouson in #220
  • doc(README): "tensor names" in JSON configuration by @rouson in #221

Full Changelog: 0.14.0...0.15.0

Parallel training

15 Oct 04:49
03ea107

What's Changed

  • Update README with -Ofast flag for flang-new build notes by @ktras in #201
  • Parallel training via do concurrent by @rouson in #202
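
A minimal sketch of the pattern behind this feature; the variable and procedure names are illustrative, not Fiats's actual interfaces:

  ! Compute each mini-batch's contribution concurrently; iterations of a
  ! do concurrent loop may execute in any order, or in parallel, and any
  ! procedures they reference must be pure.
  do concurrent (integer :: batch = 1:size(mini_batches))
    gradients(batch) = network%cost_gradient(mini_batches(batch))
  end do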

Full Changelog: 0.13.0...0.14.0

New JSON file format

14 Aug 04:15
3443f8d

The new file format includes

  • A file-version indicator, currently named acceptable_engine_tag, denoting the git tag used to create the new format.
  • Better nomenclature:
    • The minima and maxima fields are now intercept and slope, respectively, to better match their purpose: defining a linear map to and from the unit interval (see the sketch after this list).
    • The encompassing inputs_range and outputs_range objects are now inputs_map and outputs_map. 🗺️
  • A fix for cloud_microphysics/setup.sh. ☁️
  • Other minor bug fixes and enhancements.
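
A sketch of the renamed objects; only the object and field names come from the list above, while the surrounding structure and values are made up:

  "inputs_map" : {
    "intercept" : [0.0, -40.0],
    "slope"     : [1.0, 85.0]
  }

Assuming the conventional intercept/slope parameterization, the map and its inverse are, for each tensor component x and unit-interval value y,

  y = (x - intercept) / slope   ! map to [0,1]
  x = intercept + slope * y     ! map back to physical units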

What's Changed

  • doc(README): specify required gfortran versions by @rouson in #185
  • Enhance saturated mixing ratio example by @rouson in #189
  • Refactor tensor_map_m to improve nomenclature & move phase_space_bin_t to cloud-microphysics by @rouson in #192
  • Add git tag to JSON file to denote inference-engine version that reads and writes the format by @rouson in #194
  • Fixes #195 by @rouson in #196

Full Changelog: 0.12.0...0.13.0

Support LLVM Flang + add entropy-maximizing filter to speed convergence

21 Jul 06:00
7d564b8

This release adds a feature to speed convergence and support for a fourth compiler in addition to the GNU, NAG, & Intel compilers. Specifically, this release

  1. Passes all tests with the LLVM Flang (flang-new) compiler.
  2. Adds new options to the cloud-microphysics/app/train-cloud-microphysics.f90 program (see the example invocation after this list):
    • --bins filters the training data to maximize the Shannon entropy by selecting only one data point per bin in a five-dimensional phase space.
    • --report controls the frequency of writing JSON files to reduce file I/O costs.
  3. Eliminates several warning messages from the NAG compiler (nagfor).
  4. Switches a dependency from Sourcery to Julienne, eliminating the requirement for coarray feature support.
  5. Adds the GELU activation function.
  6. Speeds up the calculation of the data needed to construct histograms.
  7. Adds a new cloud-microphysics/train.sh script to manage the training process.
  8. Adds the ability to terminate a training run based on a cost-function tolerance rather than a fixed number of epochs.
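
For illustration, a hypothetical training invocation combining the new options (the flags come from item 2 above; the program invocation and report interval are assumptions):

  fpm run train-cloud-microphysics -- --bins --report 10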

What's Changed

  • Remove second, unneeded and no longer supported build of gcc by @ktras in #150
  • build: update to sourcery 4.8.1 by @rouson in #151
  • doc(README): add instructions for auto offload by @rouson in #152
  • Work around ifx automatic-offloading bugs by @rouson in #145
  • Add bug workarounds for gfortran-14 associate-stmt bug by @ktras in #155
  • Switching from Sourcery to Julienne by @rouson in #154
  • Update fpm manifest with tag for v1.0 of dependency julienne by @ktras in #157
  • Support compilation with LLVM Flang by @ktras in #159
  • Update cloud-microphysics compiler and dependencies by @rouson in #160
  • Add GELU activation function by @rouson in #161
  • Feature: Faster histogram construction when the number of bins exceeds 80 by @rouson in #162
  • Read & perform inference on networks for which the hidden-layer width varies across layers by @rouson in #166
  • Fix/Feature(JSON): disambiguate tensor_range objects and allow flexible line-positioning of objects by @rouson in #165
  • Feature: Support JSON output for networks with varying-width hidden layers by @rouson in #167
  • Feature: filter training data for maximal information entropy via flat multidimensional output-tensor histograms by @rouson in #169
  • Features: maximize information entropy and variable reporting interval. by @rouson in #170
  • build: add single-file compile script by @rouson in #171
  • Add ubuntu to CI by @ktras in #156
  • Feature: add training script in cloud-microphysics/train.sh by @rouson in #172
  • feat(train.sh): graceful exits by @rouson in #173
  • refac(train): rm redundant array allocations by @rouson in #174
  • feat(cloud-micro): write 1st/last cost, fewer JSON by @rouson in #175
  • feat(train.sh): add outer loop for refinement by @rouson in #176
  • feat(cloud-micro): terminate on cost-tolerance by @rouson in #177
  • Concurrent loop through each mini-batch during training by @rouson in #178
  • test(adam): reset iterations so all tests pass with flang-new by @rouson in #179
  • doc(README): add flags to optimize builds by @rouson in #180
  • fix(single-source): mv script outside fpm's purview by @rouson in #182
  • doc(README): optimize ifx builds by @rouson in #181
  • Eliminate compiler warnings by @rouson in #183
  • fix(single-file-source): respect file extension case by @rouson in #184

Full Changelog: 0.11.1...0.12.0

Selective test execution and compiler workarounds

26 Apr 00:39
007de65

New Feature

This release enables selecting a subset of tests to run based on a search for substrings contained in the test output.
All test output is of the form

<Subject>
   passes on <description 1>.
   FAILS on <description 2>.

where the subject describes what is being tested (e.g., A tensor_range_t object) and the description details how the subject is being tested (e.g., component-wise construction followed by conversion to and from JSON). The subject typically contains a type name such as tensor_range_t. The description typically does not contain a type name. Therefore, running the command

fpm test -- --contains tensor_range_t

will execute and report the outcome of all tests of the given subject, tensor_range_t, and only those tests. For test output similar to that shown above, this would display two test outcomes: one passing and one failing.

By contrast, running the command

fpm test -- --contains "component-wise construction"

would execute and report the outcome of the tests with descriptions containing component-wise construction for any subject.

This release also works around a few compiler bugs and reorders the tests so that the fastest and most stable tests run first.

What's Changed

  • Work around ifx bug by @rouson in #142
  • Fix filename extension for file that has directives by @ktras in #143
  • feat(inference_engine_t): tensor_range_t getters (later removed) by @rouson in #147
  • Cray bug workarounds for compile time bugs by @ktras in #146
  • Feature: redesigned functionality for mapping input and output tensors to and from training ranges by @rouson in #148
  • Test reordering and selective test execution by @rouson in #149

Full Changelog: 0.11.0...0.11.1

Batch normalization, more concurrency, & NAG compiler support

04 Apr 21:36
f572e1d

This release adds

  • A tensor_range_t type that 🧑‍🌾 🌱
    • Encapsulates input and output tensor component minima and maxima,
    • Provides type-bound map_to_training_range and map_from_training_range procedures for mapping tensors to and from the unit interval [0,1], and 🤺
    • Provides a type-bound in_range procedure that users can employ to check whether inference inputs or outputs involve extrapolation beyond the respective ranges employed in training (see the usage sketch after this list).
  • BREAKING CHANGE: the network JSON file format has been updated to include input_range and output_range objects. As a result, this release may fail to read network files written by older versions of Inference-Engine, and older versions may fail to read files written by this release. 🚒 🗄️ 📂
  • Automatic use of the aforementioned mapping capability during inference. 🧭
  • Enhanced concurrency to improve performance: 🐎
    • Additional use of do concurrent in the training algorithm and 🚄 🚋
    • Enabling building with OpenMP in the setup.sh script. 🏗️ 👷‍♀️
  • Additional compiler support: this is the first release that builds with the NAG Fortran compiler (nagfor), Build 7202 or later.
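
A minimal usage sketch of the in_range check described above; all object and variable names are illustrative rather than Fiats's actual identifiers:

  ! Warn when an inference would extrapolate beyond the training ranges.
  if (.not. input_range%in_range(inputs)) then
    print *, "warning: input lies outside the training range"
  end if
  outputs = inference_engine%infer(inputs)  ! mapping to/from the unit interval is now automatic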

What's Changed

  • Simplify app: rm redundant procedures by @rouson in #102
  • Concurrent inference example by @rouson in #103
  • Exploit additional concurrency in the training algorithm by @rouson in #105
  • feat(example): add nested do-loop inferences by @rouson in #106
  • chore(examples): match program names to file names by @rouson in #109
  • feat(infer): allow non-type-bound invocations by @rouson in #110
  • doc(README): minimum gfortran version 13 by @rouson in #111
  • Add new fpm subproject icar-trainer by @ktras in #108
  • Enable OpenMP in setup script & work around related compiler bug by @rouson in #114
  • fix(run-fpm.sh): revert to copying header into build dir by @rouson in #115
  • Remove module keyword from abstract interface by @ktras in #116
  • Compute & output tensor histograms in columnar format & add gnuplot script by @rouson in #118
  • Bugfixes for nag by @ktras in #119
  • fix(examples): .f90->.F90 to preprocess files by @rouson in #121
  • Get beyond one type of Intel bugs by @ktras in #120
  • Nagfor workaround by @rouson in #122
  • chore(test): rm nagfor compiler workaround by @rouson in #129
  • Workaround intel bug by @ktras in #128
  • doc(README): add compilers in testing instructions by @rouson in #130
  • build(fpm.toml): increment dependency versions by @rouson in #131
  • More robust Adam optimizer test by @rouson in #134
  • Ifx workarounds + train longer in Adam test to pass with nagfor by @rouson in #135
  • Store tensor ranges by @rouson in #137
  • build(random_init): rm final nagfor workaround by @rouson in #136
  • Feature: Add input/output tensor component ranges to network files by @rouson in #138
  • Feature: map input to unit range & output tensors from unit range in inference_engine_t infer procedure by @rouson in #139
  • Normalize in training by @rouson in #140
  • Fix training restarts by @rouson in #141

Full Changelog: 0.10.0...0.11.0