@@ -88,8 +88,32 @@ description in :ref:`json-format-label`.
 to execute memory-intensive threads. Therefore, it is crucial to ensure that
 :code:`stress-ng` is installed on all worker nodes.
 
+
+Dask
+++++++++
+
+`Dask <https://www.dask.org/>`_ is an open-source library for parallel computing
+in Python. It makes it possible to easily implement and execute workflows on local
+machines, HPC cluster schedulers, and cloud-based and container-based environments.
+Below, we provide an example of how to generate a workflow benchmark for running
+with Dask::
+
+    import pathlib
+
+    from wfcommons import BlastRecipe
+    from wfcommons.wfbench import WorkflowBenchmark, DaskTranslator
+
+    # create a workflow benchmark object to generate specifications based on a recipe
+    benchmark = WorkflowBenchmark(recipe=BlastRecipe, num_tasks=500)
+
+    # generate a specification based on performance characteristics
+    benchmark.create_benchmark(pathlib.Path("/tmp/"), cpu_work=100, data=10, percent_cpu=0.6)
+
+    # generate a Dask workflow
+    translator = DaskTranslator(benchmark.workflow)
+    translator.translate(output_folder=pathlib.Path("./dask-wf/"))
+
 Nextflow
 ++++++++
+
 `Nextflow <https://www.nextflow.io/>`_ is a workflow management system that enables
 the development of portable and reproducible workflows. It supports deploying workflows
 on a variety of execution platforms including local, HPC schedulers, and cloud-based
@@ -117,29 +141,16 @@ workflow benchmark for running with Nextflow::
     that depend on another instance of the same abstract task. Thus, the translator
     fails when you try to translate a workflow with iterations.
 
-Dask
-++++++++
-
-`Dask <https://www.dask.org/>`_ is an open-source library for parallel computing
-in Python. It makes it possible to easily implement and execute workflows on local
-machines, HPC cluster schedulers, and cloud-based and container-based environments.
-Below, we provide an example of how to generate a workflow benchmark for running
-with Dask::
-
-    import pathlib
-
-    from wfcommons import BlastRecipe
-    from wfcommons.wfbench import WorkflowBenchmark, DaskTranslator
-
-    # create a workflow benchmark object to generate specifications based on a recipe
-    benchmark = WorkflowBenchmark(recipe=BlastRecipe, num_tasks=500)
-
-    # generate a specification based on performance characteristics
-    benchmark.create_benchmark(pathlib.Path("/tmp/"), cpu_work=100, data=10, percent_cpu=0.6)
-
-    # generate a Dask workflow
-    translator = DaskTranslator(benchmark.workflow)
-    translator.translate(output_folder=pathlib.Path("./dask-wf/"))
-
+.. note::
+
+    If you plan to run Nextflow on an HPC system using Slurm, we **strongly
+    recommend** using the `HyperQueue <https://github.com/It4innovations/hyperqueue>`_
+    executor. HyperQueue efficiently distributes workflow tasks across all allocated
+    compute nodes, improving scalability and resource utilization.
 
+    The :class:`~wfcommons.wfbench.translator.nextflow.NextflowTranslator`
+    class includes functionality to automatically generate a Slurm script
+    template for running the workflow on HPC systems.
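+
+    For instance, under the same setup as the other examples on this page
+    (and assuming :code:`NextflowTranslator` is importable from
+    :code:`wfcommons.wfbench`, mirroring the other translators), the
+    translation step could look like::
+
+        import pathlib
+
+        from wfcommons.wfbench import NextflowTranslator
+
+        # generate a Nextflow workflow; on HPC systems a Slurm script
+        # template is emitted alongside the workflow files
+        translator = NextflowTranslator(benchmark.workflow)
+        translator.translate(output_folder=pathlib.Path("./nextflow-wf/"))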
 
 Pegasus
 +++++++
@@ -175,6 +186,31 @@ for running with Pegasus::
     the :code:`lock_files_folder` parameter when using
     :meth:`~wfcommons.wfbench.bench.WorkflowBenchmark.create_benchmark`.
 
+PyCOMPSs
+++++++++
+
+`PyCOMPSs <https://compss.bsc.es/>`_ is a programming model and runtime that
+enables the parallel execution of Python applications on distributed computing
+infrastructures. It allows developers to define tasks using simple Python
+decorators, automatically handling task scheduling, data dependencies, and
+resource management. Below, we provide an example of how to generate a workflow
+benchmark for running with PyCOMPSs::
+
+    import pathlib
+
+    from wfcommons import CyclesRecipe
+    from wfcommons.wfbench import WorkflowBenchmark, PyCompssTranslator
+
+    # create a workflow benchmark object to generate specifications based on a recipe
+    benchmark = WorkflowBenchmark(recipe=CyclesRecipe, num_tasks=200)
+
+    # generate a specification based on performance characteristics
+    benchmark.create_benchmark(pathlib.Path("/tmp/"), cpu_work=500, data=1000, percent_cpu=0.8)
+
+    # generate a PyCOMPSs workflow
+    translator = PyCompssTranslator(benchmark.workflow)
+    translator.translate(output_folder=pathlib.Path("./pycompss-wf/"))
+
 Swift/T
 +++++++
 