Intel® SHMEM provides an efficient implementation of GPU-initiated communication.
### SYCL support <!-- omit in toc -->
Intel® oneAPI DPC++/C++ Compiler with Level Zero support.
## Installation
### Building Level Zero
For detailed information on Level Zero, refer to the [Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver repository](https://github.com/intel/compute-runtime/releases) or to the [installation guide](https://dgpu-docs.intel.com/installation-guides/index.html) for oneAPI users.
To install it, download oneAPI Level Zero from the repository linked above.
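As a rough illustration only (the exact package set and file names are release-specific and should be taken from the release page linked above), an installation on a Debian-based system might look like this:

```
# Hypothetical sketch: install a Level Zero / compute runtime release on a
# Debian-based system. Download the .deb assets published with the release
# you selected into an empty directory first.
mkdir level-zero-install && cd level-zero-install
# ...download the release .deb packages into this directory...
sudo dpkg -i *.deb
```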
Intel® SHMEM requires a host OpenSHMEM or MPI back-end for host-sided operations support. In particular, the OpenSHMEM back-end relies on a collection of extension APIs (`shmemx_heap_create`, `shmemx_heap_preinit`, and `shmemx_heap_postinit`) to coordinate the Intel® SHMEM and OpenSHMEM heaps. We recommend [Sandia OpenSHMEM v1.5.3rc1](https://github.com/Sandia-OpenSHMEM/SOS/releases/tag/v1.5.3rc1) or newer for this purpose. A [work-in-progress branch](https://github.com/davidozog/oshmpi/tree/wip/ishmem) of [OSHMPI](https://github.com/pmodels/oshmpi.git) is also supported but is currently considered experimental. See the [Building OSHMPI](#building-oshmpi-optional-and-experimental) section below for more details.
We recommend the Intel® MPI Library as the MPI back-end option for the current version of Intel® SHMEM. See the [Building Intel® SHMEM](#building-intel-shmem) section below for more details.
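If the Intel® MPI Library was installed as part of the oneAPI toolkits, its compilers and launchers are typically made available by sourcing the oneAPI environment script. A sketch, assuming the default installation path:

```
# Sketch: load the oneAPI environment so the Intel MPI compilers and
# launchers (mpicc, mpiexec, ...) are on PATH. Adjust the path if oneAPI
# is installed elsewhere.
source /opt/intel/oneapi/setvars.sh
```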
### Building Sandia OpenSHMEM (SOS)
Download the SOS repository to configure it as a back-end for Intel® SHMEM.
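For example (a minimal sketch using the release tag recommended above):

```
git clone https://github.com/Sandia-OpenSHMEM/SOS.git
cd SOS
git checkout v1.5.3rc1
```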
Build SOS following the instructions below. `FI_HMEM` support in the provider is required.
Choose an appropriate PMI configure flag based on the PMI client library available on the system; see the [SOS Wiki pages](https://github.com/Sandia-OpenSHMEM/SOS/wiki) for further instructions. Optionally, users may also add `--disable-fortran`, since the Fortran interfaces will not be used.
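The lines below are only a sketch of such a build, assuming an OFI (libfabric) provider with `FI_HMEM` support and the simple PMI client; adjust `--with-ofi` and the PMI flag for your system, and consult the SOS Wiki for the authoritative set of options.

```
# Sketch only: configure and build SOS against libfabric with simple PMI.
./autogen.sh
./configure --prefix=<sos_dir> \
            --with-ofi=<libfabric_install_dir> \
            --enable-pmi-simple \
            --disable-fortran
make -j
make install
```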
### Building OSHMPI (Optional and experimental)
Intel® SHMEM has experimental support for OSHMPI when built using the Intel® MPI Library.
For setup instructions, see [Get Started with Intel® MPI Library on Linux](https://www.intel.com/content/www/us/en/docs/mpi-library/get-started-guide-linux/2021-11/overview.html).
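The commands below are only a sketch of how such a build might look, assuming the Intel® MPI compiler wrappers are already in the environment; the branch name comes from the link above, and the configure options should be verified against the OSHMPI documentation.

```
# Sketch only: build the experimental OSHMPI branch with Intel MPI wrappers.
git clone -b wip/ishmem https://github.com/davidozog/oshmpi.git
cd oshmpi
./autogen.sh
./configure --prefix=<oshmpi_dir> CC=mpicc CXX=mpicxx
make -j
make install
```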
Check that the SOS build process has successfully created a `<shmem_dir>` directory with `include` and `lib` as subdirectories. Verify that `shmem.h` and `shmemx.h` are present in `include`.
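For instance, a quick check (with `<shmem_dir>` set to your SOS install prefix):

```
ls <shmem_dir>/include/shmem.h <shmem_dir>/include/shmemx.h <shmem_dir>/lib
```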
Build Intel® SHMEM with an OpenSHMEM back-end using the following instructions:
Alternatively, Intel® SHMEM can be built by enabling an Intel® MPI Library back-end.
For setup instructions, see [Get Started with Intel® MPI Library on Linux](https://www.intel.com/content/www/us/en/docs/mpi-library/get-started-guide-linux/2021-11/overview.html).
where `<back-end>` is the selected host back-end library.
- *Note:* Currently supported launchers include MPI process launchers (e.g., `mpiexec`, `mpiexec.hydra`, `mpirun`), Slurm (e.g., `srun`, `salloc`), and PBS (e.g., `qsub`).
- *Note:* The Intel® SHMEM execution model requires applications to use a 1:1 mapping between PEs and GPU devices. Attempting to run an application without the `ishmrun` launch script may result in undefined behavior if this mapping is not maintained.
- For further details on device selection, please see [the ONEAPI_DEVICE_SELECTOR](https://github.com/intel/llvm/blob/sycl/sycl/doc/EnvironmentVariables.md#oneapi_device_selector).
3. Validate the application ran successfully; example output:
```
Selected device: Intel(R) Data Center GPU Max 1550
```
To launch a single test, execute:

```
ctest -R <test_name>
```
Alternatively, all the tests in a directory (such as `test/unit/`) can be run with the following command:
```
ctest --test-dir <directory_name>
```
By default, a passed or failed test can be detected by the output.
To have a test's output printed to the console, add either the `--verbose` or `--output-on-failure` flag to the `ctest` command.
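For example:

```
ctest -R <test_name> --output-on-failure
```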
### Available Scheduler Wrappers for Jobs Run via CTest
The following values may be assigned to `CTEST_LAUNCHER` at configure-time (e.g., `-DCTEST_LAUNCHER=mpi`) to set which scheduler will be used to run tests launched through a call to `ctest`:
- srun (default)
  - Launches CTest jobs on a single node using Slurm's `srun`.