
@ohearnk ohearnk commented Jan 20, 2026

This PR fixes build support for Cray toolchains with AMD GPUs. Specifically, it is a work in progress toward running QUICK on the SDSC Cosmos system (AMD MI300A).

Currently, the single-GPU (HIP) code builds and runs successfully with the following commands:

# allocate a job on and log in to a compute node using Slurm
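#   for example (hypothetical partition/time values; adjust for the target site):
#   salloc --nodes=1 --partition=<gpu-partition> --time=01:00:00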
module swap PrgEnv-cray/8.5.0 PrgEnv-gnu-amd/8.5.0
module load craype-accel-amd-gfx942
module unload cray-mpich/8.1.30 cray-libsci/24.07.0   # TODO: fix MPI+HIP builds with Cray MPI and math libs
cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DOPTIMIZE=TRUE -DCOMPILER=MANUAL \
    -DCMAKE_C_COMPILER=amdclang -DCMAKE_CXX_COMPILER=amdclang++ \
    -DCMAKE_Fortran_COMPILER=gfortran-13 \
    -DCMAKE_C_FLAGS="-O2" -DCMAKE_CXX_FLAGS="-O2" \
    -DCMAKE_Fortran_FLAGS="-O2 -ftree-vectorize -funroll-loops -ffast-math" \
    -DMPI=FALSE -DHIP=TRUE -DENABLEF=FALSE -DQUICK_USER_ARCH=gfx942 \
    -DFORCE_INTERNAL_LIBS="lapack;blas" \
    -DCMAKE_INSTALL_PREFIX=${PWD}/../install_master_PrgEnv-gnu-amd8.5.0_gcc-native13.2.0_rocm6.3.0 \
    &> cmake_configure.log
cmake --build . --verbose &> cmake_build.log
cmake --install .
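As a post-install sanity check, one might run something like the sketch below (assumptions not confirmed by this PR: that the install tree follows the usual QUICK layout with a sourceable quick.rc environment script):

# confirm the MI300A (gfx942) target is visible to the ROCm runtime
rocminfo | grep -i gfx942
# set up the QUICK environment from the install tree (assumed layout)
source ${PWD}/../install_master_PrgEnv-gnu-amd8.5.0_gcc-native13.2.0_rocm6.3.0/quick.rc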

Merge this only once the simplified Cray builds are fixed (via the -DCOMPILER=CRAY CMake variable) and the other build issues are resolved (MPI+HIP builds with Cray MPI, Cray math libraries, etc.). Note also that support for Cray builds in upstream AmberTools needs to be synchronized (hence, patches may need to be upstreamed there for HIP support). A sketch of the intended simplified configure step follows.
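For reference, the eventual simplified Cray configure step might look something like this (hypothetical: per this PR the -DCOMPILER=CRAY path does not work yet, and the exact module and flag set is an assumption):

# hypothetical target workflow once -DCOMPILER=CRAY is supported
module load PrgEnv-cray craype-accel-amd-gfx942
cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DCOMPILER=CRAY -DMPI=TRUE -DHIP=TRUE \
    -DQUICK_USER_ARCH=gfx942 -DCMAKE_INSTALL_PREFIX=${PWD}/../install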

@ohearnk ohearnk self-assigned this Jan 20, 2026
@ohearnk ohearnk added the enhancement (New feature or request) and Bug fix labels Jan 20, 2026
@ohearnk ohearnk added this to the QUICK-25.08 milestone Jan 20, 2026
@vtripath65 vtripath65 requested review from vtripath65 and removed request for vtripath65 January 23, 2026 02:49