There is a GPU-accelerated version of UMAP provided by the RAPIDS cuML library. The original implementation, umap-learn, is CPU-only, [according to the docs](https://umap-learn.readthedocs.io/en/latest/faq.html#is-there-gpu-or-multicore-cpu-support). **(Confirm this. Numba supposedly can use CUDA GPUs, so why not? Likely because Numba's CUDA target requires explicitly written `@cuda.jit` kernels rather than transparently offloading `@njit` code, so umap-learn's Numba-compiled routines stay on the CPU.)**

Some docs:

- Aug 27, 2024: [RAPIDS 24.08: Better scalability, performance, and CPU/GPU interoperability](https://medium.com/rapids-ai/rapids-24-08-better-scalability-performance-and-cpu-gpu-interoperability-f88086386da6)
- Oct 31, 2024: [Even Faster and More Scalable UMAP on the GPU with RAPIDS cuML](https://developer.nvidia.com/blog/even-faster-and-more-scalable-umap-on-the-gpu-with-rapids-cuml/)
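
Rough usage sketch (untested; assumes the `cuml` package is installed and a CUDA GPU is available): cuML's `UMAP` follows the same scikit-learn-style estimator API as umap-learn, so switching should mostly be an import swap. The parameter values below are arbitrary illustrations, not recommendations.

```python
import numpy as np

# Toy data just for the sketch.
X = np.random.default_rng(0).random((10_000, 50), dtype=np.float32)

# CPU reference implementation (umap-learn, Numba-compiled).
from umap import UMAP as CPUUMAP
emb_cpu = CPUUMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)

# GPU implementation from RAPIDS cuML; same estimator-style interface.
from cuml.manifold import UMAP as GPUUMAP
emb_gpu = GPUUMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)

print(emb_cpu.shape, emb_gpu.shape)  # both should be (10000, 2)
```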