Ensure CUDA 12.6 symbolic links are ok. Eg include should point to targets/x86_64-linux/include
/opt/cuda/cuda-12.6/include -> targets/x86_64-linux/include
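A minimal sh sketch of this check (the helper function is our own, not part of the CUDA toolkit; it assumes include is either absent or a stale symlink, not a real directory):

```shell
#!/bin/sh
# Sketch (our own helper, assumed layout): check that $1/include is a
# symbolic link to targets/x86_64-linux/include, recreating it if not.
# Assumes include is absent or a (possibly stale) symlink.
check_cuda_include() {
    root="$1"
    target="targets/x86_64-linux/include"
    if [ "$(readlink "$root/include" 2>/dev/null)" = "$target" ]; then
        echo "include symlink ok"
    else
        echo "recreating include symlink"
        rm -f "$root/include"
        ln -s "$target" "$root/include"
    fi
}

# Example: check_cuda_include /opt/cuda/cuda-12.6
```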
10th December: cuspatial now compiles, links, builds
and runs its gtests OK.
Compile, link and install the build using:
setenv CUSPATIAL_HOME `pwd`/cuspatial
setenv PATH /opt/cuda/cuda-12.6/bin/:"$PATH"
setenv LD_LIBRARY_PATH /opt/cuda/cuda-12.6/lib64:"$LD_LIBRARY_PATH"
echo "Since not restricting the number of threads crashed the machine, do specify -j1"
echo "One thread only, this will be slow (3+ hours)"
setenv PARALLEL_LEVEL "-j1"
cd $CUSPATIAL_HOME
chmod +x ./build.sh
echo "NB fixed CUDA 12.6 installation. Put build into xxx/cuspatial"
./build.sh clean libcuspatial cuspatial tests '--cmake-args="-DCMAKE_INSTALL_PREFIX=xxx/cuspatial"'
This still fails at the end with
no such option: --config-settings
but it has compiled and built everything and put the installation into xxx/cuspatial.
cd xxx/cuspatial/cpp/build/gtests/
Run each of the 54 *_TEST executable programs in turn and check that each says its tests have PASSED (approx 1 minute in total).
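Running all 54 executables by hand is tedious; here is a hedged sh sketch (run_gtests is our own helper, and grepping for PASSED is our assumption about the gtest output format) that loops over them and flags any failures:

```shell
#!/bin/sh
# Sketch (our own helper): run every *_TEST executable in the given
# directory and flag any whose output does not contain PASSED.
run_gtests() {
    dir="$1"
    failed=0
    for t in "$dir"/*_TEST; do
        [ -x "$t" ] || continue
        if "$t" 2>&1 | grep -q 'PASSED'; then
            echo "ok:   ${t##*/}"
        else
            echo "FAIL: ${t##*/}"
            failed=1
        fi
    done
    return $failed
}

# Example: run_gtests xxx/cuspatial/cpp/build/gtests
```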
The remainder of this section on build.sh should be of historical interest only. However, example.cu still fails to compile.
Section namespace describes an initial compilation problem (reported in bug report #562). This section describes problems and workarounds encountered building cuspatial on a Linux desktop with an NVIDIA GeForce RTX 4070 Ti SUPER (compute level 8.9), using CUDA 12.6, version 11.5.0 of the GNU gcc C++ compiler, and cmake 3.26.5.
git clone cuspatial or download https://github.com/rapidsai/cuspatial/archive/refs/heads/branch-24.12.zip
In our CUDA installation, we set the environment variables PATH and LD_LIBRARY_PATH to point to CUDA 12.6. (You may also want to set PATH and LD_LIBRARY_PATH to use your favourite version of the GCC compiler, but they were already set appropriately for the most recent version of g++, so here we did not need to do that.)
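Before building, it can help to record the toolchain versions actually found on $PATH, so a failed build can be matched to the exact compilers used. A small sh sketch (print_tool_versions is our own helper, not part of build.sh):

```shell
#!/bin/sh
# Sketch (our own helper): print the first version line of each tool
# found on $PATH, or "not found" if it is missing.
print_tool_versions() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            printf '%s: %s\n' "$tool" "$("$tool" --version | head -n 1)"
        else
            printf '%s: not found\n' "$tool"
        fi
    done
}

print_tool_versions nvcc g++ cmake
```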
Summary
(2 Dec 2024, still incomplete:
cuspatial and cuproj Python packages not installed?
still get "::" compilation error on example.cu)
setenv PATH /opt/cuda/cuda-12.6/bin/:"$PATH"
setenv LD_LIBRARY_PATH /opt/cuda/cuda-12.6/lib64:"$LD_LIBRARY_PATH"
unzip cuspatial-branch-24.12.zip
mv cuspatial-branch-24.12 cuspatial
setenv CUSPATIAL_HOME `pwd`/cuspatial
setenv EXTRA_CMAKE_ARGS "-DCMAKE_INSTALL_PREFIX=~/assugi/cuproj/cuspatial -DCUDAToolkit_INCLUDE_DIR=/opt/cuda/cuda-12.6/targets/x86_64-linux/include"
setenv PARALLEL_LEVEL "-j1"
cd $CUSPATIAL_HOME && chmod +x ./build.sh && ./build.sh libcuspatial cuspatial tests
Perhaps in future we could use build.sh's --allgpuarch (build for all supported GPU architectures) to compile cuspatial on a computer without its own GPU?
-- Unable to find cuda_runtime.h in "/opt/cuda/cuda-12.6/include" for CUDAToolkit_INCLUDE_DIR.
-- Unable to find cublas_v2.h in either "" or "/server/opt/math_libs/include"
setenv EXTRA_CMAKE_ARGS "-DCUDAToolkit_INCLUDE_DIR=/opt/cuda/cuda-12.6/targets/x86_64-linux/include"
CMake Error at build/_deps/proj-src/cmake/CMakeLists.txt:110 (file):
  file RELATIVE_PATH called with incorrect number of arguments
The cmake command file(RELATIVE_PATH ...) gives an error because it needs three arguments but is given only two. On investigation, the third argument, ${CMAKE_INSTALL_PREFIX}, is null (hence the error).
setenv EXTRA_CMAKE_ARGS "-DCMAKE_INSTALL_PREFIX=~/assugi/cuproj/cuspatial"
(We expanded ~ to give a complete path name.)
./build.sh clean libcuspatial cuspatial tests '--cmake-args="-DCUDAToolkit_ROOT=/opt/cuda/cuda-12.6"'
Note the use of single quotes (') to enclose the whole of --cmake-args=.
setenv PARALLEL_LEVEL "-j1"
${PARALLEL_LEVEL} is used explicitly by cuspatial-branch-24.12/build.sh.
Note that CMAKE_BUILD_PARALLEL_LEVEL did not work and was ignored:
CMake Warning: Manually-specified variables were not used by the project: CMAKE_BUILD_PARALLEL_LEVEL
nvcc compiler (v12.6) error message:
../cuspatial-branch-24.12/cpp/_deps/rmm-src/include/rmm/mr/device/device_memory_resource.hpp(312): error: name followed by "::" must be a class or namespace name
  friend void get_property(device_memory_resource const&, cuda::mr::device_accessible) noexcept {}
This is an awful error message. It seems (see forums.developer) that the problem is trying to compile cuproj code for a graphics card whose compute level does not support cuproj. (The rapidsai documentation says the minimum compute level is 6.0.)
Specifying a suitable compute level on the nvcc command line
nvcc .. example.cu -gencode=arch=compute_60,code=sm_60
(and/or the actual compute level of your GPU) did not work.