Installing CUDA SDK 2.1 does not automatically install the other two components (the nVidia drivers and the CUDA toolkit). Instead, installing CUDA SDK 2.1 on its own can fail because the SDK requires components provided by the CUDA toolkit.
Possibly "emerge nvidia-cuda-sdk" could be used to ensure all the missing components required by the CUDA SDK are downloaded and installed. However, this is not necessary if the three components are installed in order.
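If you want to see what Portage would pull in without installing anything, a minimal sketch (assuming a standard Portage setup) is to ask emerge for a dry run first:

emerge --pretend nvidia-cuda-sdk

This only lists the packages that would be built; nothing is installed.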
Some of these steps have to be done as root, e.g. adding drivers to the Linux kernel and changing user account privileges. Other parts might be possible as a normal user, but this has not been investigated.
The ebuild option merge will both fetch the required installation kit and install it. However, I preferred to separate the download and installation phases by using ebuild fetch and later ebuild merge.
ebuild /usr/portage/x11-drivers/nvidia-drivers/nvidia-drivers-180.27.ebuild fetch
ebuild /usr/portage/x11-drivers/nvidia-drivers/nvidia-drivers-180.27.ebuild merge

There are various compilation warnings and "QA Notice" messages but the build seems to go through ok.
(Still as root) the nVidia drivers installation said it needed the following commands:
modprobe -r nvidia
eselect opengl set nvidia
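A quick way to confirm the result (my own check, not part of the original instructions) is:

lsmod | grep nvidia
eselect opengl list

The first command should show the nvidia kernel module once it has been (re)loaded, and the second should show nvidia marked as the active OpenGL implementation.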
ebuild /usr/portage/dev-util/nvidia-cuda-toolkit/nvidia-cuda-toolkit-2.1.ebuild fetch
ebuild /usr/portage/dev-util/nvidia-cuda-toolkit/nvidia-cuda-toolkit-2.1.ebuild merge

There is a "QA Notice" but the build seems to go through ok.
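To confirm the toolkit is in place, a minimal check (assuming the ebuild installs the toolkit under /opt/cuda, which matches the SDK paths used below) is to ask the CUDA compiler for its version:

/opt/cuda/bin/nvcc --version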
ebuild /usr/portage/dev-util/nvidia-cuda-sdk/nvidia-cuda-sdk-2.10.1215.2015.ebuild fetch
ebuild /usr/portage/dev-util/nvidia-cuda-sdk/nvidia-cuda-sdk-2.10.1215.2015.ebuild merge

There are various compilation warnings (e.g. for smokeParticles) and the "QA Notice" but the build seems to go through ok.
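A quick sanity check that the SDK examples were built (a sketch, using the install path that appears later in these notes) is to list the release binaries:

ls /opt/cuda/sdk/bin/linux/release/

deviceQuery, used below, should be among them.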
There are command scripts (e.g. REHL.bat) to do this, but since I have only one device it seemed easier to create my own. (This needs to be run after each reboot.) The first mknod creates a Linux device node for the hardware called "nvidia0". If you have more nVidia cards, the next one should be nvidia1, and so on. The last mknod creates a (pseudo) device node "nvidiactl" shared by all your nVidia hardware.
#!/bin/bash
#
# REHL.bat
# much simplified by WBL 19 March 2009
#
# Startup script for nVidia CUDA
#
# chkconfig: 345 80 20
# description: Startup/shutdown script for nVidia CUDA

mknod -m 666 /dev/nvidia0 c 195 0
#mknod -m 666 /dev/nvidia1 c 195 1 #second card
#mknod -m 666 /dev/nvidia2 c 195 2 #third card
mknod -m 666 /dev/nvidiactl c 195 255

That's it. All should be well now.
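To check the device nodes were created correctly (my own check, not part of the original script), list them and confirm they are character devices with major number 195:

ls -l /dev/nvidia*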
The nVidia-supplied SDK tool deviceQuery should now run and show details of your device:
/opt/cuda/sdk/bin/linux/release/deviceQuery
echo "Local: /root/boot_nvidia.bat"
/root/boot_nvidia.bat

Where /root/boot_nvidia.bat is as above, except optionally add running deviceQuery. So it now looks something like:
#!/bin/bash
#
# REHL.bat
# much simplified by WBL 19 March 2009
#
# Startup script for nVidia CUDA
#
# chkconfig: 345 80 20
# description: Startup/shutdown script for nVidia CUDA

mknod -m 666 /dev/nvidia0 c 195 0
#mknod -m 666 /dev/nvidia1 c 195 1 #second card
#mknod -m 666 /dev/nvidia2 c 195 2 #third card
mknod -m 666 /dev/nvidiactl c 195 255

#prove T10P is working and turn off its fan
/opt/cuda/sdk/bin/linux/release/deviceQuery
#
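One small point not spelled out above: for /root/boot_nvidia.bat to be run by path as in the startup lines, it needs to be executable, e.g.:

chmod +x /root/boot_nvidia.bat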