pve scripts -- https://helper-scripts.com/scripts
on the host machine, extra repos need to be added. I add the following:
non-free non-free-firmware -- these are added at the end of each currently present line, e.g.:
deb-src http://deb.debian.org/debian/ bookworm-updates main contrib non-free non-free-firmware
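a one-liner to append these automatically -- a sketch, assuming the stock bookworm /etc/apt/sources.list (running it twice will append duplicates):
sed -i '/^deb/ s/$/ non-free non-free-firmware/' /etc/apt/sources.list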
also add the proxmox no-subscription repo:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
now run the following commands:
apt update
apt install -y dkms pve-headers wget
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/550.78/NVIDIA-Linux-x86_64-550.78.run
chmod +x NVIDIA-Linux-x86_64-550.78.run
when running this next command, two yes/no prompts will display. one will mention registering something with dkms -- select "no" -- and the other will ask if you would like to run the nvidia-xconfig utility -- select "no" again
./NVIDIA-Linux-x86_64-550.78.run --dkms
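to skip the prompts entirely, the installer also accepts --silent (check ./NVIDIA-Linux-x86_64-550.78.run --advanced-options to confirm against your driver version):
./NVIDIA-Linux-x86_64-550.78.run --dkms --silent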
nvidia-smi
and you should see the nvidia-smi table showing details about the nvidia card
on the host machine, running ls -l /dev/nvidia* should show at minimum the following:
crw-rw-rw- 1 root root 195, 0 Nov 13 14:10 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Nov 13 14:10 /dev/nvidiactl
crw-rw-rw- 1 root root 511, 0 Nov 13 14:10 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511, 1 Nov 13 14:10 /dev/nvidia-uvm-tools
add the following to the container's config, /etc/pve/lxc/CTID.conf (CTID = the container's numeric ID). note the 511 major number for the nvidia-uvm devices is assigned dynamically, so match it to whatever ls -l /dev/nvidia* showed:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
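restart the container so the new config takes effect (CTID = the container's ID):
pct stop CTID && pct start CTID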
console into the lxc and repeat the steps to add the repos and install the nvidia driver, with the following changes: use this proxmox repo line instead:
deb [trusted=yes] http://download.proxmox.com/debian/pve bookworm pve-no-subscription
and run the installer without the kernel module (the container uses the host's kernel driver):
./NVIDIA-Linux-x86_64-550.78.run --no-kernel-module
run nvidia-smi to confirm the install was successful and the lxc can use the gpu
the next tab contains steps to make the gpu usable in a docker lxc
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
in /etc/nvidia-container-runtime/config.toml, find no-cgroups = false and set it to no-cgroups = true. docker should now be able to utilize the gpu
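a one-liner for that edit -- a sketch assuming the default (possibly commented-out) line; verify the file afterwards:
sed -i 's/^#\?no-cgroups = false/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml
to confirm docker can reach the gpu -- a minimal sketch, assuming docker is already installed in the lxc; the nvidia-ctk step registers the runtime with docker, and the cuda image tag is just an example:
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi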
if wanting to use live monitoring for the gpu -- apt install nvtop
lsblk -o +SERIAL and note the serial entry of each drive you want to pass
cd /dev/disk/by-id then ls and confirm the drive you want to pass by matching the serial number to the drive ID (these IDs will look something like "ata-ST8000DM004-2CX188_ZSC04FW8", with "ZSC04FW8" being the serial number) and run the following command:
qm set VMID --scsiX /dev/disk/by-id/DISK -- DISK being the full ID of the drive e.g. "ata-ST8000DM004-2CX188_ZSC04FW8"
the command confirms with output like: update VM VMID: --scsiX /dev/disk/by-id/DISK
in the command above, "--scsiX" X = number of choice per drive, e.g. adding a drive "--scsi2" and adding a 3rd drive to the same machine "--scsi3"; do not use the same number every time (it will replace drives already added to the vm)
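a worked example using the sample drive above (VM ID 101 is hypothetical):
qm set 101 --scsi2 /dev/disk/by-id/ata-ST8000DM004-2CX188_ZSC04FW8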
to check the configuration file (I haven't used these yet):
grep DISKSERIAL /etc/pve/qemu-server/VMID.conf
which should return a line like:
scsiX: /dev/disk/by-id/DISK
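e.g. with the sample serial above and a hypothetical VM 101:
grep ZSC04FW8 /etc/pve/qemu-server/101.conf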
to remove the disk from a vm:
qm unlink VMID --idlist scsiX
which confirms with output like: update VM VMID: --delete scsiX
samba share definition, added to /etc/samba/smb.conf:
[share]
comment = comment
path = /mount/point
browseable = yes
writeable = yes
read only = no
guest ok = no
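to sanity-check the smb.conf syntax after editing:
testparm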
smbpasswd -a USER and restart the smbd server (systemctl restart smbd)
client credentials for mounting the share (e.g. in a cifs credentials file):
username=SAMBAUSER
password=SAMBAPASSWORD
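a sketch of mounting the share from a linux client with that credentials file (server address and paths are placeholders; needs cifs-utils):
mount -t cifs //SERVERIP/share /mnt/share -o credentials=/root/.smbcredentials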
chown -R 100000:1010000 /mount/point (host-side ownership for an unprivileged lxc's mapped IDs)
to bind mount a host path into an lxc, in /etc/pve/lxc/CTID.conf:
mp0: /HOSTMNTPOINT/DRIVE,mp=/CONTAINERMNTPOINT/DRIVE
adduser jellyfin aaron
lsusb for the device name/ID, the format should look something like "058f:6387", then:
qm set VMID -usb0 host=USB-ID
pvecm nodes to get a list of nodes and pvecm delnode NODENAME to delete a node
rm -r NODENAME (run from /etc/pve/nodes) and refresh the web page
to customize the login banner: nano /etc/motd, and nano /etc/profile.d/FILENAME with contents like:
echo -e ""
echo -e "\e[1mpulse LXC Container\e[0m"
echo -e " 🌐 \e[33mProvided by: \e[1;92mmyself ORG \e[33m| Gitlab: \e[1;92mhttps://gitlab.peanutsmediaserver.com/aaron\e[0m"
echo ""
echo -e " 🖥️ \e[33mOS: \e[1;92mDebian GNU/Linux - Version: 12\e[0m"
echo -e " 🏠 \e[33mHostname: \e[1;92m$(hostname)\e[0m"
echo -e " 💡 \e[33mIP Address: \e[1;92m$(hostname -I | awk '{print $1}')\e[0m"
echo ""
to remove the nvidia driver:
apt remove "*nvidia*" -y
apt purge "*nvidia*" -y