
dgaglioni
Members · 9 posts
-
Thank you so much!
-
Thank you Jorge, I did a cat on fstab and that's all the information I got:

root@NAS:~# cat /etc/fstab
/dev/sda1 /boot vfat rw,flush,noatime,nodiratime,dmask=77,fmask=177,shortname=mixed
/boot/bzmodules /lib/modules squashfs ro,defaults
/boot/bzfirmware /lib/firmware squashfs ro,defaults
tmpfs /dev/shm tmpfs defaults
hugetlbfs /hugetlbfs hugetlbfs defaults

I was hoping to see the mount options for the pools I have. Do you know where I can find them, so I can add the autodefrag option?
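(For anyone else looking: the pools don't show up in /etc/fstab because Unraid mounts them itself at array start, so the active options have to be read from the kernel's mount table instead. A minimal check, assuming a pool mounted at /mnt/cache; substitute your own pool name:)

# List every btrfs mount with its live options; Unraid pools appear
# here even though they are absent from /etc/fstab.
grep btrfs /proc/mounts

# Narrow it to one pool (the /mnt/cache path is an assumption).
grep /mnt/cache /proc/mounts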
-
How can I mount my btrfs pool with autodefrag enabled?
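One possible approach, sketched under the assumption that the pool is mounted at /mnt/cache (substitute your own pool path): autodefrag can be enabled on an already-mounted btrfs filesystem with a remount. It takes effect immediately but does not survive a reboot or an array stop/start, so it has to be reapplied each time, for example from the go file or a user script.

# Enable autodefrag on a live btrfs mount; takes effect immediately,
# but is lost on the next reboot or array stop.
mount -o remount,autodefrag /mnt/cache

# Verify the option is now active.
grep /mnt/cache /proc/mounts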
-
Hi guys, I'm running version 6.11.2 and the installation is stuck; here is the output:

plugin: installing: snmp.plg
Executing hook script: pre_plugin_checks
plugin: downloading: snmp.plg ... 100%
plugin: downloading: snmp.plg ... done
Executing hook script: pre_plugin_checks
+==============================================================================
| Skipping package perl-5.32.0-x86_64-1 (newer vesion already installed)
+==============================================================================
+==============================================================================
| Skipping package libnl-1.1.4-x86_64-3 (already installed)
+==============================================================================
+==============================================================================
| Skipping package net-snmp-5.9-x86_64-1 (already installed)
+==============================================================================
+==============================================================================
| Skipping package unraid-snmp-2021.05.21-x86_64-1 (already installed)
+==============================================================================
+==============================================================================
| Testing SNMP by listing mounts, /boot should be present
+==============================================================================
snmpwalk -v 2c localhost -c public hrFSMountPoint
snmpwalk failure
Couldn't find /boot mount point. SNMP output: Timeout: No Response from localhost
plugin: run failed: /bin/bash
Executing hook script: post_plugin_checks
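A side note on reading that failure: the "Timeout: No Response from localhost" line suggests snmpd itself never answered, rather than the /boot mount actually being missing. Two generic net-snmp checks can narrow it down; nothing Unraid-specific is assumed here:

# Is the snmpd daemon actually running? (the [s] trick keeps grep
# from matching its own process)
ps aux | grep [s]nmpd

# Query one basic OID; if this also times out, the daemon or its
# community/config is the problem, not the mount list.
snmpget -v 2c -c public localhost sysDescr.0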
-
GPU Passthrough kvm run failed Bad address
dgaglioni replied to dgaglioni's topic in VM Engine (KVM)
I installed an old NVIDIA driver, version 472.12, and everything has been working fine for a few days. -
Hi everyone, I suspect that the latest NVIDIA driver (526.47) is not working properly with passthrough on Unraid 6.11.1. After I launch a game and play for about 5 minutes, my Win10 VM pauses and I get the error "error: kvm run failed Bad address", as you can see in the logs below. I've tried configuring ACS (screenshot below) and editing the /config/go file (contents below) to fix the issue, but it doesn't work. For troubleshooting purposes I ran FurMark for 15 minutes with no issues in the VM. I tried the following games, and all of them crashed the VM:

Quake 1 RTX
Flight Simulator
The Witcher 3

The only way I found to get the GPU back after the VM crashes is rebooting the server.

My server spec:
Ryzen 5900X (8 cores for the Win10 VM)
Asus X570 TUF PLUS (BIOS 4403)
32GB RAM 3200MHz (16GB for the Win10 VM)
RTX 3080 10GB Gainward
Corsair HX850i

I really appreciate any help on this issue.

Logs output:

-device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
-chardev socket,id=charchannel0,fd=35,server=on,wait=off \
-device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"vfio-pci","host":"0000:0b:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0"}' \
-device '{"driver":"vfio-pci","host":"0000:0b:00.1","id":"hostdev1","bus":"pci.5","addr":"0x0"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
2022-10-31T06:59:32.271373Z qemu-system-x86_64: terminating on signal 15 from pid 6224 (/usr/sbin/libvirtd)
2022-10-31 06:59:33.569+0000: shutting down, reason=shutdown
2022-10-31 06:59:56.964+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 5.19.14-Unraid, hostname: NAS
LC_ALL=C \
PATH=/bin:/sbin:/usr/bin:/usr/sbin \
HOME='/var/lib/libvirt/qemu/domain-4-Windows 10' \
XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-4-Windows 10/.local/share' \
XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-4-Windows 10/.cache' \
XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-4-Windows 10/.config' \
/usr/local/sbin/qemu \
-name 'guest=Windows 10,debug-threads=on' \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-4-Windows 10/master-key.aes"}' \
-blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/5a2e6419-57dc-a0fa-00b8-62324f670400_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-7.1,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-accel kvm \
-cpu host,migratable=on,topoext=on,host-cache-info=on,l3-cache=off \
-m 16384 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184}' \
-overcommit mem-lock=off \
-smp 16,sockets=1,dies=1,cores=8,threads=2 \
-uuid 5a2e6419-57dc-a0fa-00b8-62324f670400 \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=41,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device '{"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"}' \
-device '{"driver":"pcie-pci-bridge","id":"pci.2","bus":"pci.1","addr":"0x0"}' \
-device '{"driver":"pcie-root-port","port":9,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x1"}' \
-device '{"driver":"pcie-root-port","port":10,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x2"}' \
-device '{"driver":"pcie-root-port","port":11,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x3"}' \
-device '{"driver":"pcie-root-port","port":12,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x4"}' \
-device '{"driver":"nec-usb-xhci","p2":15,"p3":15,"id":"usb","bus":"pcie.0","addr":"0x7"}' \
-device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.3","addr":"0x0"}' \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/nvme-SAMSUNG_MZVLB512HBJQ-00000_S4GENX0R144120","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device '{"driver":"ide-hd","bus":"ide.2","drive":"libvirt-1-format","id":"sata0-0-2","bootindex":1,"write-cache":"on"}' \
-netdev tap,fd=42,id=hostnet0 \
-device '{"driver":"vmxnet3","netdev":"hostnet0","id":"net0","mac":"52:54:00:08:6e:b2","bus":"pci.2","addr":"0x1"}' \
-chardev pty,id=charserial0 \
-device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
-chardev socket,id=charchannel0,fd=40,server=on,wait=off \
-device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"vfio-pci","host":"0000:0b:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0"}' \
-device '{"driver":"vfio-pci","host":"0000:0b:00.1","id":"hostdev1","bus":"pci.5","addr":"0x0"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
error: kvm run failed Bad address
RAX=0000000000000000 RBX=ffffb67cce2d8000 RCX=0000000000800001 RDX=0000000000000000
RSI=0000000000000000 RDI=ffffba0548a82000 RBP=fffff40dc6a495d0 RSP=fffff40dc6a494d0
R8 =0000000000000000 R9 =0000000000000000 R10=fffff8046881be20 R11=fffff40dc6a494b0
R12=00000000000002f7 R13=0000000000000000 R14=0000000000000387 R15=0000000000000000
RIP=fffff804821b1765 RFL=00040246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 00000000 00409300 DPL=0 DS   [-WA]
DS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
FS =0053 0000000000000000 00033c00 0040f300 DPL=3 DS   [-WA]
GS =002b ffffa68015ec0000 ffffffff 00c0f300 DPL=3 DS   [-WA]
LDT=0000 0000000000000000 00000000 00000000
TR =0040 ffffa68015ece000 00000067 00008b00 DPL=0 TSS64-busy
GDT=     ffffa68015ecffb0 00000057
IDT=     ffffa68015ecd000 00000fff
CR0=80050033 CR2=0000027f2af50e30 CR3=0000000183220000 CR4=00350ef8
DR0=00007ff604d108f0 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000004d01
Code=00 48 8b 01 48 8b 40 58 ff 15 e9 32 c4 ff 8b f0 85 c0 75 47 <8b> 43 08 83 f8 04 72 c3 41 8b d4 c1 e2 10 42 80 bc 3f a8 04 00 00 01 75 7a 41 8b c6 42 2b
2022-10-31T07:13:25.739382Z qemu-system-x86_64: terminating on signal 15 from pid 6224 (/usr/sbin/libvirtd)
2022-10-31 07:14:13.772+0000: shutting down, reason=destroyed

------------------------------------------------------------------------------------

VM XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>5a2e6419-57dc-a0fa-00b8-62324f670400</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='13'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='14'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='15'/>
    <vcpupin vcpu='8' cpuset='4'/>
    <vcpupin vcpu='9' cpuset='16'/>
    <vcpupin vcpu='10' cpuset='5'/>
    <vcpupin vcpu='11' cpuset='17'/>
    <vcpupin vcpu='12' cpuset='6'/>
    <vcpupin vcpu='13' cpuset='18'/>
    <vcpupin vcpu='14' cpuset='7'/>
    <vcpupin vcpu='15' cpuset='19'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/5a2e6419-57dc-a0fa-00b8-62324f670400_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/nvme-SAMSUNG_MZVLB512HBJQ-00000_S4GENX0R144120'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='nec-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:08:6e:b2'/>
      <source bridge='br0'/>
      <model type='vmxnet3'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

------------------------------------------------------------------------------------

Go file config:

root@NAS:/boot/config# cat go
#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
#fix video for VM
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

------------------------------------------------------------------------------------

ACS config print: (screenshot attached)

Thank you!
-
LLDP would be a godsend - please can we have it?
dgaglioni replied to salvdordalisdad's topic in Feature Requests
Please add it to Unraid; a simple toggle button would be enough. -
Thank you for the help @JonathanM! I just watched the Spaceinvader One video about cache pools, and what you suggested works perfectly! I'll deploy it in the following manner:

Array Devices
1 x 12TB Parity (I'll purchase another 12TB HDD)
1 x 12TB Disk 1 (xfs)

Pool Devices
6 x 2TB SSD (btrfs RAID 0)

I'll use Veeam to back up the VMs to the 12TB HDD; if something goes wrong with the RAID 0, I'll still have everything on the 12TB HDD, protected by parity! Thanks!
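(On the RAID 0 pool itself: the pool's data profile can be confirmed, or converted, from the console with standard btrfs tooling. A minimal sketch, assuming the pool is mounted at /mnt/cache; substitute your own pool path:)

# Show how data and metadata chunks are currently allocated
# (single, raid0, raid1, ...).
btrfs filesystem df /mnt/cache

# Convert data to RAID 0 while keeping metadata mirrored as RAID 1;
# this rewrites every allocated chunk, so it can take a while.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache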
-
Hi everyone, I'm really enjoying Unraid and already have the Pro license to create my new NAS; currently I'm testing Unraid on VMware Workstation before I migrate my PC to Unraid. Here is my dilemma: my goal is to have these six SSDs as one drive (due to many VMs) and maybe use the 12TB HDD as a daily backup of my SSD drives. I have some ideas for how to accomplish that, but I would really appreciate some guidance and help from more experienced users. Here are my ideas:

1 - Use AMD software RAID 0 with the 6 SSDs, add the 12TB RAID 0 SSD volume and the 12TB HDD to Unraid as data disks, and find a solution for a daily backup from the RAID 0 SSD data disk to the 12TB HDD data disk. (maybe the easiest?)
2 - Add the six SSDs to the array with one as parity, and find a solution to automatically split the VMs' vdisks into chunks of some size (e.g. 200GB) and span the files across the disks. (seems almost impossible)
3 - Use btrfs RAID 5 with the six SSDs, plus the 12TB HDD, as data disks, and again find a solution for a daily backup from the RAID 5 SSD data disk to the 12TB HDD data disk. (seems impossible)
4 - Buy a hardware controller to do RAID 5 with the six SSDs, add it as a data disk, and again find a solution for a daily backup from the RAID 5 SSD data disk to the 12TB HDD data disk. (the expensive one, but feasible?)

If you have another idea/solution, please let me know; I'm not experienced in storage/backup. (For the daily-backup piece that ideas 1, 3 and 4 share, see the sketch at the end of this post.)

My hardware:
Ryzen 5900X
Asus X570 TUF
6 x SSD
1 x 12TB HDD
64GB RAM

Thanks for any insights!
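A minimal cron-plus-rsync sketch for that recurring "daily backup from the SSDs to the 12TB HDD" requirement. The paths /mnt/cache (SSD pool) and /mnt/disk1 (12TB array disk), the share name domains, and the script location /boot/scripts/daily-vm-backup.sh are all assumptions based on Unraid's usual layout; substitute your own.

#!/bin/bash
# daily-vm-backup.sh (hypothetical name): mirror VM data from the SSD
# pool to the array disk. --delete makes the destination an exact
# mirror, so corruption on the source gets mirrored too; keep dated
# copies instead if you want history. Also note that copying the vdisk
# of a running VM can produce an inconsistent image; stop the VM first,
# or use an in-guest tool (a Veeam agent, etc.) for live backups.
rsync -a --delete /mnt/cache/domains/ /mnt/disk1/backups/domains/

Scheduled with a plain crontab entry (on Unraid, the User Scripts plugin is the usual way to keep such a schedule across reboots):

# Run every day at 03:00.
0 3 * * * /boot/scripts/daily-vm-backup.sh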