Everything posted by golli53

  1. The ssd2 is from running: /usr/sbin/cryptsetup luksOpen /dev/nvme1n1p1 ssd2 --allow-discards --key-file /root/keyfile
  2. Got it - is the error above when trying degraded mode ("wrong fs type, bad option, bad superblock on /dev/mapper/ssd2, missing codepage or helper program, or other error.") due to the bug that you referenced? Mounting as a pool unfortunately hangs forever
  3. Oh shoot - I didn't see this bug. I do think it was created 6.7+. On second reboot, the bad drive (nvme0n1, ssd) re-appeared in unRAID, but can't mount the pool (hangs). Then tried mounting with degraded using the good drive (nvme1n1, ssd2) - see below. Am I toast?
     login as: root
     Linux 4.19.56-Unraid.
     root@Tower:~# /usr/sbin/cryptsetup luksOpen /dev/nvme1n1p1 ssd2 --allow-discards --key-file /root/keyfile
     root@Tower:~# mkdir /mnt/disks/ssd
     root@Tower:~# /sbin/mount -o usebackuproot,ro '/dev/mapper/ssd' '/mnt/disks/ssd'
     ^C
     root@Tower:~# /sbin/mount -o degraded,usebackuproot,ro '/dev/mapper/ssd2' '/mnt/disks/ssd'
     mount: /mnt/disks/ssd: wrong fs type, bad option, bad superblock on /dev/mapper/ssd2, missing codepage or helper program, or other error.
  4. Thanks- I rebooted and the drive no longer shows up under Unassigned Devices. I guess the drive (or mobo controller) failed. What would you recommend as the safest way to recover the data from the other (hopefully) still good drive in the RAID1 btrfs array?
  5. These NVME drives were mounted as unassigned devices in RAID1 and were running my Win10 VMs. Not sure whether this was due to a hardware or filesystem issue. The directory on the drive that contained my VMs (which were running at the time and crashed after the errors) shows up as empty now (all VM images missing). Not sure whether it is safe to disconnect/reconnect the drives or what the best way to proceed is. Would appreciate any advice.
     [edit] I force-stopped my VM and disabled virtualization in the GUI - I don't know if this caused my VM images to go missing on disk. My highest priority is recovering these images.
     Sep 5 14:33:22 Tower kernel: nvme nvme0: I/O 408 QID 9 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 409 QID 9 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 410 QID 9 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 411 QID 9 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 737 QID 7 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 738 QID 7 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 739 QID 7 timeout, aborting
     Sep 5 14:33:23 Tower kernel: nvme nvme0: I/O 198 QID 8 timeout, aborting
     Sep 5 14:33:52 Tower kernel: nvme nvme0: I/O 408 QID 9 timeout, reset controller
     Sep 5 14:34:23 Tower kernel: nvme nvme0: I/O 11 QID 0 timeout, reset controller
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Device not ready; aborting reset
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:23 Tower kernel: nvme nvme0: Abort status: 0x7
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: failed to renew DHCP, rebinding
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (180) from 192.168.10.10
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (180) from 192.168.10.10
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (152) from 192.168.10.10
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (152) from 192.168.10.10
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (132) from 192.168.10.12
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (132) from 192.168.10.12
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (24) from 192.168.10.12
     Sep 5 14:35:42 Tower dhcpcd[2016]: br0: truncated packet (24) from 192.168.10.12
     Sep 5 14:35:54 Tower kernel: nvme nvme0: Device not ready; aborting reset
     Sep 5 14:35:54 Tower kernel: nvme nvme0: Removing after probe failure status: -19
     Sep 5 14:36:24 Tower kernel: nvme nvme0: Device not ready; aborting reset
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1419705920
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1426329664
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1313322472
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 726592960
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1416326976
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1426329600
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1385254080
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1416326896
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1419705792
     Sep 5 14:36:24 Tower kernel: print_req_error: I/O error, dev nvme0n1, sector 1355724792
     Sep 5 14:36:24 Tower kernel: nvme nvme0: failed to set APST feature (-19)
     Sep 5 14:36:24 Tower kernel: BTRFS: error (device dm-14) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
     Sep 5 14:36:24 Tower kernel: BTRFS info (device dm-14): forced readonly
     Sep 5 14:36:24 Tower kernel: BTRFS: error (device dm-14) in __btrfs_free_extent:6803: errno=-5 IO failure
     Sep 5 14:36:24 Tower kernel: BTRFS: error (device dm-14) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
     Sep 5 14:36:24 Tower kernel: BTRFS warning (device dm-14): Skipping commit of aborted transaction.
     Sep 5 14:36:24 Tower kernel: BTRFS: error (device dm-14) in cleanup_transaction:1846: errno=-5 IO failure
     Sep 5 14:36:24 Tower kernel: BTRFS info (device dm-14): delayed_refs has NO entry
     Sep 5 14:36:32 Tower kernel: btrfs_dev_stat_print_on_error: 641 callbacks suppressed
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 268, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 269, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 270, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 271, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 272, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 273, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 274, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 275, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 276, flush 0, corrupt 0, gen 0
     Sep 5 14:36:32 Tower kernel: BTRFS error (device dm-14): bdev /dev/mapper/ssd errs: wr 385, rd 277, flush 0, corrupt 0, gen 0
  6. def toggle_unraid_share(url, setting=True, share_name='share'):
         import re
         from requests_html import HTMLSession
         # scrape the csrf_token from the webgui page, then post the same form data
         # the Shares page submits when you change the SMB export setting
         session = HTMLSession()
         r = session.get(url)
         r.html.render()
         token = re.search(r"csrf_token=([A-Z0-9]+)", r.html.full_text).groups()[0]
         r2 = session.post(f"{url}/update.htm", data={'shareName': share_name,
                                                      'shareExport': 'e' if setting else '-',
                                                      'shareSecurity': 'public',
                                                      'changeShareSecurity': 'Apply',
                                                      'csrf_token': token})
         r2.raise_for_status()
         session.close()
     Here's the script in function form (a usage sketch follows after this post list). Requires Python 3 and requests_html. Should be fairly easy to generalize it to other settings changes (by changing the data argument in the session.post call). URL is your unraid address, incl http(s)://
  7. Couldn't figure this out using the shell, but ended up writing a short Python script to toggle options through HTTP to the webgui. CSRF tokens made it a bit trickier. Can send it out if anyone is also looking for a solution.
  8. Thanks. I ended up replacing it. In general, I wish it would steer user share writes away from a disabled disk to keep parity valid, although I guess it might be impossible to tell whether the first write error that brought the disk down actually affected the data or not.
  9. Getting "no such device" and immediate exit with every version - have tried all the way back to 2.07
  10. One of my drives failed overnight. In the SMART stats, "Current pending sector" and "Offline uncorrectable" seem to be new issues, while the "UDMA CRC error count" occurred a week ago and didn't disable the disk (and I replaced the cable upon seeing it). I plan on doing a SMART extended test. I'm in the process of rebuilding another failed disk, so I'll wait for that to finish. Luckily, I just finished building a second parity a couple of days ago. Does unRAID prioritize against writing to a disabled/emulated disk? And is there a way to see if it was a write or a read that brought it down (or does unRAID only disable on write)? Trying to figure out if I can skip the rebuild.
  11. I'm having issues with my UD no longer mounting my 2-disk BTRFS RAID1 pool ("ssd" = nvme0n1p1, set to automount + "ssd2" = nvme1n1p1). The 2nd disk [mounted to "ssd2"] of the pool (not set to automount) doesn't get luksOpen'ed and therefore is "missing," so the automount of the 1st disk fails.
      Jul 22 12:33:02 Tower unassigned.devices: Mounting 'Auto Mount' Devices...
      Jul 22 12:33:02 Tower unassigned.devices: Adding disk '/dev/mapper/ssd'...
      Jul 22 12:33:03 Tower kernel: BTRFS: device fsid 4939602d-ea6f-4dd0-a535-252580b60aac devid 1 transid 11909816 /dev/dm-13
      Jul 22 12:33:03 Tower unassigned.devices: Mount drive command: /sbin/mount '/dev/mapper/ssd' '/mnt/disks/ssd'
      Jul 22 12:33:03 Tower kernel: BTRFS info (device dm-13): disk space caching is enabled
      Jul 22 12:33:03 Tower kernel: BTRFS info (device dm-13): has skinny extents
      Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): devid 2 uuid f66ef13b-8a93-45da-b73e-0d6a478729b2 is missing
      Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): failed to read chunk tree: -2
      Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): open_ctree failed
      Jul 22 12:33:03 Tower emhttpd: Warning: Use of undefined constant luks - assumed 'luks' (this will throw an Error in a future version of PHP) in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 634
      Jul 22 12:33:03 Tower unassigned.devices: Mount of '/dev/mapper/ssd' failed. Error message: mount: /mnt/disks/ssd: wrong fs type, bad option, bad superblock on /dev/mapper/ssd, missing codepage or helper program, or other error.
      Jul 22 12:33:03 Tower unassigned.devices: Partition 'Samsung_SSD_970_PRO_1TB_S462NF0M300357X' could not be mounted...
      Jul 22 12:33:03 Tower unassigned.devices: Disk with serial 'Samsung_SSD_970_PRO_1TB_S462NF0M311892X', mountpoint 'ssd2' is not set to auto mount and will not be mounted...
      After every array start, I have to execute luksOpen on "ssd2" and mount "ssd" manually (see below). Then after this, unmount/mount works again in the GUI.
      root@Tower:~# /usr/sbin/cryptsetup luksOpen /dev/nvme1n1p1 ssd2 --allow-discards --key-file /root/keyfile
      root@Tower:~# mkdir /mnt/disks/ssd
      root@Tower:~# /sbin/mount '/dev/mapper/ssd' '/mnt/disks/ssd'
      I think this happened after I renamed the mounts and physically changed slots, though it seemed to work after one restart, but then broke after the next restart. Have restarted a few times since and it never works anymore, even if I change the names. If I switch the nvme1n1 to automount instead of nvme0n1, it still results in the same error.
      tower-diagnostics-20190722-1649.zip
  12. Thanks for all your work on this - has been invaluable to my unRAID setup! Would it be possible to add ledmon to control backplane/bay HDD LEDs? Would be great to be able to turn these off at night.
  13. Expected:
      - running VM to be hibernated upon restart using GUI; the hibernation state should be preserved after restart
      - already hibernated VM to preserve hibernation state upon restart using GUI
      - possibly the same upon start and stop array using GUI
      What happens:
      - When hitting restart using GUI, unknown whether VM hibernates. Either way, hibernation state is not preserved after restart
      - When stopping and starting array using GUI, either 1) qemu-system-x86_64 prevents stop or 2) stop is ok but libvirt.img is locked by qemu-system-x86_64 upon restart, preventing libvirt from starting (VM tab won't load)
      Here's the error message when libvirt.img fails to start:
      Tower root: /mnt/disks/ssd/system/libvirt/libvirt.img is in-use, cannot mount
      When I use fuser and ps to find out which processes are locking it, I see the hibernated VM:
      /usr/bin/qemu-system-x86_64 -name guest=win10,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-win10/master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/2f4360e3-a8a0-ca29-a732-4f1191b06173_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 2,sockets=1,cores=1,threads=2 -uuid 2f4360e3-a8a0-ca29-a732-4f1191b06173 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/mnt/disks/ssd/domains/win10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1a:cb:11,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
      (Also, my docker containers were listed as locking it as well for some reason, which I wasn't sure was normal, but I turned off autostart on all dockers to remove that as a source of noise)
      Tried:
      - multiple Win10 (Pro) VMs, including completely clean configs
      - storing libvirt.img and VM domains on /mnt/cache as well as an unassigned device
      - hibernate using 1) host "virsh dompmsuspend <domain> disk", 2) guest start menu, 3) guest PassMark PC Sleep test tool (which confirmed S4 sleep worked)
      Notes:
      - when hibernated by any method, the VM shows up as "paused" in GUI, not "stopped" - not sure if this is the norm
      - when hibernated by any method, virsh shows the domain as "pmsuspended" as expected
      - I can resume from hibernate fine
      - qemu-agent is installed and I confirmed the service is running using PowerShell
      This is a real pain point, as I have multiple VMs running at any given time relied on by multiple users, and it's a real pain to make sure everything is shut off whenever I troubleshoot the array or add/change a disk.
      tower-diagnostics-20190719-1543.zip
  14. One additional piece of info in case it might be helpful for development. After fixing the header, I tried reassigning the disk to cache again after getting rid of the empty 2nd cache slot (I made a backup before!). It mounted fine and all the files were preserved. I think the crazy header overwriting was somehow triggered by having an empty 2nd cache slot (2 other disks were previously used for cache in RAID1 LUKS BTRFS, and I didn't explicitly change the # of slots from 2 to 1 after emptying the disk selections).
  15. Some of the files were created using cp --reflink to save space. When I try copying with cp, this deduplication is not preserved, and --reflink=always fails, reporting "Invalid cross-device link". Both disks are single-disk BTRFS (LUKS). You can see the difference in the size on disk for the domains directory I copied below:
      root@server:/mnt/cache# cp -a --reflink=auto /mnt/disks/ssd/domains /mnt/cache/domains
  16. Did you install qemu guest agent? That fixed this for me before
  17. Thanks - those did not work because it was not even being recognized as LUKS. Thankfully I ended up fixing the issue. For whatever reason, when unRAID failed to mount the drive as cache, it overwrote the first 6 bytes of the LUKS header with zeroes. Fortunately, these bytes of the LUKS header are standard, so I used a hex editor and dd to correct them. Hopefully these commands might help someone else encountering the same situation in the future (see also the header-patching sketch after this post list):
      dd if=/dev/nvme1n1p1 of=/mnt/user/share/broken_header.bin bs=512 count=1
      root@Tower:~# dd if=/dev/nvme1n1p1 bs=48 count=1 | hexdump -C
      1+0 records in
      1+0 records out
      48 bytes copied, 0.0134741 s, 38.0 kB/s
      00000000  00 00 00 00 00 00 00 01  61 65 73 00 00 00 00 00  |........aes.....|
      00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      00000020  00 00 00 00 00 00 00 00  78 74 73 2d 70 6c 61 69  |........xts-plai|
      root@Tower:~# dd if=/dev/sdl1 bs=48 count=1 | hexdump -C
      1+0 records in
      1+0 records out
      48 bytes copied, 0.00186538 s, 25.7 kB/s
      00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
      00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      00000020  00 00 00 00 00 00 00 00  78 74 73 2d 70 6c 61 69  |........xts-plai|
      After fixing with a hex editor, using the bytes from sdl1 above:
      dd if=/mnt/user/share/fixed_header.bin of=/dev/nvme1n1p1 bs=512 count=1
  18. I tried to see if I could move my 1TB Samsung NVME SSD from an unassigned device to cache seamlessly. It was formatted as LUKS-encrypted BTRFS (single partition). When it turned out to be unmountable, I immediately stopped the array. I did NOT hit format, so I didn't think any changes would be made. However, this device is now unmountable (with no filesystem being shown). Is there any way I can recover my data? I have all my critical data on this device (all my VM drives) and was foolish and did not make backups.
      [edit] Before assigning it to cache, I had the cache as a RAID 1 LUKS-encrypted BTRFS pool. So, I assigned it as 1 of the 2 cache devices and left the other slot empty before hitting "start." Here is the relevant result from "blkid":
      /dev/nvme1n1: PTUUID="xxxxxx" PTTYPE="dos"
      /dev/nvme1n1p1: PARTUUID="xxxxxx-01"
  19. I also see many of my docker containers using libvirt.img when using fuser. I'm concerned whether it might be causing some of my Win10 hibernation issues. Can anyone confirm whether "fuser -c /mnt/user/system/libvirt/libvirt.img" should normally show docker pids?
  20. Would it work to assign both containers to the same docker network and specify the address of the SNI Proxy container in a config file or environment variable (i.e. https://sniproxy)? (See the sketch after this post list.) Otherwise, if you really need the same IP, you would need to make a new Dockerfile to combine the two containers into one.
  21. I am trying to turn on/off certain samba shares on a schedule using cron. What's the best command to pass? All I can think of is using sed to edit /etc/samba/smb-shares.conf, but this seems really complicated and would break the webgui options that were set. Is there a command that mimics what the webgui is doing when it turns on/off samba export or changes security options for a given share?
  22. No, but I just got the chance to enable it and do some tests. Now, libvirt simply fails to start up when I hibernate my VM and stop+start my array. Here's the error message:
      Tower root: /mnt/disks/ssd/system/libvirt/libvirt.img is in-use, cannot mount
      When I use fuser and ps to find out which processes are locking it, I get the hibernated VM:
      /usr/bin/qemu-system-x86_64 -name guest=win10,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-win10/master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/2f4360e3-a8a0-ca29-a732-4f1191b06173_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 2,sockets=1,cores=1,threads=2 -uuid 2f4360e3-a8a0-ca29-a732-4f1191b06173 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/mnt/disks/ssd/domains/win10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1a:cb:11,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
      (Also, my docker containers were listed as locking it as well for some reason, which I wasn't sure was normal, but I turned off autostart on all dockers to remove that as a source of noise)
      Tests I've tried (all yield the same error as above):
      - Manually hibernating the VM
      - Leaving it running and letting unRAID hibernate it upon array stop
      - Increasing the time given to VM shutdown/hibernate to 3 minutes under VM settings
      - Rebooting and retrying the above
      This was a clean Win10 install with just qemu-img installed and no pass-through devices. So, it looks like the VM hibernate option breaks libvirt when the VM resides on an Unassigned Device?
      PS Not sure if it matters, but the Unassigned Device for libvirt.img and my VM images is formatted as btrfs-encrypted. Unraid 6.7.2. Unassigned Devices 2019.06.18
  23. I'm not sure that's correct. My VMs currently run from an unassigned NVME drive. I can't stop my array without killing my VMs (the VM settings page becomes disabled and any previously started or paused VMs become stopped upon restart). Does it work differently for you?
  24. Oops I forgot to mention I tried this as well. Same thing - it works but then powers right back up.
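
A possible usage sketch for the share-toggle function in post 6 above. This is illustrative only: it assumes the post 6 function lives in a module named toggle_share on the Python path and that the webgui is reachable at http://tower; the module name, address, share name, and schedule are all placeholders to adjust for your own setup.

    # toggle_share_cron.py - minimal wrapper so the post 6 function can be driven from cron
    import sys
    from toggle_share import toggle_unraid_share  # hypothetical module holding the post 6 function

    if __name__ == "__main__":
        # e.g. schedule "python3 toggle_share_cron.py on" in the morning
        # and "python3 toggle_share_cron.py off" at night
        enable = len(sys.argv) > 1 and sys.argv[1].lower() == "on"
        toggle_unraid_share("http://tower", setting=enable, share_name="share")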
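
For the LUKS header repair in post 17, here is a minimal Python sketch of the hex-editing step, assuming exactly the corruption described there (the first 6 bytes of a LUKS1 header zeroed, everything else intact) and reusing the file names from that post; the dd commands in the post still handle dumping and writing back the 512-byte header.

    # patch_luks_magic.py - restore the standard LUKS1 magic over the zeroed first 6 bytes
    LUKS1_MAGIC = bytes([0x4C, 0x55, 0x4B, 0x53, 0xBA, 0xBE])  # "LUKS" 0xBA 0xBE, as seen on the healthy sdl1 header

    with open("/mnt/user/share/broken_header.bin", "rb") as f:
        header = bytearray(f.read())

    # refuse to touch anything that doesn't match the specific damage described in the post
    assert header[:6] == b"\x00" * 6, "first 6 bytes are not all zero - inspect manually instead"

    header[:6] = LUKS1_MAGIC
    with open("/mnt/user/share/fixed_header.bin", "wb") as f:
        f.write(header)
    # then write it back as in the post:
    # dd if=/mnt/user/share/fixed_header.bin of=/dev/nvme1n1p1 bs=512 count=1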
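
And for the docker-networking suggestion in post 20, a rough sketch of the same-network idea using the Docker SDK for Python. The names here ("proxy-net", "sniproxy", "myapp") are made up for the example, and on Unraid you would normally do the equivalent from the container settings or the docker CLI rather than from Python.

    # shared_network_sketch.py - put both containers on one user-defined network so they
    # can reach each other by name (e.g. https://sniproxy); requires `pip install docker`
    import docker

    client = docker.from_env()

    # create the user-defined bridge network, or reuse it if it already exists
    try:
        net = client.networks.get("proxy-net")
    except docker.errors.NotFound:
        net = client.networks.create("proxy-net", driver="bridge")

    # attach both containers; the app container can then use https://sniproxy as the proxy address
    net.connect(client.containers.get("sniproxy"), aliases=["sniproxy"])
    net.connect(client.containers.get("myapp"))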