frosty_hedgehog

Members
  • Posts: 15



  1. I ended up getting the VM to boot, but the error still shows up in the logs. I've decided I can live with it as long as it isn't impacting the VM's performance.
  2. After working for over a week, I had to restart Unraid and now my VM won't boot; it simply loads to a blank screen after the Unraid logo. The VM uses GPU passthrough with an RTX 3060. XML file and system diagnostics below.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>gaming</name>
  <uuid>42432866-ee29-adfb-c810-9124648faedf</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/42432866-ee29-adfb-c810-9124648faedf_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/adata/steam_vdisk/steam/gaming/vdisk1.img' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <serial>vdisk3</serial>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso' index='2'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.240-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:c8:32:00'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-gaming/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='qemu-vdagent'>
      <source>
        <clipboard copypaste='yes'/>
        <mouse mode='client'/>
      </source>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/isos/vbios/EVGA 3060 12GB vbios.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

This is the error I get:

-msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
qxl_send_events: spice-server bug: guest stopped, ignoring
2024-01-30T02:24:18.635628Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-01-30T02:24:18.635670Z qemu-system-x86_64: vfio_dma_map(0x1498f24eda00, 0x381800000000, 0x10000000, 0x1498d6200000) = -2 (No such file or directory)
2024-01-30T02:24:18.635792Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-01-30T02:24:18.635795Z qemu-system-x86_64: vfio_dma_map(0x1498f24eda00, 0x381810000000, 0x2000000, 0x1498d4200000) = -22 (Invalid argument)

Not sure what else to do; I've tried restarting the server a couple of times and it's still the same. prodesk-diagnostics-20240129-2139.zip
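As a side note on reading these logs: the VFIO failures can be pulled out of a QEMU/libvirt VM log with a quick grep. A minimal sketch, run here against a sample file built from two of the error lines above (on a live system the log would normally sit under /var/log/libvirt/qemu/, which is an assumption about this setup, not something stated in the post):

```shell
# Build a sample log from the error lines above, then count VFIO failures.
LOG=gaming-sample.log
cat > "$LOG" <<'EOF'
2024-01-30T02:24:18.635628Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-01-30T02:24:18.635795Z qemu-system-x86_64: vfio_dma_map(0x1498f24eda00, 0x381810000000, 0x2000000, 0x1498d4200000) = -22 (Invalid argument)
EOF
# -22 is EINVAL; match both the summary line and the raw vfio_dma_map call.
grep -c 'VFIO_MAP_DMA failed\|vfio_dma_map' "$LOG"   # prints 2
```

Watching whether the count grows on every boot attempt helps distinguish a one-off mapping hiccup from a persistent passthrough problem.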
  3. Worked after the data rebuild was done, thank you!
  4. I'm hoping someone can help me understand why my Docker service is failing to start. I had a drive fail and replaced it with a larger drive, so I did a parity swap, which was successful; Unraid is currently rebuilding the data onto the replacement drive. I understand that the system should function as normal during this process, but the Docker service isn't starting. I have attached system diagnostics: prodesk-diagnostics-20240102-1953.zip
  5. I rebuilt my server on Friday with the same config and this issue hasn't happened since. I might need to give it a few days to be sure it's solved, but it's possible that whatever power/connection issue I had got corrected.
  6. This is great! Somehow I never found that link in all my searches for a solution. Thank you!
  7. I currently have one disk missing from my array after I dropped it while rebuilding my server. I was already looking to buy more drives, so I got two 18TB drives; however, my current parity is 14TB. My setup looks like this:

     Parity - 14TB
     Array - 3x 8TB (one missing)

     What I'd like to get to:

     Parity - 18TB
     Array - 2x 8TB, 1x 14TB, 1x 18TB

     Trying to figure out how best to go about this, since my parity is currently emulating the missing disk. This is what I was thinking:

     1. Add one 18TB drive as a second parity.
     2. Allow the dual-parity build (assuming that's what will happen).
     3. After a successful parity build, add the second 18TB drive to replace the missing drive.
     4. After the data is rebuilt, take out the 14TB parity drive and add it back to the array.

     Does this make sense? Or am I missing anything?
  8. I'm unable to get past the login page with the credentials I provided. Is this broken at the moment? The only other thing I changed during install was the backend port, because it was in use by something else.
  9. Thanks for the heads up, I'll take a look at this now! Hopefully it solves, or at least eases, the problem.
  10. I've been dealing with multiple instances of high I/O wait where the GUI freezes and I can't access any files for hours at a time. This happens at random, and last night I had to press the power button on the server to send a system shutdown command (not a long press for a hard shutdown); it took almost an hour for the system to go down. Attaching the diagnostics file here: prodesk-diagnostics-20231122-2147.zip. Any help identifying what the issue might be will be appreciated.
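Not part of the original post, but for anyone hitting similar symptoms: the I/O-wait figure can be read straight from /proc/stat, which keeps working over SSH even when the web GUI is frozen. A rough sketch (field layout per the proc(5) man page; this shows the average since boot, so take two samples a few seconds apart for a current value):

```shell
# The "cpu" line in /proc/stat is: user nice system idle iowait irq softirq ...
# (all values in clock ticks since boot). Field $6 is the iowait counter.
awk '/^cpu /{
  total = 0
  for (i = 2; i <= NF; i++) total += $i
  printf "iowait: %.1f%% of CPU time since boot\n", 100 * $6 / total
}' /proc/stat
```

For a live view, `iostat -x 5` (from the sysstat package) adds per-device utilization and latency, which is usually what identifies the disk that's stalling.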
  11. Thanks for the update and suggestion! Excluding the file from backup helped solve the issue
  12. I'm running into an issue with my backups; I get the error below. The backup files are there, but retention is not checked and the run is marked as failed. Any idea how I can resolve this?

[12.11.2023 01:33:27][❌][tdarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; mnt/cache/appdata/tdarr/logs/Tdarr_Node_Log.txt: Mod time differs; mnt/cache/appdata/tdarr/logs/Tdarr_Node_Log.txt: Size differs

Full log below:

[12.11.2023 01:00:01][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[12.11.2023 01:00:01][ℹ️][Main] Backing up from: /mnt/user/appdata, /mnt/cache/appdata
[12.11.2023 01:00:01][ℹ️][Main] Backing up to: /mnt/user/backups/appdata/ab_20231112_010001
[12.11.2023 01:00:12][ℹ️][Main] Selected containers: Notifiarr, Unraid-Cloudflared-Tunnel, bazarr, binhex-delugevpn, binhex-krusader, binhex-readarr, binhex-sabnzbdvpn, calibre, firefox, jellyfin, netdata, overseerr2, overseerr4k, plex, plex-meta-manager, prowlarr, radarr2, radarr4K, scrutiny, sonarr2, sonarr4K, tautulli, tdarr, tdarr_node
[12.11.2023 01:00:12][ℹ️][Main] Saving container XML files...
[12.11.2023 01:00:12][ℹ️][Main] Method: Stop/Backup/Start
[12.11.2023 01:00:12][ℹ️][bazarr] Stopping bazarr... done! (took 6 seconds)
[12.11.2023 01:00:18][ℹ️][bazarr] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:00:18][ℹ️][bazarr] Calculated volumes to back up: /mnt/cache/appdata/bazarr
[12.11.2023 01:00:18][ℹ️][bazarr] Backing up bazarr...
[12.11.2023 01:00:18][ℹ️][bazarr] Backup created without issues
[12.11.2023 01:00:18][ℹ️][bazarr] Verifying backup...
[12.11.2023 01:00:18][ℹ️][bazarr] Starting bazarr... (try #1) done!
[12.11.2023 01:00:20][ℹ️][binhex-delugevpn] Stopping binhex-delugevpn... done! (took 4 seconds)
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Calculated volumes to back up: /mnt/cache/appdata/binhex-delugevpn
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Backing up binhex-delugevpn...
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Backup created without issues
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Verifying backup...
[12.11.2023 01:00:24][ℹ️][binhex-delugevpn] Starting binhex-delugevpn... (try #1) done!
[12.11.2023 01:00:27][ℹ️][binhex-krusader] Stopping binhex-krusader... done! (took 2 seconds)
[12.11.2023 01:00:29][ℹ️][binhex-krusader] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:00:29][ℹ️][binhex-krusader] Calculated volumes to back up: /mnt/cache/appdata/binhex-krusader
[12.11.2023 01:00:29][ℹ️][binhex-krusader] Backing up binhex-krusader...
[12.11.2023 01:17:41][ℹ️][binhex-krusader] Backup created without issues
[12.11.2023 01:17:41][ℹ️][binhex-krusader] Verifying backup...
[12.11.2023 01:21:59][ℹ️][binhex-krusader] Starting binhex-krusader... (try #1) done!
[12.11.2023 01:22:26][ℹ️][binhex-readarr] Stopping binhex-readarr... done! (took 2 seconds)
[12.11.2023 01:22:28][ℹ️][binhex-readarr] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:22:28][ℹ️][binhex-readarr] Calculated volumes to back up: /mnt/cache/appdata/binhex-readarr
[12.11.2023 01:22:28][ℹ️][binhex-readarr] Backing up binhex-readarr...
[12.11.2023 01:22:29][ℹ️][binhex-readarr] Backup created without issues
[12.11.2023 01:22:29][ℹ️][binhex-readarr] Verifying backup...
[12.11.2023 01:22:29][ℹ️][binhex-readarr] Starting binhex-readarr... (try #1) done!
[12.11.2023 01:22:32][ℹ️][binhex-sabnzbdvpn] Stopping binhex-sabnzbdvpn... done! (took 1 seconds)
[12.11.2023 01:22:33][ℹ️][binhex-sabnzbdvpn] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:22:33][ℹ️][binhex-sabnzbdvpn] Calculated volumes to back up: /mnt/cache/appdata/binhex-sabnzbdvpn
[12.11.2023 01:22:33][ℹ️][binhex-sabnzbdvpn] Backing up binhex-sabnzbdvpn...
[12.11.2023 01:22:34][ℹ️][binhex-sabnzbdvpn] Backup created without issues
[12.11.2023 01:22:34][ℹ️][binhex-sabnzbdvpn] Verifying backup...
[12.11.2023 01:22:34][ℹ️][binhex-sabnzbdvpn] Starting binhex-sabnzbdvpn... (try #1) done!
[12.11.2023 01:22:37][ℹ️][calibre] Stopping calibre... done! (took 6 seconds)
[12.11.2023 01:22:43][ℹ️][calibre] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:22:43][ℹ️][calibre] Calculated volumes to back up: /mnt/cache/appdata/calibre
[12.11.2023 01:22:43][ℹ️][calibre] Backing up calibre...
[12.11.2023 01:22:43][ℹ️][calibre] Backup created without issues
[12.11.2023 01:22:43][ℹ️][calibre] Verifying backup...
[12.11.2023 01:22:43][ℹ️][calibre] Starting calibre... (try #1) done!
[12.11.2023 01:22:46][ℹ️][firefox] Stopping firefox... done! (took 6 seconds)
[12.11.2023 01:22:52][ℹ️][firefox] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:22:52][ℹ️][firefox] Calculated volumes to back up: /mnt/cache/appdata/firefox
[12.11.2023 01:22:52][ℹ️][firefox] Backing up firefox...
[12.11.2023 01:23:26][ℹ️][firefox] Backup created without issues
[12.11.2023 01:23:26][ℹ️][firefox] Verifying backup...
[12.11.2023 01:23:42][ℹ️][firefox] Starting firefox... (try #1) done!
[12.11.2023 01:23:44][ℹ️][jellyfin] No stopping needed for jellyfin: Not started!
[12.11.2023 01:23:44][ℹ️][jellyfin] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:23:44][ℹ️][jellyfin] Calculated volumes to back up: /mnt/user/appdata/jellyfin
[12.11.2023 01:23:44][ℹ️][jellyfin] Backing up jellyfin...
[12.11.2023 01:24:05][ℹ️][jellyfin] Backup created without issues
[12.11.2023 01:24:05][ℹ️][jellyfin] Verifying backup...
[12.11.2023 01:24:11][ℹ️][jellyfin] jellyfin is being ignored, because it was not started before (or should not be started).
[12.11.2023 01:24:11][ℹ️][netdata] No stopping needed for netdata: Not started!
[12.11.2023 01:24:11][ℹ️][netdata] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:24:11][ℹ️][netdata] netdata does not have any volume to back up! Skipping
[12.11.2023 01:24:11][ℹ️][netdata] netdata is being ignored, because it was not started before (or should not be started).
[12.11.2023 01:24:11][ℹ️][Notifiarr] Stopping Notifiarr... done! (took 2 seconds)
[12.11.2023 01:24:13][ℹ️][Notifiarr] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:24:13][ℹ️][Notifiarr] Calculated volumes to back up: /mnt/cache/appdata/Notifiarr
[12.11.2023 01:24:13][ℹ️][Notifiarr] Backing up Notifiarr...
[12.11.2023 01:24:13][ℹ️][Notifiarr] Backup created without issues
[12.11.2023 01:24:13][ℹ️][Notifiarr] Verifying backup...
[12.11.2023 01:24:13][ℹ️][Notifiarr] Starting Notifiarr... (try #1) done!
[12.11.2023 01:24:17][ℹ️][overseerr2] Stopping overseerr2... done! (took 1 seconds)
[12.11.2023 01:24:18][ℹ️][overseerr2] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:24:18][ℹ️][overseerr2] Calculated volumes to back up: /mnt/cache/appdata/binhex-overseerr
[12.11.2023 01:24:18][ℹ️][overseerr2] Backing up overseerr2...
[12.11.2023 01:24:18][ℹ️][overseerr2] Backup created without issues
[12.11.2023 01:24:18][ℹ️][overseerr2] Verifying backup...
[12.11.2023 01:24:18][ℹ️][overseerr2] Starting overseerr2... (try #1) done!
[12.11.2023 01:24:21][ℹ️][overseerr4k] Stopping overseerr4k... done! (took 5 seconds)
[12.11.2023 01:24:26][ℹ️][overseerr4k] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:24:26][ℹ️][overseerr4k] Calculated volumes to back up: /mnt/cache/appdata/overseerr
[12.11.2023 01:24:26][ℹ️][overseerr4k] Backing up overseerr4k...
[12.11.2023 01:24:26][ℹ️][overseerr4k] Backup created without issues
[12.11.2023 01:24:26][ℹ️][overseerr4k] Verifying backup...
[12.11.2023 01:24:26][ℹ️][overseerr4k] Starting overseerr4k... (try #1) done!
[12.11.2023 01:24:29][ℹ️][plex] Stopping plex... done! (took 5 seconds)
[12.11.2023 01:24:34][ℹ️][plex] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:24:34][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex
[12.11.2023 01:24:34][ℹ️][plex] Backing up plex...
[12.11.2023 01:29:49][ℹ️][plex] Backup created without issues
[12.11.2023 01:29:49][ℹ️][plex] Verifying backup...
[12.11.2023 01:31:54][ℹ️][plex] Starting plex... (try #1) done!
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] No stopping needed for plex-meta-manager: Not started!
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] Calculated volumes to back up: /mnt/cache/appdata/plex-meta-manager
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] Backing up plex-meta-manager...
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] Backup created without issues
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] Verifying backup...
[12.11.2023 01:31:57][ℹ️][plex-meta-manager] plex-meta-manager is being ignored, because it was not started before (or should not be started).
[12.11.2023 01:31:57][ℹ️][prowlarr] Stopping prowlarr... done! (took 6 seconds)
[12.11.2023 01:32:03][ℹ️][prowlarr] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:32:03][ℹ️][prowlarr] Calculated volumes to back up: /mnt/cache/appdata/prowlarr
[12.11.2023 01:32:03][ℹ️][prowlarr] Backing up prowlarr...
[12.11.2023 01:32:05][ℹ️][prowlarr] Backup created without issues
[12.11.2023 01:32:05][ℹ️][prowlarr] Verifying backup...
[12.11.2023 01:32:05][ℹ️][prowlarr] Starting prowlarr... (try #1) done!
[12.11.2023 01:32:08][ℹ️][radarr2] Stopping radarr2... done! (took 5 seconds)
[12.11.2023 01:32:13][ℹ️][radarr2] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:32:13][ℹ️][radarr2] Calculated volumes to back up: /mnt/cache/appdata/radarr2
[12.11.2023 01:32:13][ℹ️][radarr2] Backing up radarr2...
[12.11.2023 01:32:14][ℹ️][radarr2] Backup created without issues
[12.11.2023 01:32:14][ℹ️][radarr2] Verifying backup...
[12.11.2023 01:32:15][ℹ️][radarr2] Starting radarr2... (try #1) done!
[12.11.2023 01:32:18][ℹ️][radarr4K] Stopping radarr4K... done! (took 4 seconds)
[12.11.2023 01:32:22][ℹ️][radarr4K] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:32:22][ℹ️][radarr4K] Calculated volumes to back up: /mnt/cache/appdata/radarr
[12.11.2023 01:32:22][ℹ️][radarr4K] Backing up radarr4K...
[12.11.2023 01:32:30][ℹ️][radarr4K] Backup created without issues
[12.11.2023 01:32:30][ℹ️][radarr4K] Verifying backup...
[12.11.2023 01:32:32][ℹ️][radarr4K] Starting radarr4K... (try #1) done!
[12.11.2023 01:32:34][ℹ️][scrutiny] Stopping scrutiny... done! (took 10 seconds)
[12.11.2023 01:32:44][ℹ️][scrutiny] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:32:44][ℹ️][scrutiny] scrutiny does not have any volume to back up! Skipping
[12.11.2023 01:32:44][ℹ️][scrutiny] Starting scrutiny... (try #1) done!
[12.11.2023 01:32:47][ℹ️][sonarr2] Stopping sonarr2... done! (took 4 seconds)
[12.11.2023 01:32:51][ℹ️][sonarr2] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:32:51][ℹ️][sonarr2] Calculated volumes to back up: /mnt/cache/appdata/sonarr2
[12.11.2023 01:32:51][ℹ️][sonarr2] Backing up sonarr2...
[12.11.2023 01:32:52][ℹ️][sonarr2] Backup created without issues
[12.11.2023 01:32:52][ℹ️][sonarr2] Verifying backup...
[12.11.2023 01:32:52][ℹ️][sonarr2] Starting sonarr2... (try #1) done!
[12.11.2023 01:32:55][ℹ️][sonarr4K] Stopping sonarr4K... done! (took 5 seconds)
[12.11.2023 01:33:00][ℹ️][sonarr4K] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:33:00][ℹ️][sonarr4K] Calculated volumes to back up: /mnt/cache/appdata/sonarr
[12.11.2023 01:33:00][ℹ️][sonarr4K] Backing up sonarr4K...
[12.11.2023 01:33:02][ℹ️][sonarr4K] Backup created without issues
[12.11.2023 01:33:02][ℹ️][sonarr4K] Verifying backup...
[12.11.2023 01:33:02][ℹ️][sonarr4K] Starting sonarr4K... (try #1) done!
[12.11.2023 01:33:05][ℹ️][tautulli] Stopping tautulli... done! (took 5 seconds)
[12.11.2023 01:33:10][ℹ️][tautulli] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:33:10][ℹ️][tautulli] Calculated volumes to back up: /mnt/cache/appdata/tautulli
[12.11.2023 01:33:10][ℹ️][tautulli] Backing up tautulli...
[12.11.2023 01:33:11][ℹ️][tautulli] Backup created without issues
[12.11.2023 01:33:11][ℹ️][tautulli] Verifying backup...
[12.11.2023 01:33:11][ℹ️][tautulli] Starting tautulli... (try #1) done!
[12.11.2023 01:33:14][ℹ️][tdarr] Stopping tdarr... done! (took 5 seconds)
[12.11.2023 01:33:19][ℹ️][tdarr] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:33:19][ℹ️][tdarr] Calculated volumes to back up: /mnt/cache/appdata/tdarr/logs, /mnt/cache/appdata/tdarr/server, /mnt/cache/appdata/tdarr/configs
[12.11.2023 01:33:19][ℹ️][tdarr] Backing up tdarr...
[12.11.2023 01:33:26][ℹ️][tdarr] Backup created without issues
[12.11.2023 01:33:26][ℹ️][tdarr] Verifying backup...
[12.11.2023 01:33:27][❌][tdarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; mnt/cache/appdata/tdarr/logs/Tdarr_Node_Log.txt: Mod time differs; mnt/cache/appdata/tdarr/logs/Tdarr_Node_Log.txt: Size differs
[12.11.2023 01:33:28][ℹ️][tdarr] Starting tdarr... (try #1) done!
[12.11.2023 01:33:30][ℹ️][tdarr_node] Stopping tdarr_node... done! (took 5 seconds)
[12.11.2023 01:33:35][ℹ️][tdarr_node] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:33:35][ℹ️][tdarr_node] Calculated volumes to back up: /mnt/cache/appdata/tdarr/logs, /mnt/cache/appdata/tdarr/configs
[12.11.2023 01:33:35][ℹ️][tdarr_node] Backing up tdarr_node...
[12.11.2023 01:33:35][ℹ️][tdarr_node] Backup created without issues
[12.11.2023 01:33:35][ℹ️][tdarr_node] Verifying backup...
[12.11.2023 01:33:35][ℹ️][tdarr_node] Starting tdarr_node... (try #1) done!
[12.11.2023 01:33:38][ℹ️][Unraid-Cloudflared-Tunnel] Stopping Unraid-Cloudflared-Tunnel... done! (took 10 seconds)
[12.11.2023 01:33:48][ℹ️][Unraid-Cloudflared-Tunnel] Should NOT backup external volumes, sanitizing them...
[12.11.2023 01:33:48][ℹ️][Unraid-Cloudflared-Tunnel] Unraid-Cloudflared-Tunnel does not have any volume to back up! Skipping
[12.11.2023 01:33:48][ℹ️][Unraid-Cloudflared-Tunnel] Starting Unraid-Cloudflared-Tunnel... (try #1) done!
[12.11.2023 01:33:51][ℹ️][Main] Starting Docker auto-update check...
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'bazarr' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'binhex-delugevpn' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'binhex-krusader' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'binhex-readarr' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'binhex-sabnzbdvpn' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'calibre' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'firefox' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'jellyfin' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'netdata' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'Notifiarr' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'overseerr2' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'overseerr4k' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'plex' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'plex-meta-manager' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'prowlarr' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'radarr2' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'radarr4K' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'scrutiny' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'sonarr2' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'sonarr4K' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'tautulli' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'tdarr' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'tdarr_node' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Auto-Update for 'Unraid-Cloudflared-Tunnel' is enabled but no update is available.
[12.11.2023 01:34:04][ℹ️][Main] Docker update check finished!
[12.11.2023 01:34:04][⚠️][Main] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.
[12.11.2023 01:34:04][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[12.11.2023 01:34:04][ℹ️][Main] ❤️
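The failure above is reproducible with plain GNU tar, since verification of this kind boils down to `tar --diff` against the filesystem: any file that a still-running process writes to between archiving and verifying gets flagged. A self-contained sketch with throwaway paths (the directory and file names are made up, not the plugin's):

```shell
# Archive a directory, then change a file inside it before verifying,
# mimicking a container writing to its log mid-backup.
mkdir -p demo/logs
echo "line 1" > demo/logs/node.log
tar -cf demo.tar demo
echo "line 2" >> demo/logs/node.log
# --diff compares archive members against the filesystem and exits
# non-zero when anything differs (e.g. "Size differs").
tar --diff -f demo.tar || echo "tar verification failed (expected here)"
```

This is why excluding a continuously written log file from the backup set (or stopping whatever writes it) makes the verify step pass.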
  13. You are correct; it works like normal on a single device, and ZFS Master shows it as degraded. Thanks for your help!
  14. I have a ZFS cache pool with two mirrored drives and I'm trying to convert it back to a single drive while keeping my cache files on that drive. Any guide on how I can go about this? Do I just change the pool size to a single slot?
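For context on what this means at the ZFS level (not from the thread): detaching one side of a mirror leaves a single-device vdev with all data intact, which is the operation a "shrink to one slot" ultimately performs. The pool and device names below are hypothetical, and the commands are only echoed rather than executed, since the right names differ per system and, on Unraid, pool changes should normally go through the GUI:

```shell
# Hypothetical names: pool "cache", device to remove /dev/sdc1.
POOL=cache
REMOVE=/dev/sdc1
# zpool detach removes one side of a two-way mirror; the surviving device
# keeps all the data. Echoed only -- do not run against a real pool blindly.
echo "zpool detach $POOL $REMOVE"
echo "zpool status $POOL"
```

`zpool status` afterwards should show the vdev as a plain disk rather than a mirror.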
  15. I have a cache pool with two 512GB mirrored devices; however, one is now failing with increasing pending sectors. I decided to replace it with a 500GB SSD, but I'm now getting the error "cache - replacement device too small". My initial assumption was that Unraid defaults to the smallest drive size — is this wrong? The cache drives are ZFS:

      Cache 1 - 512GB, failing, attempting to replace with 500GB
      Cache 2 - 512GB, working drive

      Any ideas on how I can resolve this? Or do I need to make sure I get matching sizes?
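On the size mismatch itself: ZFS refuses a replacement smaller than the device being replaced, and a nominal "500GB" SSD really does hold fewer bytes than a "512GB" one, regardless of how full the pool is. A back-of-the-envelope check with nominal decimal capacities (real usable sizes vary slightly per model; `blockdev --getsize64 /dev/sdX` reports the exact byte count of a given drive):

```shell
# Nominal capacities in bytes; the replacement must be >= the old device.
old_bytes=$((512 * 1000 * 1000 * 1000))   # failing 512GB device
new_bytes=$((500 * 1000 * 1000 * 1000))   # proposed 500GB replacement
if [ "$new_bytes" -lt "$old_bytes" ]; then
  echo "replacement device too small: short by $((old_bytes - new_bytes)) bytes"
fi
```

So the practical options are a replacement at least as large as the remaining 512GB mirror member, or rebuilding the pool at the smaller size and restoring the cache contents onto it.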