AjaxMpls

Everything posted by AjaxMpls

  1. After my last post, I deleted libvirt.img from /mnt/system/libvert and restored it from my backup folder on disk3, and it is now starting. Why is it showing three different paths for the two copies?
     /mnt/user/backups/libvert/libvirt.img
     /mnt/user/system/libvirt/libvirt.img
     /mnt/user0/backups/libvert/libvirt.img
     /mnt/user0/system/libvirt/libvirt.img
     /mnt/disk3/backups/libvert/libvirt.img
     /mnt/disk3/system/libvirt/libvirt.img
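     A check I can run to confirm these are still just two files (one per share) is to list them per physical disk - if I understand it right, /mnt/user and /mnt/user0 are just Unraid's merged user-share views of the same data (user0 excludes the pools), so all three prefixes should resolve to the copies on disk3. A rough check using the paths above:
     root@unraid:~# ls -l /mnt/disk*/backups/libvert/libvirt.img /mnt/disk*/system/libvirt/libvirt.img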
  2. I have not deleted libvirt. I just confirmed it is still where it should be, but the VM page is still blank.
  3. I had upgraded my cache pool drive pair to higher capacity drives. Later on I got some additional SATA ports & cables, so I added the old drives back in for additional storage. Since they were recognized as previously belonging to a cache pool, Unraid barked about an invalid cache pool configuration until I deleted the partitions on the old disks. Then I rebooted and the array started normally. Except my docker containers and VMs did not start. I tried deleting the docker image, and after doing so, my containers started immediately, even before reinstalling them from community apps, which seemed weird. But my VMs are still not loading. I can still see the folders & vdisks under the domains share, so I'm not sure what is missing. I do have backups of libvirt & the XML files, but I'm not sure exactly how to utilize them, or if that is even necessary in this scenario. unraid-diagnostics-20230320-2142.zip
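     To be specific about what I mean by using the backups: would it just be a matter of stopping the VM service under Settings > VM Manager and copying the backed-up image over the live one, roughly like this (the backup path below is just my own layout)?
     root@unraid:~# cp /mnt/user/backups/libvert/libvirt.img /mnt/user/system/libvirt/libvirt.img   # paths from my backups/system shares
     And if only a single VM's definition were missing, I assume its saved XML could be re-registered with "virsh define /path/to/vm.xml" once the VM service is back up.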
  4. Thanks for your help, Jorge. I did not have a backup, so I recreated that VM and have now configured backups using the https://github.com/danioj/unraid-autovmbackup script.
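     For anyone finding this later: it is a plain bash script, so once configured it can be scheduled like any other user script; a cron entry along these lines would run it weekly (the path and schedule here are only examples, not the script's defaults):
     0 3 * * 0 bash /boot/config/scripts/unraid-autovmbackup.sh   # example location and schedule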
  5. OK, that was really impressive. All my docker containers appear to be in working order now, and one of my VMs is working properly. I do have a Windows 10 VM that will not boot now, though. It is bluescreening repeatedly, so I'm guessing that one is a loss. So, some lessons learned on this one: I'll get those sketchy cache disks replaced and keep the appdata folder backed up. What is the best practice for backing up VMs, since snapshots are not supported? I'm fine with shutting the VM down before backing up - just copy the vdisk + libvirt? unraid-diagnostics-20230224-1353.zip
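     To spell out what I have in mind - shut the VM down, copy its vdisk out of the domains share, and save its definition, something like this (VM name, vdisk filename, and destination are placeholders from my own setup)?
     root@unraid:~# virsh shutdown "Windows 10"                                        # VM name is a placeholder
     root@unraid:~# cp "/mnt/user/domains/Windows 10/vdisk1.img" /mnt/user/backups/vms/
     root@unraid:~# virsh dumpxml "Windows 10" > /mnt/user/backups/vms/Windows10.xml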
  6. Does this mean I need to configure all my containers anew after they are recreated?
  7. Still getting a "Docker service failed to start" message after reboot. unraid-diagnostics-20230224-1202.zip
  8. UUID:             40f53594-6f8d-42e0-afaa-a94807c15c5c
     Scrub started:    Fri Feb 24 11:14:51 2023
     Status:           finished
     Duration:         0:17:43
     Total to scrub:   778.31GiB
     Rate:             749.75MiB/s
     Error summary:    verify=13827 csum=1531847
       Corrected:      1545674
       Uncorrectable:  0
       Unverified:     0
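     (For reference, that is btrfs scrub output; something like the following starts a scrub and prints the same summary - the mount point is assumed here to be the cache pool at /mnt/cache:)
     root@unraid:~# btrfs scrub start /mnt/cache     # mount point assumed
     root@unraid:~# btrfs scrub status /mnt/cache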
  9. Posted. I'm having a hard time determining from the logs which drive is throwing the IO errors, as both are referenced. unraid-diagnostics-20230224-1105.zip
  10. Nice! Yes, it did import as a new pool, and I can see my shares again. I tried toggling the docker service off & on, but it's still not starting up. Will I need to reboot again?
  11. root@unraid:~# btrfs-select-super -s 1 /dev/sdg1
      warning, device 5 is missing
      using SB copy 1, bytenr 67108864
      root@unraid:~# btrfs-select-super -s 1 /dev/sdh1
      using SB copy 1, bytenr 67108864
      root@unraid:~#
  12. root@unraid:~# btrfs-select-super -s 1 /dev/sdg
      No valid Btrfs found on /dev/sdg
      ERROR: open ctree failed
      root@unraid:~# btrfs-select-super -s 1 /dev/sdh
      No valid Btrfs found on /dev/sdh
      ERROR: open ctree failed
  13. Thanks, attached. It's sdh & sdg. unraid-diagnostics-20230224-0948.zip
  14. Last night I rebooted again, and this time the parity check did finish. It still showed the array stopped, but after rebooting once more everything looks normal: the array is started and I was able to start the docker service. I do still have the removed cache disks. I had tried mounting them with Unassigned Devices without luck. Any other suggestions to recover the data?
  15. I had a pair of Crucial MX500 SSDs in RAID1 for my cache pool and had noticed the logs filling up with IO errors. Thinking these errors were related to the previously reported BTRFS problems with the Crucial firmware, I planned to replace the cache drives with another brand. In the interim, I applied the update from 6.11.3 to 6.11.5 and rebooted. Upon rebooting, the array refused to start, complaining the cache drives were unmountable. I foolishly tried removing and re-adding the drives to the pool but assigned them to the wrong positions and lost the MBRs - the cache pool was a total loss. In any case, I am now trying to rebuild with my replacement cache drive installed. I precleared it, formatted it, added it to the cache pool, and clicked Start array. The array does not start, though. The parity check begins but the array stays offline. I tried letting the parity check complete, thinking maybe the array would start after the check finished, but the parity check just hangs at 29.1% complete. I let it sit at that level of completion for about an hour, but no further progress was made. I've rebooted again and am still getting the same behavior. The parity check starts but the array does not. I am also getting a message about a stale configuration. I'm not sure where to go from here. unraid-diagnostics-20230223-1858.zip