JorgeB

Everything posted by JorgeB

  1. Can't help any more as I don't use encryption. Do you really need it? If not, I would recommend getting rid of it; it's just one more thing that can cause issues.
  2. It's the last option on the Unraid boot menu, it doesn't work if booting UEFI, only CSM.
  3. There were some writes missing to disk1, likely due to the cable issues. You can try btrfs restore (option #2) to recover the data; the disk will need to be formatted after, since this error is fatal. If you haven't yet, you should first run memtest as mentioned above.
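A minimal sketch of what a btrfs restore recovery might look like from the console; /dev/sdX1 and the destination path are placeholders, not taken from this thread, and the destination must be a different, healthy disk:

```shell
# Dry run first: list what btrfs restore can see without writing anything.
btrfs restore --dry-run -v /dev/sdX1 /dev/null

# Copy everything recoverable to another disk, then format the source.
mkdir -p /mnt/disk2/restore
btrfs restore -v /dev/sdX1 /mnt/disk2/restore
```

These commands need physical access to the failing device, so run them with the array stopped or the disk unassigned.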
  4. Spinning down the drives won't stop the writes; they will just spin back up. You want to stop whatever is writing to the array, or leave it running and accept that the sync will take much longer.
  5. There are writes going on to disks 1 and 2, stop those and the speed should return to normal.
  6. Nothing logged before the crash, this usually points to a hardware issue.
  7. Yes, if btrfs restore works you just restore those files to cache again after it's formatted.
  8. This error means some writes were lost, usually due to bad firmware. It's a fatal error, but usually btrfs restore (option #2) can recover some/most data.
  9. You should post in the UD plugin support thread, rsync doesn't touch the partitions.
  10. Docker image is mounting and Docker appears to start correctly, but then it tries and fails to unmount the docker image:

      Sep 27 21:59:01 DoomSlayer emhttpd: shcmd (72): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 50
      Sep 27 21:59:03 DoomSlayer kernel: BTRFS: device fsid 9a31b0ac-3775-42f8-927b-f3f0a5294490 devid 1 transid 4584976 /dev/loop2 scanned by udevd (5218)
      Sep 27 21:59:03 DoomSlayer kernel: BTRFS info (device loop2): using free space tree
      Sep 27 21:59:03 DoomSlayer kernel: BTRFS info (device loop2): has skinny extents
      Sep 27 21:59:04 DoomSlayer root: Resize '/var/lib/docker' of 'max'
      Sep 27 21:59:04 DoomSlayer emhttpd: shcmd (74): /etc/rc.d/rc.docker start
      Sep 27 21:59:04 DoomSlayer root: starting dockerd ...
      Sep 27 21:59:07 DoomSlayer kernel: Bridge firewalling registered
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: Joining mDNS multicast group on interface hassio.IPv4 with address 172.30.32.1.
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: New relevant interface hassio.IPv4 for mDNS.
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: Registering new address record for 172.30.32.1 on hassio.IPv4.
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: New relevant interface docker0.IPv4 for mDNS.
      Sep 27 21:59:07 DoomSlayer avahi-daemon[4744]: Registering new address record for 172.17.0.1 on docker0.IPv4.
      Sep 27 21:59:23 DoomSlayer emhttpd: shcmd (76): umount /var/lib/docker
      Sep 27 21:59:23 DoomSlayer root: umount: /var/lib/docker: target is busy.
      Sep 27 21:59:23 DoomSlayer emhttpd: shcmd (76): exit status: 32

      No clue what the problem is, someone else might have an idea.
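"target is busy" means some process still has files open under the mount point. As a general sketch (availability of these tools on a given Unraid install is an assumption), you can identify the process like this:

```shell
# List processes with open files on the filesystem mounted at the path
# ("m" treats the path as a mount point, "v" shows PID/user/command):
fuser -vm /var/lib/docker

# Alternative with lsof: list open files on that filesystem.
lsof +f -- /var/lib/docker
```

Killing or stopping whatever shows up should let the unmount succeed.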
  11. This suggests the problem is a plugin or user script. Remove them all to see if you can find the culprit; I would start with disabling any user scripts.
  12. Reboot and post new diags after array start if it still doesn't work.
  13. Don't see any issues logged. One more thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days. If it still crashes it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
  14. You can try repairing the GPT partition with gdisk, but I would first clone the disk with dd and then try that on the clone.
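A hedged sketch of that clone-then-repair sequence; /dev/sdX and /dev/sdY are placeholders for the damaged disk and a spare of at least the same size. Double-check the device names before running anything, as dd overwrites the target without asking:

```shell
# Clone the damaged disk to the spare, continuing past read errors.
dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress

# Attempt the repair on the clone; gdisk's verify command (v) checks the
# tables, and the recovery menu (r) can rebuild the GPT from the backup copy.
gdisk /dev/sdY
```

That way, if the repair attempt makes things worse, the original disk is untouched.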
  15. Disk has pending sectors and failed the SMART test, it needs to be replaced.
  16. Yes, if you're going to replace the controllers soon I would do it after, you can do both at the same time, no extra risk since they are already disabled.
  17. Disks look fine, but you're using SASLP controllers; these have not been recommended for a long time and are known to drop disks without reason. If possible replace them with LSI HBAs.
  18. You can check balance status after clicking on cache in the main page.
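The same information is also available from the console; the /mnt/cache mount point is the usual Unraid default, assumed here:

```shell
# Progress of a running balance on the cache pool:
btrfs balance status /mnt/cache

# Overall allocation of the pool, useful for judging whether a balance is needed:
btrfs filesystem usage /mnt/cache
```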
  19. This isn't a bug, it should be posted in the forum. You're passing through the NVMe cache device to the VM, so when the VM starts that device becomes unavailable to Unraid. Edit the VM XML and remove that.
  20. Please post the complete SMART report or better yet the diagnostics.
  21. It's logged as a disk problem, run an extended SMART test.
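The extended test can be started from the Unraid GUI, or from the console with smartmontools; /dev/sdX is a placeholder for the suspect disk:

```shell
smartctl -t long /dev/sdX   # start the extended (long) self-test
smartctl -c /dev/sdX        # capabilities, including estimated test duration
smartctl -a /dev/sdX        # full report, including self-test results when done
```

The test runs in the drive's own firmware, so the disk must stay spun up until it completes.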
  22. A few sync errors are normal; just mounting and unmounting a filesystem will cause that. So parity was in sync with the emulated disk, not the actual disk. You can run a non-correcting check to confirm, but all should be fine.
  23. Yes, didn't notice they were caused by spin ups.