Everything posted by JorgeB

  1. Just shut down normally; if more help is needed later, post new diags after array start.
  2. If you think "Pre-fail" means an error, it doesn't; it's just the attribute type (see the sketch below).
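     For reference, a quick way to see the attribute types from the command line (a sketch; /dev/sdX is a placeholder for your drive):

         # List the SMART attributes; the TYPE column shows Pre-fail or Old_age.
         # It classifies the attribute, it does not report a failure; an actual
         # failure would show up under WHEN_FAILED instead.
         smartctl -A /dev/sdX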
  3. The PCIe controller is not being detected; this is not a software issue. You can try a different PCIe slot, or try the controller in a different PC to make sure it's working. Also note that controller uses a SATA port multiplier and is not recommended for Unraid.
  4. Run a single-stream iperf test in both directions to see if it's network related (see the sketch below).
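     A minimal sketch of such a test, assuming iperf3 is installed on both machines and SERVER_IP stands in for the other end:

         # On the other machine, start the server:
         iperf3 -s

         # On this machine, test with a single stream (the default):
         iperf3 -c SERVER_IP

         # Repeat in the reverse direction (-R makes the server send):
         iperf3 -c SERVER_IP -R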
  5. They won't; we'd need the diags from that time. It's not that uncommon for some NVMe devices to drop offline, with or without an adapter; sometimes disabling power save helps with that (see the sketch below). You should always have backups of anything important.
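     One common way to disable the deepest NVMe power-save states is a kernel boot parameter; a sketch, assuming the stock Unraid syslinux boot entry:

         # Add to the "append" line in /boot/syslinux/syslinux.cfg, then reboot:
         nvme_core.default_ps_max_latency_us=0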
  6. You can now re-enable disk3: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself It might be a good idea to replace/swap the cables, just to rule that out if it happens again.
  7. Found in this thread; I was able to reproduce it in safe mode to make sure it's not plugin related. How to reproduce:
     - start with a redundant pool
     - replace one device
     - the replacement will complete successfully and the pool will work normally during/after the replacement
     - stop/start the array and the pool will now be unmountable:

     Apr 10 12:55:48 Test2 emhttpd: shcmd (354): mkdir -p /mnt/cache
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache uuid: 601ca645-abb2-463f-881e-074622a7abbb
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache found: 2
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache extra: 0
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache missing: 1
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache Label: none uuid: 601ca645-abb2-463f-881e-074622a7abbb
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache Total devices 2 FS bytes used 1.00GiB
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache devid 1 size 111.79GiB used 5.03GiB path /dev/sdc1
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache devid 3 size 111.79GiB used 5.03GiB path /dev/sde1
     Apr 10 12:55:48 Test2 emhttpd: /mnt/cache mount error: Invalid pool config

     For some reason it's detecting a missing device despite both being available and detected; after rebooting, the pool mounts normally. I'm marking this urgent not because the bug directly results in data loss, but because I'm afraid some users that run into this will start trying to add/remove devices to fix it and end up nuking the pool. test2-diagnostics-20220410-1255.zip
  8. You can first try with a file recovery util, like UFS Explorer; it might be able to recover most data.
  9. Good, everything looks normal now. Not sure what exactly happened before, but there was a pool replacement bug fixed recently and there might still be some issues left; I'll see if I can reproduce this when I have some time.
  10. The replacement appears to have been successful; not sure why the error. Please reboot and post new diags after array start so I can see the new btrfs scan results.
  11. Impossible to say without the diagnostics.
  12. The drives are "Formatted with type 2 protection". You need to remove this; see below: https://forums.unraid.net/topic/93432-parity-disk-read-errors/?do=findComment&comment=864078
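     The linked post has the details; broadly, the fix is reformatting the drive without protection information using sg_format from sg3_utils. A sketch (this wipes the disk and can take hours; /dev/sdX is a placeholder):

         # Reformat the drive with protection information disabled (fmtpinfo=0).
         # WARNING: this destroys all data on the drive.
         sg_format --format --fmtpinfo=0 /dev/sdX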
  13. Please use the existing docker support thread:
  14. DOCKER_SUBNET_BR0_2="192.168.50.1/24"
      DOCKER_GATEWAY_BR0_2="192.168.50.1"
      DOCKER_ALLOW_ACCESS="yes"
      DOCKER_DHCP_BR0_2="192.168.50.96/30"
      DOCKER_AUTO_BR2="no"

      Delete these lines from docker.cfg on the flash drive (config folder), reboot and try again; if that doesn't help, I suggest deleting/renaming that file and starting over with the docker config.
  15. You can also just leave the pool as it was and start the array without any devices assigned there; there won't be any errors, and next time you just need to re-assign the same devices to that pool and Unraid will import it. You do need to start/stop the array to remove it, though. I didn't delete the entry, just hid it since I thought it was no longer needed, but it's now visible again. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=462135
  16. You can do it by typing:

      btrfs replace status /mnt/pool_name
  17. It's logged as a disk issue, though the disk looks healthy; I recommend running an extended SMART test to confirm (see the sketch below), and if OK, just disable spindown for that disk.
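     A sketch of running the extended test from the command line (it can also be started from the disk's page in the Unraid GUI); /dev/sdX is a placeholder:

         # Start the extended (long) self-test; it runs in the drive's background:
         smartctl -t long /dev/sdX

         # Check progress and, once complete, the result:
         smartctl -a /dev/sdX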
  18. Difficult for me to say; it's usually disk/controller related, and often happens after a power cut. Should be similar. It could be a little slower, but they usually perform about the same as a CMR drive during rebuilds; normal writes are where the performance degradation is usually more obvious.
  19. You can try what I posted above; if that doesn't work, there's not much more you can do.
  20. The type of error (the disk dropped offline); also, SMART looks healthy.
  21. Looks more like a power/connection problem; check/replace the cables. P.S. having an SMR disk for parity can negatively affect performance for all writes to the array.
  22. There shouldn't be much difference, but since the prices should be about the same, might as well go with the faster one.