Everything posted by JorgeB

  1. You just re-assign them to the correct positions. If all disks are present and parity was valid, you can check "parity is already valid" before starting the array to avoid the parity sync (note: parity2 requires all devices to be present and in the same positions they were originally; parity1 just requires all devices to be present).
  2. Possibly, if disk4 is really failing. Try power cycling the server and then manually get a SMART report; post the output of: smartctl -x /dev/sdX (replace X with the correct letter, it might change after rebooting).
  3. Looks like it dropped some time ago and the syslog doesn't show the start of the problem. Rebooting should bring it back online; if it does, you should then run a scrub (see the scrub sketch after this list). See here for more details, including how to better monitor the pool.
  4. Diags are empty; reboot and post new ones if there are still issues.
  5. I have a couple of these and these. Yes, they aren't cheap considering you can get a used 8-port LSI for about the same, but I like them when 5 ports are enough because they are fast, stable and low power/heat. There are some cheaper no-name Chinese models that should also work fine, though some of the cheaper ones might be more prone to CRC errors or other issues.
  6. You can use Midnight Commander (mc on the console).
  7. Difficult to say if it will help; some users have bad performance with pools, others don't. Samsung should work fine with the new alignment though.
  8. There are no multi-device XFS pools; you can have single-device XFS "pools", but multi-device pools are only possible with btrfs, or with ZFS via the ZFS plugin, though those won't be available in the GUI.
  9. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  10. Please post the diagnostics: Tools -> Diagnostics
  11. Avoid Marvell controllers; if two ports are enough get an ASMedia/JMB582 two-port controller, and if you need 4 or 5 ports get a five-port JMB585 controller.
  12. Yes, it's a problem with the NVMe device:
     Aug 28 04:36:36 Jefflix kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
     Aug 28 04:36:36 Jefflix kernel: print_req_error: I/O error, dev nvme0n1, sector 585987336
     Aug 28 04:36:36 Jefflix kernel: print_req_error: I/O error, dev nvme0n1, sector 23321408
     Aug 28 04:36:36 Jefflix kernel: print_req_error: I/O error, dev nvme0n1, sector 167618128
     Aug 28 04:36:36 Jefflix kernel: print_req_error: I/O error, dev nvme0n1, sector 320839472
     Aug 28 04:36:36 Jefflix kernel: print_req_error: I/O error, dev nvme0n1, sector 320839488
     Aug 28 04:36:36 Jefflix kernel: nvme 0000:01:00.0: Refused to change power state, currently in D3
     Aug 28 04:36:36 Jefflix kernel: nvme nvme0: Removing after probe failure status: -19
     But it doesn't necessarily mean it's dying; some NVMe devices have issues with power states on Linux. Try this: on the main GUI page click on the flash device, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (top right) and add this to your default boot option, between "append" and "initrd=/bzroot":
     nvme_core.default_ps_max_latency_us=0
     Reboot and see if it makes a difference (a syslinux sketch is included after this list).
  13. Disk14 is unmountable; hopefully that can be fixed with xfs_repair once it's done rebuilding, but please post current diags so we can see if a filesystem is being detected there (see the xfs_repair sketch after this list).
  14. You can just use the existing controllers; you only need to replace the 2 miniSAS cables on the chassis with 8 SATA cables to use all 10 slots.
  15. A scrub checks data integrity, and it will bring the dropped SSD up to date with the other one; you can run it by clicking on cache on the main page, then Scrub (or from the command line, see the scrub sketch after this list).
  16. Missed that; the syslog starts over after every reboot.
  17. No, you need to re-enable the disk. https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive
  18. Re-sync parity; it's a good idea to replace/swap the cables first, to rule them out if it happens again.
  19. You should also run a scrub and make sure all errors were corrected, as mentioned in the linked post.
  20. One of the cache devices (cache1) dropped offline at some point in the past:
     Aug 28 06:35:33 Emrys kernel: BTRFS info (device sdf1): bdev /dev/sdf1 errs: wr 16291, rd 8662, flush 349, corrupt 0, gen 0
     See here for what to do and how to better monitor the pool (a monitoring sketch is also included after this list).
  21. Forgot to mention, CPU is overheating, check/clean cooler.
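A few command-line sketches for the items above. For the scrubs mentioned in items 3, 15 and 20, a minimal sketch, assuming the pool is the default cache pool mounted at /mnt/cache (adjust the path if your pool has a different name):

    btrfs scrub start /mnt/cache     # start a scrub of the mounted pool
    btrfs scrub status /mnt/cache    # check progress and how many errors were found/corrected

The GUI route (Main -> cache -> Scrub) does the same thing.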
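For the NVMe power state workaround in item 12, a sketch of what the default boot entry in the Syslinux Configuration might look like after the edit; the surrounding lines on your flash drive may differ, the only change is the extra parameter between "append" and "initrd=/bzroot":

    label Unraid OS
      menu default
      kernel /bzimage
      append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot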
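For the unmountable disk in item 13, a sketch of a manual filesystem check once the rebuild finishes, assuming the array is started in maintenance mode and disk14 maps to /dev/md14 (the exact device name varies between Unraid versions; the same check can also be run from the GUI on the disk's page):

    xfs_repair -n /dev/md14    # dry run: report problems without changing anything
    xfs_repair /dev/md14       # actual repair, only after reviewing the dry run output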
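For monitoring the pool mentioned in items 3 and 20, a sketch using the btrfs device statistics, again assuming the pool is mounted at /mnt/cache:

    btrfs dev stats /mnt/cache       # per-device write/read/flush/corruption/generation error counters
    btrfs dev stats -z /mnt/cache    # print and then reset the counters, e.g. after fixing a cable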