Everything posted by JorgeB

  1. It does look like it was a disk problem, but since it passed the SMART test it's OK for now. Keep an eye on it: the disk is getting a little long in the tooth and does show some warnings in SMART.
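     If you want to keep an eye on it from the console as well, a quick attribute check looks something like this (a sketch only; /dev/sdX is a placeholder for the actual device):

       # Print the SMART attribute table; on an ageing disk, watch
       # Reallocated_Sector_Ct (5), Current_Pending_Sector (197)
       # and Offline_Uncorrectable (198).
       smartctl -A /dev/sdX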
  2. Please post the diagnostics: Tools -> Diagnostics
  3. Don't understand what you mean by this: the disk is disabled, but it's online and looks healthy. On the other hand, and as suspected, disk4 is really failing. Both disks 3 and 4 look empty, so if there's no data there the best way forward would be to do a new config without disk4 and re-sync parity.
  4. Those are very slow TLC models; you should go with 3D TLC instead. I'm using several Crucial MX500s in my pool and they work fine, though they have a known firmware bug.
  5. With Power Supply Idle Control correctly set you should have no issues leaving global C-states enabled, and it will help with power/heat.
  6. I believe that is the current status; IIRC, Thunderbolt support was added starting with v6.8.1. If you end up getting this, please report back on how it works.
  7. Edit the VM and make sure "Initial Memory" is the same as "Max Memory".
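     If you want to confirm the change took effect, something like this works from the console (a sketch only; "MyVM" is a placeholder, and this assumes the GUI fields map to the usual libvirt <memory>/<currentMemory> elements):

       # "Max Memory" should show up as <memory> and "Initial Memory" as
       # <currentMemory>; both should now carry the same value.
       virsh dumpxml "MyVM" | grep -i memory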
  8. The diags are from after rebooting, so I can't see what happened, but no valid btrfs filesystems are being detected, which suggests one or both devices were wiped. Still, try these options, on both disks.
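     For reference, that kind of recovery usually starts with non-destructive attempts along these lines (a sketch only, not the exact linked steps; /dev/sdX1 and the mount points are placeholders):

       # Try a read-only mount using an older tree root (non-destructive).
       mount -o ro,usebackuproot /dev/sdX1 /mnt/test

       # If mounting still fails, try copying data off the unmountable filesystem.
       btrfs restore /dev/sdX1 /mnt/recovered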
  9. As mentioned, using user shares adds some overhead, though it's not always this pronounced; see this for example. Sometimes enabling Direct I/O (Settings -> Global Share Settings) helps with this.
  10. So, I think you have now re-assigned all disks as they were before this (parity should still be valid) and you want to rebuild disk2. You also started the array in maintenance mode, and luckily for you, since a few releases back Unraid doesn't automatically start a sync/rebuild in maintenance mode, so no harm done. I didn't re-read the entire thread, so if I'm missing something please advise. Assuming the above is correct:
      - Tools -> New Config -> Retain current configuration: All -> Apply
      - Check all assignments and assign any missing disk(s) if needed.
      - Important - after checking the assignments, leave the browser on that page, the "Main" page.
      - Open an SSH session/use the console and type (don't copy/paste directly from the forum, as sometimes it can insert extra characters): mdcmd set invalidslot 2 29
      - Back in the GUI, and without refreshing the page, just start the array; do not check the "parity is already valid" box (the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, as it doesn't account for the invalidslot command, but it won't be, as long as the procedure was done correctly). Disk2 will start rebuilding. The disk should mount immediately (probably not in this case), but if it's unmountable don't format; wait for the rebuild to finish and then run a filesystem check.
  11. Likely the power went down before the rebuild finished; re-enable the disk to try again. https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive
  12. If it keeps happening without an apparent reason (like an unclean shutdown), you might have a hardware problem, such as bad RAM. The docker image still needs to be re-created; whether it's on cache or the array doesn't matter.
  13. Since in the OP you only mentioned one disk, I didn't notice at first that there are problems with two disks. The pending sector is on disk4; that's the one that looks to be failing, and the syslog confirms it was likely a disk problem. Disk3 was already disabled since boot, so we can't see what happened, but that one looks fine. The extended test takes several hours; also start one for disk4, and when both finish post new diags.
  14. Why did you remove the diags? Please repost.
  15. You can use the parity swap procedure; it was created for situations like these.
  16. Yes, that's a different problem, and not one I can help with, hopefully someone else will.
  17. You should post the diagnostics instead of screenshots, but there's a pending sector, so it's likely a disk problem; you can run an extended SMART test to confirm.
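     From the console the extended test can be started like this (a sketch; /dev/sdX is a placeholder, and the same test can also be started from the disk's page in the GUI):

       # Start the extended (long) self-test; it runs in the drive's firmware
       # in the background and takes several hours.
       smartctl -t long /dev/sdX

       # Check progress and, once it finishes, the result.
       smartctl -a /dev/sdX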
  18. Some of the disks fail to initialize:
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:6:0: [sdh] Read Capacity(16) failed: Result: hostbyte=0x00 driverbyte=0x08
      Feb 15 03:39:34 PlexServer kernel: not responding...
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:4:0: [sdf] Read Capacity(16) failed: Result: hostbyte=0x00 driverbyte=0x08
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:4:0: [sdf] Sense Key : 0x2 [current] [descriptor]
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:4:0: [sdf] ASC=0x4 ASCQ=0x2
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:6:0: [sdh] Sense Key : 0x2 [current] [descriptor]
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:7:0: [sdi] Read Capacity(16) failed: Result: hostbyte=0x00 driverbyte=0x08
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:7:0: [sdi] Sense Key : 0x2 [current] [descriptor]
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:7:0: [sdi] ASC=0x4 ASCQ=0x2
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:4:0: [sdf] Read Capacity(10) failed: Result: hostbyte=0x00 driverbyte=0x08
      Feb 15 03:39:34 PlexServer kernel: sd 17:0:6:0: [sdh] ASC=0x4 ASCQ=0x2
      But I can't tell you why; do you have a different SAS controller you could test with?
  19. Disk1 is mounting correctly in the posted diags, but you do have a corrupt docker image; you need to re-create it.
  20. The 860 QVO is QLC and can only sustain 160MB/s writes after filling the small SLC cache; the 860 EVO is a different story and can write much faster than a HDD.