Everything posted by JorgeB

  1. The errors are logged as an actual disk problem; you should run an extended SMART test.
  2. It started rebuilding disk2, but then there were errors on disk1 and then on disk2. The disks themselves look fine, and multiple disk errors suggest a power, controller or connection problem.
  3. The other way is also not risky if the array is set to not autostart and you unassign that device before starting the array.
  4. Yes, if you stop the array and refresh Main that device will likely be unassigned; if so, you just need to start the array again, or stop, power down and physically remove the device, then start the array.
  5. The cache device dropped offline; this is usually a power/connection problem, but yes, that link is correct for removing a device (after it comes back online).
  6. And if you now start the array, is that disk still unmountable?
  7. Cache errors don't appear on the GUI: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
  8. Recent xfs filesystems always use a few GB for metadata, etc.; the larger the disk, the larger the usage. You can see the same on disk4.
  9. It's an elusive issue and not easy to reproduce, but the kind of dockers that directly manipulate files seem to be the ones more likely to cause this problem. Unless you can use disk shares for that (which would limit it to one disk or pool), I don't know of much else you can do.
  10. Run a filesystem check on disk1, but before that replace its cables; there are several ATA errors.
  11. Do you know why you have this in the go file?
      ### Fix plugins
      cp -r /usr/local/emhttp/plugins/webGui/phaze.page /usr/local/emhttp/plugins/dynamix
      If it's unrelated, try redoing your flash drive: back up the current one, redo it, restore only super.dat (disk assignments) and the key, and if it works reconfigure the server or restore the old config part by part until you find the problem (see the flash-restore sketch after this list).
  12. With the array stopped use wipefs on both cache devices, first the partition then the device (all data there will be lost):
      wipefs -a /dev/sdX1
      wipefs -a /dev/sdX
      Start the array and there will be an option to format the cache.
  13. What options did you use to run the filesystem check? Make sure you run it without -n or nothing will be done; if you did run it without -n, post the output (see the xfs_repair sketch after this list). Also note that Marvell controllers and SATA port multipliers are not recommended.
  14. Delete and recreate the docker image, but if it keeps getting corrupted out of the blue you likely have a hardware problem, like bad RAM.
  15. That's just the syslog, but if all disks look fine this seems like a good way to go: You still need to do a new config, then sync the new parity.
  16. No, it will show current link speed.
  17. Copy super.dat from the existing flash drive; it's in the config folder (see the super.dat sketch after this list).
  18. Feb 16 15:44:29 iT3640-DURS kernel: ahci 0000:00:17.0: Found 1 remapped NVMe devices.
      Feb 16 15:44:29 iT3640-DURS kernel: ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
  19. First upgrade parity, then the disk.
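
Flash-restore sketch for item 11, a rough outline only: it assumes a backup of the old flash was copied to /mnt/user/flashbackup (hypothetical path) before the drive was recreated with the USB creator, and that the new flash is mounted at /boot as usual:

      # after recreating the flash, restore only the disk assignments and the key
      cp /mnt/user/flashbackup/config/super.dat /boot/config/
      cp /mnt/user/flashbackup/config/*.key /boot/config/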
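xfs_repair sketch for items 10 and 13, assuming the disk is xfs and the array is started in maintenance mode; the device name here is an assumption, use the md device that matches your disk number (older releases use /dev/md1, newer ones /dev/md1p1):

      # read-only check first, -n means no changes are made
      xfs_repair -n /dev/md1
      # if problems are reported, run it again without -n to actually repair
      xfs_repair /dev/md1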
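super.dat sketch for item 17, assuming the old flash is still mounted at /boot and the new flash is mounted at /mnt/newflash (hypothetical mount point):

      # copy the disk assignments to the new flash drive
      cp /boot/config/super.dat /mnt/newflash/config/super.dat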