JorgeB

Moderators
  • Posts: 67,397
  • Joined
  • Last visited
  • Days Won: 705

Everything posted by JorgeB

  1. They can be; CRC errors can escalate into real errors, so the first thing is to fix that.
  2. The docker image can be easily recreated, but you do need the original appdata if you want to keep everything as it was.
  3. The Intel SAS expander is already on the latest firmware; the LSI should be updated. The version you're on isn't known to be especially problematic, but it is quite old.
  4. The cache has dual data profiles, which suggests one of the devices dropped offline at some point. Can you get the diags, using the GUI or by typing diagnostics on the console?
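For reference, the console route is a single command; a sketch assuming a stock Unraid install, where the CLI tool typically writes the zip to the flash drive:

```shell
# Run on the Unraid console; writes an anonymized diagnostics zip,
# typically under /boot/logs on the flash drive
diagnostics
# Show the newest archive so it can be attached to a forum post
ls -t /boot/logs/*.zip | head -1
```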
  5. Devices are tracked by serial number; you just need to replace the controller and power back on.
  6. The main difference between them is that the 9207 supports PCIe 3.0; since your board is PCIe 2.0 they will perform the same. And as mentioned, for normal HDDs the 9211 is more than enough unless you're using a SAS expander, which is not the case here, so unless you're planning some future upgrade I would go with the cheaper 9211.
  7. Yes, same error. Another thing you could try would be running the server in safe mode to rule out any plugin.
  8. Unfortunately the diags are from after rebooting. The disk looks mostly OK, but there are some warnings in SMART; if it happens again, save the diags before rebooting.
  9. I mentioned running out of memory, not memory errors, but these last diags don't show that, and the same error is still there: Jan 29 00:31:46 Tower shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed. I have no idea what this error means and don't remember seeing it before; maybe @limetech knows what this is about?
  10. OK, in that case I suggest swapping both cables (or the slot) with another disk to rule that out, then re-syncing parity.
  11. That's what happens by default; vdisks are created sparse.
  12. You can use: cp --sparse=always /path/to/source /path/to/dest
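As a quick sanity check that the copy really stayed sparse (a sketch with hypothetical file names), compare the file's apparent size to the blocks actually allocated on disk:

```shell
# Create a 100 MiB sparse test file (a hole, no blocks allocated)
truncate -s 100M source.img
# Copy it, forcing holes to be detected and recreated in the destination
cp --sparse=always source.img dest.img
# Apparent size in bytes vs. 512-byte blocks actually allocated;
# for a fully sparse file the block count stays near zero
stat -c 'apparent=%s allocated_blocks=%b' dest.img
```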
  13. This. Also still a good idea to run memtest.
  14. It wouldn't, assuming the problem happens with the array stopped; if not, you'd need to re-assign the drives the same as they were. It still wouldn't affect your data, as long as you don't assign a data drive as parity.
  15. There's no risk in doing a new config, unless you assign a data disk as parity, and it's even safer lately with the keep assignments option. You could do a new config, re-order the data drives as you wish, unassign parity2, and check "parity is already valid" before starting the array, so it's protected from the start; then, after starting the array at least once, re-add parity2 and sync it.
  16. No, I didn't even notice that. I'm saying to try a new config, i.e., another flash drive with stock Unraid, to make sure it's not a config issue.
  17. The pool is full of checksum errors; you need to back up anything important and reformat. There are some recovery options here if needed. So many errors suggest a hardware problem, and btrfs doesn't do well with RAM issues, so it would be a good idea to run memtest. Also make sure the RAM isn't overclocked and respects the max speed for your config.
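For reference, a scrub is the usual way to count (and, with redundant profiles, repair) btrfs checksum errors; a sketch, assuming the pool is mounted at /mnt/cache:

```shell
# Start a scrub in the background on the mounted pool
btrfs scrub start /mnt/cache
# Check progress and the running error counts
btrfs scrub status /mnt/cache
# Per-device error counters (read/write/corruption)
btrfs device stats /mnt/cache
```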
  18. And if you use a new, empty flash drive? It can be with a trial key.
  19. That is just for the cache pool, but if you're using single parity you can move drives around after doing a new config while keeping parity valid.
  20. That's a flash drive problem; run chkdsk, and if there are more issues it could be failing. Also make sure you're using a USB 2.0 port.
  21. Please post the diagnostics: Tools -> Diagnostics