Everything posted by JorgeB

  1. It is; just remove one of the devices, see here.
  2. It aborted because there was a read error, i.e. a bad sector.
  3. Most likely, but this is getting off-topic; you can start a general support thread, and please include the diags.
  4. Yes, /mnt/user0 has the same content as /mnt/user, except for the cache.
  5. fstrim runs on a mounted filesystem, not on the device, e.g. use fstrim -v /mnt/cache. Edit to add: if you want to do a full trim on a device you can use blkdiscard /dev/sdX, but note that the device will be completely wiped (see the examples after this list).
  6. The symptoms look more like a hardware problem, but you can try running in safe mode without any dockers/VMs for a couple of days to rule out most other issues; if it's not stable as a basic NAS, it's most likely a hardware problem.
  7. IIRC it's not done at mount time, it's done at decryption time, and Unraid already does that; look for the luksOpen command in the syslog, it should include "--allow-discards" (see the check after this list).
  8. I don't use encryption but I'm pretty sure it already does that. Also, mkfs already does a full device trim, with both btrfs and xfs (see the example after this list).
  9. Not that I recall. Can you move the file manually? If not, maybe there will be a more detailed error.
  10. Both times the errors only happened on devices connected to the LSI; it could be the cables, the power, or the controller, though the controller would be my last suspect, unless there are doubts about it being a Chinese fake or overheating. As for disk1, I've seen that a few times, with power but mostly with the SATA plug; sometimes you can reuse the existing cables with the broken part inside. Not ideal, but I don't know of another way to fix it; best would be to replace the disk.
  11. Should be OK; I don't remember reading anything against that, though I doubt you'll notice much performance difference, if any.
  12. I've now confirmed this plugin is interfering with pool device spin down; when possible, please take a look at this: https://forums.unraid.net/bug-reports/prereleases/690-beta22-no-spin-down-of-pool-hdds-r976/?do=findComment&comment=10202
  13. Color me embarrassed: just recently I was reprimanding a user for making a bug report without testing in safe mode first, and I did the same. Pool spin down works correctly in safe mode, so I need to test which plugin is causing the issue; if I had to guess, it's probably the IPMI plugin.
  14. It shouldn't need regular balancing on current kernels, since kernel 4.15 IIRC (a quick way to check is shown after this list).
  15. Yes. Those attributes are not cable related, and likely not an indication of any issue, but if you want, post the diags once the SMART test and rebuild finish.
  16. No, a rebuild is always done in the original filesystem, though you should convert all those reiserfs drives to xfs ASAP; reiserfs has not been recommended since v6 was released.
  17. Apparently I remember correctly; I found this: https://forums.unraid.net/topic/90127-dell-h310lsi-9240-8i-with-8tb-hdd/?do=findComment&comment=836709
  18. It does a rebuild, but only after mounting all the disks; either the array is really stuck or it's still mounting the array disks. Wait a few more hours, then post a new syslog so we can see if it has advanced.
  19. Unfortunately it's incomplete and missing both files that show disk load; does the server have a disk activity LED? IIRC reiserfs can take a long time to mount after an unclean shutdown, but 2 hours seems too long.
  20. Possibly; try to get the full diags by typing "diagnostics" on the console, it will show whether there's any array disk activity.
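
A short sketch of the two trim commands from item 5; the mount point and device name are just examples, adjust them to your system:

    # trim the free space of a mounted filesystem, existing data is kept
    fstrim -v /mnt/cache

    # full device trim, this discards EVERY block, i.e. all data on the
    # device is lost, so triple check the device name first
    blkdiscard /dev/sdX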
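For item 7, two ways to confirm discards are enabled on an encrypted device; the mapping name md1 is just an example, and the exact syslog wording can vary between Unraid versions:

    # the luksOpen line logged at decryption time should include the flag
    grep -i "allow-discards" /var/log/syslog

    # on a running system the active mapping can also be checked directly,
    # the flags line should list "discards"
    cryptsetup status md1 | grep -i flags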
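Regarding the full device trim mkfs does (item 8): both tools discard the whole device by default, and it's the -K/--nodiscard option that skips it, so simply don't pass it; the device name is again an example:

    # both trim (discard) the entire device before creating the filesystem
    mkfs.xfs /dev/sdX1
    mkfs.btrfs /dev/sdX1

    # passing -K (or --nodiscard with btrfs) would skip that trim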
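And for item 14, if you still suspect a balance is needed you can check the allocation first; /mnt/cache is an example mount point, and the 75% usage filter is just a common starting value:

    # compare allocated vs. actually used space
    btrfs filesystem usage /mnt/cache

    # only if allocation is far above usage, a filtered balance reclaims it
    btrfs balance start -dusage=75 /mnt/cache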