
JorgeB

Moderators

  • Posts: 67,696
  • Days Won: 707

Everything posted by JorgeB

  1. Seagates are also SMR, and even if they are likely to perform a little better than the WDs, they are still limited by the WD parity disks.
  2. Also note that on the DiskSpeed graph the WD SMR drives are not performing normally during reads. SMR doesn't affect reads, so they should perform similarly to the Seagates, but they are much slower, which is common on those models after some time. Disk6 is likely mostly factory clear, or it was wiped; its speed is much higher because it's not reading the actual disk surface, which is also normal on those disks.
  3. Unraid is not RAID: it doesn't stripe writes, it writes to only one disk at a time, and with SMR disks write speeds of 5MB/s are normal; I have some myself.
  4. Make sure Power Supply Idle Control is set to Typical Current Idle in the BIOS, see the pinned FAQ thread for more info.
  5. You're using SMR drives, and the WD20SPZX in particular is known to be a bad performer.
  6. You can resync parity, but as mentioned, with USB drives you're likely to run into more issues in the near future.
  7. Sep 15 18:55:29 Elliot kernel: sd 10:0:3:0: Power-on or device reset occurred
     This is usually a power/connection problem.
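     If it keeps happening, a quick way to confirm is to search the syslog for further reset events; a minimal sketch, assuming Unraid's default log location:
       # count device reset events since boot
       grep -ci 'reset occurred' /var/log/syslog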
  8. Usually not, but if many small files/blocks are deleted regularly it can cause this, since only completely free chunks get removed automatically.
  9. File system is fully allocated, run a balance (sketched below): https://forums.unraid.net/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
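     A minimal sketch of the kind of balance the linked post describes, assuming the pool is mounted at /mnt/cache; the -dusage=75 filter rewrites only data chunks that are less than 75% full, returning the reclaimed chunks to unallocated space:
       # compact partially used data chunks so new allocations can succeed
       btrfs balance start -dusage=75 /mnt/cache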
  10. GUI is showing the correct used space; note that not all tools display correct usage for a btrfs filesystem, e.g. du. You can always confirm with:

      btrfs fi usage -T /mnt/cache

      Pool: cache

      Overall:
          Device size:                 465.76GiB
          Device allocated:            307.02GiB
          Device unallocated:          158.74GiB
          Device missing:                  0.00B
          Used:                        279.46GiB
          Free (estimated):            185.37GiB      (min: 185.37GiB)
          Free (statfs, df):           185.37GiB
          Data ratio:                       1.00
          Metadata ratio:                   1.00
          Global reserve:              447.55MiB      (used: 0.00B)
          Multiple profiles:                  no

                     Data       Metadata   System
      Id  Path       single     single     single    Unallocated
      --  ---------  ---------  ---------  --------  -----------
       1  /dev/sdd1  305.01GiB    2.01GiB   4.00MiB    158.74GiB
      --  ---------  ---------  ---------  --------  -----------
          Total      305.01GiB    2.01GiB   4.00MiB    158.74GiB
          Used       278.38GiB    1.08GiB  64.00KiB

      Used: 279.46GiB = 300GB
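      For per-directory numbers that account for shared extents (which plain du gets wrong on btrfs), a hedged alternative is btrfs's own du subcommand; the path is an assumption:
        # summarize usage of everything on the pool, btrfs-aware
        btrfs fi du -s /mnt/cache/*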
  11. You should really avoid USB devices for the array or pools; there are lots of disconnect errors in the log. Reboot, start the array, and post new diags.
  12. Looks more like a power/connection problem: check/replace cables, and if the emulated disk is still mounting, rebuild on top. You also need to check the filesystem on disk2.
  13. You still need to run without -n (see the sketch below); xfs_repair should always finish, with more or less data loss, and if it doesn't there's a problem with the tool itself.
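      A minimal sketch of the repair run, assuming disk2 with the array started in maintenance mode; the device name is an assumption and depends on the Unraid release (newer versions use /dev/md2p1 instead of /dev/md2):
        # verbose repair pass that actually writes fixes (no -n)
        xfs_repair -v /dev/md2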
  14. An LSI HBA probably won't support optical drives.
  15. btrfs restore doesn't check file checksums, it's a last-resort type of thing, so data integrity might be compromised; if you have backups, best to use those (a sketch of the command follows below).
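      For reference, a minimal sketch of a typical btrfs restore invocation; /dev/sdX1 and the destination directory are assumptions, the source filesystem stays unmounted, and the destination must be on a different, writable filesystem:
        # copy whatever is recoverable from the unmountable filesystem
        btrfs restore -v /dev/sdX1 /mnt/disk1/restored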
  16. What option did you use to restore the data, read-only mount or btrfs restore? Yes.
  17. You need to run without -n or nothing will be done.
  18. Unclear, try to move those files manually this time.
  19. Rebooting might fix it; if it doesn't, post new diags after a reboot and array start. P.S. There are multiple ATA errors in the log, as well as a lot of other spam; check the cables on those disks.
  20. Even if the disks aren't failing, and it really looks like they are, the rebuilt disk will still be corrupt due to the read errors.