JorgeB

Moderators
Everything posted by JorgeB

  1. Both will read the complete disk surface, so they do put some stress on the disks, but nothing major if the disks are working as they should.
  2. Please use the existing plugin support thread:
  3. Most likely some corruption is still there, but you can run a scrub to confirm.
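The scrub mentioned in #3 can also be started from the console; a minimal sketch, assuming the btrfs pool is mounted at /mnt/cache (substitute your actual mount point):

```shell
# Start a scrub and run it in the foreground (-B waits until it finishes).
# A scrub reads every block and verifies checksums; data is only rewritten
# if errors are found and a redundant copy exists.
btrfs scrub start -B /mnt/cache

# Show the summary, including any uncorrectable errors found.
btrfs scrub status /mnt/cache
```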
  4. Looks more like a connection/power problem, replace/swap cables to rule them out if it happens again.
  5. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  6. Unfortunately, memtest not finding errors doesn't guarantee that there isn't a problem; only the opposite is true. Data corruption like that is most often caused by RAM, so I would recommend using different RAM, or at least running the current modules at the standard JEDEC settings instead of the overclocked XMP profile.
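One way to check whether the modules are actually running above their JEDEC speed is to read the SMBIOS tables; a minimal sketch, assuming dmidecode is installed and you are root:

```shell
# List the rated speed vs. the speed the RAM is actually configured at.
# A "Configured Memory Speed" above the module's JEDEC "Speed" usually
# means an XMP/overclocked profile is active in the BIOS.
dmidecode -t memory | grep -iE 'configured memory speed|^\s*Speed'
```

If the configured speed exceeds the CPU's officially supported memory speed, dropping back to the JEDEC defaults in the BIOS is the first thing to try.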
  7. Looks like disks are not unmounting, try stopping the array (instead of pressing reboot) to see where it gets stuck.
  8. Parity can't help with this, since it will also be corrupt.
  9. When Unraid does an unclean shutdown it saves the diags in the flash drive, post the latest ones.
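Those saved diagnostics can be located from the console; a sketch, assuming the usual /boot/logs location on the flash drive (verify the path on your system):

```shell
# After an unclean shutdown Unraid saves a diagnostics zip to the flash
# drive; list the newest one so you know which file to post.
ls -lt /boot/logs/*diagnostics*.zip | head -n 1
```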
  10. Both disks are showing data corruption (disk1 also read and write errors):

      Mar 14 13:59:36 Unraid kernel: BTRFS info (device md1): bdev /dev/md1 errs: wr 35, rd 175, flush 0, corrupt 57, gen 0
      ...
      Mar 14 13:59:40 Unraid kernel: BTRFS info (device md2): bdev /dev/md2 errs: wr 0, rd 0, flush 0, corrupt 504, gen 0

      This suggests a hardware problem; start by running memtest, then there are some recovery options here.
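The wr/rd/flush/corrupt counters in those log lines can also be read directly; a sketch, assuming the btrfs disk is mounted at /mnt/disk1 (use your actual mount point):

```shell
# Print the persistent per-device error counters (same numbers the
# kernel logs as wr/rd/flush/corrupt/gen).
btrfs device stats /mnt/disk1

# After fixing the underlying hardware, zero the counters so any new
# errors stand out immediately.
btrfs device stats -z /mnt/disk1
```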
  11. In that case it's obviously corrupt, and don't forget to get rid of that Marvell + PMP controller.
  12. Without the diags I can't be sure, but most likely the rebuilt disk will be corrupt due to the errors on parity. You can still try a filesystem check, but I would only do it after fixing the current issues.
  13. Try this: https://forums.unraid.net/topic/76732-parity-check-running-when-starting-array-every-time/?do=findComment&comment=957569
  14. The problems with parity look more like a connection/power problem, so replace the cables. Disk3 looks like a controller problem; avoid Marvell controllers and SATA port multipliers, especially together, and see here for a list of recommended controllers. The diags don't show how you enabled disk3: did you rebuild, or do a new config?
  15. After you clear the largest disk, you just need to click on it, change the filesystem to xfs, format, and restore the data.
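The restore step can be done from the console; a sketch with hypothetical paths (the source /mnt/disk2/backup/ and destination /mnt/disk1/ are assumptions, substitute your own):

```shell
# Copy the data back to the freshly formatted xfs disk.
# -a preserves permissions, ownership and timestamps; -X keeps extended
# attributes; -v lists files as they are copied.
rsync -avX /mnt/disk2/backup/ /mnt/disk1/
```

Note the trailing slash on the source: rsync then copies the directory's contents rather than the directory itself.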
  16. I'm not seeing any issues with a cache device going offline in the time covered by the syslog. The pool is using the single profile though, and it shows some previous data corruption on one of the devices, so you should run a scrub; there's some more info here. After fixing the pool, reboot, reproduce the problem, and post new diags.
  17. Ryzen with RAM clocked above max supported speed is known to corrupt data in some cases, I would start there: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  18. I would start by disabling parity so you can test each of the array disks.
  19. Unraid only disables as many disks as there are parity devices.
  20. Yes, so if you then add or replace a CMR data disk it won't be penalized by the SMR parity.
  21. CMR is always better if possible, but SMR will work if the disk is mostly used for reads; just expect slower performance during writes, even for large files, and depending on the disks it can be 3 or 4 times slower. If possible, avoid SMR for parity, or writes to any disk will be slow, even to CMR ones.