Everything posted by JorgeB

  1. The last check was correct; the log doesn't cover the previous one, but the next one should return zero errors.
  2. You can try this: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601
  3. Seems to me like you're possibly generalizing and jumping to conclusions. I've been using 10GbE with various workloads for years in all my servers without issues, as have many other users. Since you never posted diags, what NICs are you using?
  4. Try power cycling the server to see if it comes back.
  5. There was an issue with some releases that had Samba AIO enabled that would prevent the I/O error on a checksum error, but it should be disabled on the current release. What release were you using? Any custom settings?
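     If you want to rule AIO out manually, a minimal sketch (these are standard Samba options, not Unraid-specific; on Unraid they would go under Settings > SMB > Samba extra configuration):

         # Disable Samba asynchronous I/O so checksum read errors surface to the client
         aio read size = 0
         aio write size = 0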
  6. Not easily, because the partition would remain the original size; you'd need to extend that first before extending the filesystem (rough sketch below). Stop the array; if the Docker/VM services are using the cache pool, disable them; unassign all cache devices; start the array to make Unraid "forget" the current cache config; stop the array; disconnect the old failing device; assign the remaining old and new pool devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device); re-enable Docker/VMs if needed; start the array.
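     A rough sketch of the extend-first approach, assuming a single-device btrfs pool on /dev/nvme0n1 mounted at /mnt/cache (device name and mount point are placeholders, and you should have a backup before resizing):

         # Grow partition 1 to use the whole device, then grow the filesystem to match
         parted /dev/nvme0n1 resizepart 1 100%
         btrfs filesystem resize max /mnt/cache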
  7. Yes, if, as recommended, you were not using RAID controllers. Do you know which one is parity? Single or dual parity?
  8. Assuming it was xfs, change the fs to xfs to be able to use the GUI check; alternatively you can use the CLI.
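     A rough CLI equivalent, assuming the array is started in maintenance mode and the disk in question is disk1 (adjust the md device number to your disk; on newer releases the device may be /dev/md1p1; using the mdX device keeps parity in sync):

         # Check only, no changes written
         xfs_repair -n /dev/md1
         # Actual repair if the check reports problems
         xfs_repair /dev/md1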
  9. Look for the ASMedia ASM1064/ASM1164 or the 5-port JMB585.
  10. So you need to get another board or controller and then re-sync parity; you can't rebuild the missing disk with parity disabled.
  11. Not clear from your answer: if you swap cables, does the problem follow the drive or stay with the port/cable?
  12. NVMe device dropped offline:

          Jan 21 10:23:41 Tower kernel: nvme nvme0: I/O 979 QID 6 timeout, aborting
          Jan 21 10:23:41 Tower kernel: nvme nvme0: I/O 980 QID 6 timeout, aborting
          Jan 21 10:23:41 Tower kernel: nvme nvme0: I/O 981 QID 6 timeout, aborting
          Jan 21 10:23:50 Tower kernel: nvme nvme0: I/O 422 QID 5 timeout, aborting
          Jan 21 10:23:50 Tower kernel: nvme nvme0: I/O 423 QID 5 timeout, aborting
          Jan 21 10:24:11 Tower kernel: nvme nvme0: I/O 979 QID 6 timeout, reset controller
          Jan 21 10:24:41 Tower kernel: nvme nvme0: I/O 7 QID 0 timeout, reset controller
          Jan 21 10:25:24 Tower kernel: nvme nvme0: Device not ready; aborting reset
          Jan 21 10:25:24 Tower kernel: nvme nvme0: Abort status: 0x7
          ### [PREVIOUS LINE REPEATED 4 TIMES] ###
          Jan 21 10:25:46 Tower kernel: nvme nvme0: Device not ready; aborting reset
          Jan 21 10:25:46 Tower kernel: nvme nvme0: Removing after probe failure status: -19
          Jan 21 10:26:09 Tower kernel: nvme nvme0: Device not ready; aborting reset
  13. Diags are after rebooting, so not much to see, but since parity is disabled, see if you can get disk3 back: replace/swap cables and try again.
  14. I misunderstood what you said; I thought you wanted to copy the file to Windows, change it there, then copy it back. Operating directly on the sectors works; some time ago I tested in a similar way, using dd on Linux to write zeros on top of the data, and that will also work.
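     A rough sketch of that kind of test, with placeholder device and offset (filefrag -v /path/to/file shows a file's physical extents; this deliberately destroys data, so only use it on a throwaway test file):

         # Zero the file's sectors on the raw device, bypassing the filesystem
         # so the existing checksums are NOT recomputed
         dd if=/dev/zero of=/dev/sdb1 bs=4096 seek=123456 count=1 conv=notrunc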
  15. That won't work: the checksums are created on a block-by-block basis at write time, so if you change the file outside the filesystem and write it back, new checksums will be created for those blocks.
  16. You'll get an I/O error and the copy/read will fail; this is standard for any filesystem that does checksums, zfs is the same. The checksum error will also be logged in the syslog.
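     What that looks like in practice, assuming a corrupted file on a btrfs pool at a placeholder path:

         # The read fails instead of returning bad data
         dd if=/mnt/cache/test.bin of=/dev/null bs=1M
         # dd: error reading '/mnt/cache/test.bin': Input/output error

         # And the syslog records the checksum failure
         dmesg | grep -i 'csum failed'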
  17. Recommend you get one of the recommended LSI controllers, i.e., any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., the 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., or clones like the Dell H200/H310 and IBM M1015; these last ones need to be crossflashed.
  18. By default the system shares are set to NOCOW, which also disables data checksums; any user share will default to COW and data checksums. You can change that for the system share (though it needs to be recreated); it's set on the share settings page, where currently "auto" means yes, so data checksums are enabled.
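     A quick way to check from the CLI, with placeholder paths (the 'C' attribute means NOCOW, i.e., no data checksums):

         # System share: expect the 'C' (NOCOW) attribute by default
         lsattr -d /mnt/cache/system

         # User share folder: no 'C' attribute, so COW and data checksums apply
         lsattr -d /mnt/cache/media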
  19. 4GB DDR2 DIMMs don't work everywhere; it should work with 4x2GB DIMMs.
  20. A 2TB 5400rpm hard drive can't do much more than that; you can test with the DiskSpeed docker.
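     Or a quick sequential read test from the CLI, with a placeholder device (speeds drop toward the inner tracks, so results vary with position):

         # Buffered sequential read speed for the device
         hdparm -t /dev/sdb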
  21. Diags after rebooting aren't much help; you can try this.