
JorgeB (Moderators)
  • Posts: 67,771
  • Days Won: 708

Everything posted by JorgeB

  1. This is usually a flash drive problem; redoing the flash drive should fix it.
  2. I would expect so, but I'll test once rc4 is available. This should be better for performance, no? Though maybe there wouldn't be much of a difference... If md used 512/4096 for the I/O limits and 512/4096 or 4096/4096 for the I/O hints, maybe this wouldn't happen? As I understand it, mkfs.xfs uses the I/O hint values to optimize the filesystem layout, so the md driver showing 4096/131072 for those could cause issues. Possibly best to just go back to how it was, but I don't mind keeping the disk empty for a few days to test if you decide it's worth it (a quick way to check the reported values is sketched below).
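      For reference, a way to see what the kernel reports for those limits/hints; the device names md1 and sdb are placeholders for the array device and the underlying disk, adjust for your system:

          # LOG-SEC/PHY-SEC are the I/O limits, MIN-IO/OPT-IO the I/O hints mkfs.xfs reads
          lsblk -t /dev/md1 /dev/sdb

          # Same values straight from sysfs for the md device
          cat /sys/block/md1/queue/logical_block_size \
              /sys/block/md1/queue/physical_block_size \
              /sys/block/md1/queue/minimum_io_size \
              /sys/block/md1/queue/optimal_io_size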
  3. Nothing obvious, but I do see some data corruption detected by btrfs. You should run a scrub and delete/replace any corrupt files; they will be listed in the syslog after the scrub (commands sketched below). This corruption can be the result of a hardware issue and may be related to the crashing. You should also update to rc3.
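      A minimal sketch, assuming the pool is mounted at /mnt/cache (adjust the path):

          # Start the scrub, then check its progress and result
          btrfs scrub start /mnt/cache
          btrfs scrub status /mnt/cache

          # Corrupt files are reported by the kernel during the scrub;
          # exact wording varies, so grep broadly
          grep -i btrfs /var/log/syslog | grep -iE 'checksum|corrupt|fixup'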
  4. Definitely a possibility, especially if the noises are coming from multiple drives. If it's just the one logged, it could just be a power/connection issue with that drive.
  5. The disk looks fine, so rebuild on top of it, ideally not using a USB connection. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  6. If rc1 was stable and it no longer is, that suggests a hardware issue or some other config problem. You can try booting the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes it's likely a hardware problem; if it doesn't, start turning the other services back on one by one.
  7. What I meant was whether it was stable on rc1 before, since you rolled back from rc2.
  8. Run a correcting check, then a non-correcting one, all without rebooting. If the second one still finds errors, post new diags.
  9. You just need to change slot, nothing else.
  10. Did some tests with a 12TB disk and I think I found the issue, or at least what's causing it. Note that large disks formatted with v6.9.x or below don't have this issue after upgrading to v6.10; the problem is with large disks formatted in the array with v6.10. It looks like it's caused by the MD driver showing the disk as:

          Sector size (logical/physical): 512 bytes / 512 bytes
          I/O size (minimum/optimal): 4096 bytes / 131072 bytes

      vs v6.9:

          Sector size (logical/physical): 512 bytes / 512 bytes
          I/O size (minimum/optimal): 512 bytes / 512 bytes

      Actual disk values:

          Sector size (logical/physical): 512 bytes / 4096 bytes
          I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      With v6.9.x and earlier the physical sector size also shows as 512 and there's no problem, but the I/O sizes are also 512, unlike v6.10 where the I/O sizes look wrong; that difference is likely the reason disks formatted with v6.9 don't get errors on v6.10. Output of xfs_growfs below; note the agcount and agsize differences, also sunit and swidth, not sure if those are relevant (also the sector size when formatting without the MD driver, but that's expected).

      Disk formatted using the MD driver in v6.9, output with v6.9 or v6.10:

          meta-data=/dev/md1       isize=512    agcount=11, agsize=268435455 blks
                   =               sectsz=512   attr=2, projid32bit=1
                   =               crc=1        finobt=1, sparse=1, rmapbt=0
                   =               reflink=1
          data     =               bsize=4096   blocks=2929721331, imaxpct=5
                   =               sunit=0      swidth=0 blks
          naming   =version 2      bsize=4096   ascii-ci=0, ftype=1
          log      =internal log   bsize=4096   blocks=521728, version=2
                   =               sectsz=512   sunit=0 blks, lazy-count=1
          realtime =none           extsz=4096   blocks=0, rtextents=0

      Disk formatted using the MD driver in v6.10:

          meta-data=/dev/md1       isize=512    agcount=32, agsize=91553791 blks
                   =               sectsz=512   attr=2, projid32bit=1
                   =               crc=1        finobt=1, sparse=1, rmapbt=0
                   =               reflink=1    bigtime=0 inobtcount=0
          data     =               bsize=4096   blocks=2929721312, imaxpct=5
                   =               sunit=1      swidth=32 blks
          naming   =version 2      bsize=4096   ascii-ci=0, ftype=1
          log      =internal log   bsize=4096   blocks=521728, version=2
                   =               sectsz=512   sunit=1 blks, lazy-count=1
          realtime =none           extsz=4096   blocks=0, rtextents=0
          xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device

      Disk formatted without the MD driver in v6.10:

          meta-data=/dev/sdb1      isize=512    agcount=11, agsize=268435455 blks
                   =               sectsz=4096  attr=2, projid32bit=1
                   =               crc=1        finobt=1, sparse=1, rmapbt=0
                   =               reflink=1    bigtime=0 inobtcount=0
          data     =               bsize=4096   blocks=2929721331, imaxpct=5
                   =               sunit=0      swidth=0 blks
          naming   =version 2      bsize=4096   ascii-ci=0, ftype=1
          log      =internal log   bsize=4096   blocks=521728, version=2
                   =               sectsz=4096  sunit=1 blks, lazy-count=1
          realtime =none           extsz=4096   blocks=0, rtextents=0

      Only disks formatted using the MD driver in v6.10 show the "xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device" error (and the experimental warning) after running xfs_growfs, so this looks to me like an Unraid issue; the XFS maintainers are unlikely to investigate it if the problem only manifests after formatting the disk with the out-of-tree MD driver. A minimal way to reproduce the comparison is sketched below.
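      To reproduce the comparison above, a rough sketch; /dev/md1, /dev/sdb and /mnt/disk1 are placeholders for the array device, the underlying disk and its mount point:

          # Sector sizes and I/O min/opt hints, as reported for the md device vs the raw disk
          fdisk -l /dev/md1 | grep -E 'Sector size|I/O size'
          fdisk -l /dev/sdb | grep -E 'Sector size|I/O size'

          # XFS geometry of the mounted filesystem, then attempt the grow that triggers the error
          xfs_info /mnt/disk1
          xfs_growfs /mnt/disk1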
  11. There are constant ATA errors on disk9; check/replace/swap the cables (power and SATA) and try again.
  12. This was due to a UD plugin update; click on the mount point and change it back to the older name.
  13. Not enough info given. Motherboard model? CPU? SATA or NVMe M.2 device?
  14. I've never used VMware, but I suspect those disks have more than one filesystem signature. Windows only looks for supported filesystems, while Linux is likely detecting the other one as primary; you can check by looking at the output of blkid (see the sketch below). If there are multiple filesystem signatures you should be able to delete the VMFS one while leaving NTFS, but it's not something I've ever done or can help with.
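      A minimal sketch for listing the signatures; replace /dev/sdX1 with the actual partition, and note that without -a or -o, wipefs only lists signatures and erases nothing:

          # Filesystem type Linux picks as primary
          blkid /dev/sdX1

          # Every filesystem signature on the partition, with its offset
          wipefs /dev/sdX1

          # Removing a single signature would be wipefs -o <offset>,
          # but back up first; that's the part I can't vouch for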
  15. Please use the existing plugin support thread:
  16. Did you run xfs_repair without -n? With -n nothing will be done. Rebuilding a disk won't help with filesystem corruption; if xfs_repair can't fix it there's not much else you can do, except restoring the data from backups if available. A minimal example is below.
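      A minimal sketch, assuming the array is started in maintenance mode and the affected disk is md4 (adjust the device name):

          # Dry run: -n only reports problems, nothing is written
          xfs_repair -n /dev/md4

          # Actual repair: run again without -n
          xfs_repair /dev/md4

      If it refuses to run because of a dirty log, the -L option zeroes the log but can lose the most recent transactions, so treat it as a last resort.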
  17. Disk4 is disabled and there are errors on parity. Both are ST6000NM0014, so I'm not sure which disk you're asking about, but both look healthy, and the log shows what appear to be power/connection issues with parity, possibly the same problem as disk4, which was already disabled at boot so we can't see what happened.
  18. There shouldn't be any issues after updating the BIOS, though of course it might also not help with the controller issue.
  19. You should; btrfs is detecting data corruption (see below).
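      If you want to see where btrfs is counting the errors (the /mnt/cache mount point is an assumption, adjust to your pool):

          # Non-zero write/read/corruption counters show which device is affected
          btrfs device stats /mnt/cache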
  20. At the time of the diags there was something writing to disk1.
  21. Run a correcting check, then a non-correcting one, to make sure the result is 0.
  22. Use different NICs or wait for a newer release with a newer driver, or if you're on v6.9.2 try v6.10-rc3.
  23. Correct, but not the Mellanox, likely the onboard NICs.