Everything posted by JorgeB

  1. That disk model, besides being SMR as mentioned, has been found to have inconsistent performance; sometimes one disk can be slower than others of the same model, so possibly just a disk issue.
  2. Try rebooting, then immediately run blkdiscard -f /dev/sdX, then try formatting again.
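     A minimal sketch of the sequence, assuming the SSD shows up as /dev/sdX (confirm the device letter first, since blkdiscard wipes the whole device):
     lsblk                    # confirm which /dev/sdX is the SSD in question
     blkdiscard -f /dev/sdX   # discard all blocks on the device
     Then format the disk from the GUI again.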
  3. The currently set shutdown timeout is not enough, change it from 90 to 150 secs (Settings -> Disk Settings); you can also stop the array and time how long it takes, then add a few seconds to that.
  4. First thing to do is to stop anything else accessing the array and see what difference it makes.
  5. Depends on the damage, post new diags to see if a valid filesystem is being detected.
  6. Not really; if you can't have a backup of everything, try to at least have a backup of the most important stuff.
  7. Latest one should be fine, it should be from the date/time you last rebooted.
  8. Depends on your preference, usually after everything is on cache I recommend changing to cache=only.
  9. This means it's detecting an unclean shutdown, posting the diagnostics auto saved in the flash drive (/logs folder) might give some clues.
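     If in doubt which files to post: the flash drive is mounted at /boot, so the auto-saved diagnostics should be under /boot/logs, e.g.:
     ls -lt /boot/logs   # newest files first; pick the diagnostics zip matching the unclean shutdown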
  10. You mean running xfs_repair on /dev/sdx1, correct? Unfortunately that was the most likely result, but still worth a try; the strange part is that clearly there's an XFS filesystem on that disk:
     Nov 30 20:35:46 Tower kernel: XFS (md3): Mounting V5 Filesystem
     Nov 30 20:35:47 Tower kernel: XFS (md3): Metadata CRC error detected at xfs_inobt_read_verify+0x12/0x5a [xfs], xfs_inobt block 0x18
     Nov 30 20:35:47 Tower kernel: XFS (md3): Unmount and run xfs_repair
     Nov 30 20:35:47 Tower kernel: XFS (md3): First 128 bytes of corrupted metadata buffer:
     Nov 30 20:35:47 Tower kernel: 00000000: 11 00 04 7f 00 00 00 fc 57 de 24 68 e1 01 a6 03 ........W.$h....
     Nov 30 20:35:47 Tower kernel: 00000010: 80 45 1a e6 46 a0 a4 53 00 00 00 01 00 0a 40 9d .E..F..S......@.
     Nov 30 20:35:47 Tower kernel: 00000020: f0 ac fe df 81 ba bd 46 8d 48 33 94 94 df 43 5d .......F.H3...C]
     Nov 30 20:35:47 Tower kernel: 00000030: 00 00 00 0b d3 5d 74 2d ff ff ff 7f ff ff bf ff .....]t-........
     Nov 30 20:35:47 Tower kernel: 00000040: ff ff ff ff ff ff ff ff ff ff fa 7f ff ff bf ff ................
     Nov 30 20:35:47 Tower kernel: 00000050: ff ff ff ff ff ff ff ff ff ff fa 3f ff ff bf ff ...........?....
     Nov 30 20:35:47 Tower kernel: 00000060: ff ff ff ff ff ff ff ff ff ff f9 ff ff ff bf ff ................
     Nov 30 20:35:47 Tower kernel: 00000070: ff ff ff ff ff ff ff ff ff ff f0 bf ff ff bf ff ................
     Nov 30 20:35:47 Tower kernel: XFS (md3): metadata I/O error in "xfs_btree_read_buf_block.constprop.0+0x7a/0xc7 [xfs]" at daddr 0x18 len 8 error 74
     Nov 30 20:35:47 Tower kernel: XFS (md3): Failed to read root inode 0x80, error 117
     Nov 30 20:35:47 Tower root: mount: /mnt/disk3: mount(2) system call failed: Structure needs cleaning.
     But it's not the first time this happens; I guess it might depend on what's damaged or the kind of damage. UFS Explorer is probably the best bet to try and recover some data, the other option would be contacting the XFS mailing list to see if they can revive the fs.
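     For reference, a minimal sketch of how a repair attempt is normally run on an array disk (assuming an Unraid release where disk 3 is exposed as /dev/md3, as in the log above, and the array started in maintenance mode so the filesystem is not mounted):
     xfs_repair -n /dev/md3   # dry run, only reports the problems it finds
     xfs_repair /dev/md3      # actual repair; -L is only needed if it refuses to run because of a dirty log
     Since the repair already failed here, UFS Explorer or the XFS mailing list remain the realistic options.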
  11. Looks like it, @mort78 the important part is to disable the VM service and copy the file to the currently defined path.
  12. They should work on the next update, but the next update can take some time to be released, so if replacing them with Samsung devices is an option I would do it; Samsung NVMe devices are known not to have that issue, at least the current models.
  13. That should not make any difference, as long as the correct boot device is selected in the BIOS.
  14. As suspected, it needs a quirk because the device doesn't have a unique NSID. Note that this is a device problem, it doesn't conform to spec, but it's not the only one; I'll ask LT to add a quirk, the same was already done for another 5 or 6 devices, but they can only do that for the next Unraid release.
  15. IIRC there have been similar issues with this controller before, for example after a firmware update. If parity is valid you might be able to rebuild one disk at a time, since Unraid will recreate the partition, but if there's more damage than just the partition it might not work; you can test by unassigning one of the data disks and starting the array, then see if the emulated disk mounts.
  16. Nope, only way I know would be to re-format the disk.
  17. One of them is currently bound to vfio-pci, remove the binding and post new diags.
  18. It's missing the NTFS type signature, that's why UD doesn't recognize the filesystem and there's no mount option; you can see if it mounts manually by specifying the fs:
     mkdir /x
     mount -t ntfs /dev/sdf1 /x
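     If you want to confirm the missing signature first, a quick check (a sketch, assuming the partition really is /dev/sdf1):
     blkid /dev/sdf1              # prints nothing if no filesystem signature is detected
     hexdump -C -n 16 /dev/sdf1   # a healthy NTFS partition shows "NTFS    " starting at offset 0x03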