
JorgeB

Moderators
Everything posted by JorgeB

  1. -Stop the array
     -Unassign the cache device
     -Start the array
     -In the console type: btrfs balance start -f -dconvert=single -mconvert=single /mnt/disk2
     -Once that finishes type: btrfs dev del /dev/nvme0n1p1 /mnt/disk2
     -Once that's done, stop the array
     -Re-assign the cache device
     -Start the array; the cache will be unmountable, format it to start using it.
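The console steps above can be sketched as a short script. The device name (/dev/nvme0n1p1) and mount point (/mnt/disk2) are taken from the post; adjust them for your system, and only run the real commands with the array started and the cache device unassigned. A dry-run guard is included so the commands are printed rather than executed by default:

```shell
# Sketch of the console steps above; dry run by default so nothing
# destructive happens. Set DRY_RUN=0 on a real system, and adjust the
# device (/dev/nvme0n1p1) and mount point (/mnt/disk2) to match yours.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# Convert the pool's data (-dconvert) and metadata (-mconvert) profiles
# to single so no data remains on the device being removed (-f forces
# the profile conversion):
run btrfs balance start -f -dconvert=single -mconvert=single /mnt/disk2

# After the balance finishes, remove the former cache device from the pool:
run btrfs dev del /dev/nvme0n1p1 /mnt/disk2
```

The array stop/start and device re-assignment steps still happen in the GUI, in between the two commands.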
  2. In my experience those Kingston A400 SSDs are some of the worst models out there performance-wise; if the pool is just the Samsung device, does it perform better?
  3. Yes, as suspected both devices are in the same pool; we can delete one device from the pool. In the end, do you want that data on disk2 or on the cache?
  4. You need to use the CLI, terminal window or the console.
  5. You can get that with:

     xfs_repair -V

     I don't know how to get that info; if it's not included in the snippet above you can ask him how to get it. You can also mention that the filesystem was expanded, possibly more than once, and if you remember the original size (when the filesystem was first created) mention that as well. In any case I don't believe this is an issue; even if that experimental feature is enabled, Unraid never shrinks a filesystem, it only grows it.
  6. That is done by Unraid at every mount; if you look at disk2, for example, the same is done but there's no warning.
  7. Hope you don't mind English. The above suggests both devices are in the same pool; you can confirm by posting the output of:

     btrfs fi usage -T /mnt/disk2
  8. You can use the Unbalance plugin to move the data to other disks.
  9. They shouldn't. You have to manually move the data then shrink the array.
  10. It's not logged as a disk issue, and the disk looks healthy; check/replace the cables or swap slots, and if the emulated disk is mounting, rebuild on top.
  11. Looks like the typical onboard controller issue with some Ryzen boards; look for a BIOS update or use an add-on controller. Also note that using USB devices for the array or cache is not recommended, and there are also USB disconnects in the log.
  12. Errors are logged as a disk issue, but since the SMART test passed it's OK for now. Keep monitoring, especially this attribute:

      ID# ATTRIBUTE_NAME        FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
        1 Raw_Read_Error_Rate   POSR-K  200   200   051    -    2

      If it keeps climbing you'll likely get more read errors; in that case replace the disk.
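If you want to watch that attribute from the console, you can filter its raw value out of the smartctl output. The sketch below parses the attribute line quoted above; on a live system the input would come from smartctl -A /dev/sdX, with /dev/sdX standing in for your disk:

```shell
# Parse the raw value of SMART attribute 1 (Raw_Read_Error_Rate).
# The line below is the one quoted in the post; on a real system pipe
# in the live output instead: smartctl -A /dev/sdX
smart_line='  1 Raw_Read_Error_Rate     POSR-K  200   200   051    -    2'
raw_value=$(echo "$smart_line" | awk '$2 == "Raw_Read_Error_Rate" {print $NF}')
echo "$raw_value"   # prints 2; replace the disk if this keeps climbing
```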
  13. Cache filesystem is corrupt and crashing on mount; if there's nothing important there, clear the devices with blkdiscard, then re-format the pool:

      blkdiscard /dev/nvme0n1
      blkdiscard /dev/sdX
  14. https://xfs.org/index.php/XFS_email_list_and_archives

      Posting the kernel version and the relevant syslog snippet should be enough to start; you can also mention that it appears on disk1 but not on the similar disk2. They will ask for more info if needed.

      Linux version 5.14.15-Unraid
      Mar 6 19:59:21 tdm emhttpd: shcmd (81): mkdir -p /mnt/disk1
      Mar 6 19:59:21 tdm emhttpd: shcmd (82): mount -t xfs -o noatime /dev/md1 /mnt/disk1
      Mar 6 19:59:21 tdm kernel: SGI XFS with ACLs, security attributes, no debug enabled
      Mar 6 19:59:21 tdm kernel: XFS (md1): Mounting V5 Filesystem
      Mar 6 19:59:21 tdm kernel: XFS (md1): Ending clean mount
      Mar 6 19:59:21 tdm emhttpd: shcmd (83): xfs_growfs /mnt/disk1
      Mar 6 19:59:21 tdm kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
      Mar 6 19:59:21 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
      Mar 6 19:59:21 tdm root: meta-data=/dev/md1 isize=512 agcount=32, agsize=137330687 blks
      Mar 6 19:59:21 tdm root: = sectsz=512 attr=2, projid32bit=1
      Mar 6 19:59:21 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
      Mar 6 19:59:21 tdm root: = reflink=1 bigtime=0 inobtcount=0
      Mar 6 19:59:21 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
      Mar 6 19:59:21 tdm root: = sunit=1 swidth=32 blks
      Mar 6 19:59:21 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      Mar 6 19:59:21 tdm root: log =internal log bsize=4096 blocks=521728, version=2
      Mar 6 19:59:21 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
      Mar 6 19:59:21 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
      Mar 6 19:59:21 tdm emhttpd: shcmd (83): exit status: 1
      Mar 6 19:59:21 tdm emhttpd: shcmd (84): mkdir -p /mnt/disk2
      Mar 6 19:59:21 tdm kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
      Mar 6 19:59:21 tdm emhttpd: shcmd (85): mount -t xfs -o noatime /dev/md2 /mnt/disk2
      Mar 6 19:59:21 tdm kernel: XFS (md2): Mounting V5 Filesystem
      Mar 6 19:59:22 tdm kernel: XFS (md2): Ending clean mount
      Mar 6 19:59:22 tdm kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
      Mar 6 19:59:22 tdm emhttpd: shcmd (86): xfs_growfs /mnt/disk2
      Mar 6 19:59:22 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
      Mar 6 19:59:22 tdm root: meta-data=/dev/md2 isize=512 agcount=32, agsize=137330687 blks
      Mar 6 19:59:22 tdm root: = sectsz=512 attr=2, projid32bit=1
      Mar 6 19:59:22 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
      Mar 6 19:59:22 tdm root: = reflink=1 bigtime=0 inobtcount=0
      Mar 6 19:59:22 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
      Mar 6 19:59:22 tdm root: = sunit=1 swidth=32 blks
      Mar 6 19:59:22 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      Mar 6 19:59:22 tdm root: log =internal log bsize=4096 blocks=521728, version=2
      Mar 6 19:59:22 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
      Mar 6 19:59:22 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
      Mar 6 19:59:22 tdm emhttpd: shcmd (86): exit status: 1
  15. Multiple NMI events logged, these are usually a hardware issue.
  16. If you mean extended attributes, it doesn't; you need -X for that.
  17. Oops, sorry, forgot the old parity was failing; like trurl mentioned, you need another drive.
  18. This warning started to appear for some users lately; it seems to happen mostly with large filesystems. Curiously, disk2 is the same size, looks to be using all the same options, and is not showing the warning. You might want to ask on the XFS mailing list why the warning is there and whether it's a concern.
  19. Nothing obvious in the diags, enable the syslog server and post that after a crash.
  20. Nothing logged, which suggests an external issue. Try increasing verbosity with rsync; use rsync -avh instead of -rh (the -a option already includes -r and other archive-appropriate options).
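A quick way to see the difference is to try the suggested flags on throwaway directories (the paths below are created just for this demonstration). With -v, rsync prints each file it transfers, which is the extra detail that helps when a copy fails partway through:

```shell
# -a (archive) is equivalent to -rlptgoD: recursive like -r, but also
# preserving symlinks, permissions, times, group, owner and devices.
# -v prints each transferred file and -h uses human-readable sizes.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file.txt"
rsync -avh "$src/" "$dst/"
cat "$dst/file.txt"   # prints hello
```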
  21. Please post the diagnostics after the issue occurs.
  22. If the current parity is 3TB then no, you can't use a 10TB drive in the array. Use the old parity drive to clone the bad disk, then use the 10TB as the new parity when you do the new config.
  23. Latest Unraid releases require root to have a password for SSH to work; set one and then log in with that password.
  24. Mar 2 18:34:33 Zigplex2 kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
      Mar 2 18:34:33 Zigplex2 kernel: nvme 0000:08:00.0: enabling device (0000 -> 0002)
      Mar 2 18:34:33 Zigplex2 kernel: nvme nvme0: Removing after probe failure status: -19

      The NVMe device is dropping offline. Look for a BIOS update, try a different PCIe/M.2 slot if available, or try the workaround below; if nothing helps it's best to try a different model device (or board). Some NVMe devices have issues with power states on Linux. To try the workaround, on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append initrd=/bzroot":

      nvme_core.default_ps_max_latency_us=0

      e.g.:

      append initrd=/bzroot nvme_core.default_ps_max_latency_us=0

      Reboot and see if it makes a difference.