JorgeB (Moderator) · Posts: 67,662 · Days Won: 707

Everything posted by JorgeB

  1. Difficult to say, but possible; you should avoid running out of space. It would be cache=yes. You can try, but I'm not sure the mover works with a read-only filesystem, since it won't be able to delete the moved files. Also don't forget to disable the VM and Docker services first. If the mover doesn't work you can use, for example, rsync or Midnight Commander (mc on the console).
  2. Cache filesystem is corrupt; best bet is to back up and re-format. You can also try a btrfs repair (the fsck equivalent for btrfs), but it's risky, so only do that after anything important is backed up.
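For reference, the check/repair tool on btrfs is `btrfs check`; a cautious sequence might look like the sketch below, where the device name is a placeholder and the pool must be unmounted first:

```shell
# Read-only check of the unmounted pool device (sdX1 is a placeholder):
btrfs check --readonly /dev/sdX1
# --repair is the risky part and should be a last resort, only after backups:
# btrfs check --repair /dev/sdX1
```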
  3. After this is done, don't forget to clean out the extra folder, or it will keep installing those versions even after Unraid ships newer ones.
  4. Yes, only 0 errors are acceptable, and even that is not a guarantee there aren't issues, but any errors are a guarantee there are. Looks like you need an updated glibc; download this one, also to the extra folder, then reboot: http://ftp.riken.jp/Linux/slackware/slackware64-current/slackware64/a/aaa_glibc-solibs-2.33-x86_64-2.txz
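Getting the package onto the flash drive could look like this (the extra folder lives on the flash at /boot/extra; the reboot is needed because Unraid installs packages found there at boot):

```shell
# Download the package straight into the flash drive's extra folder:
cd /boot/extra
wget http://ftp.riken.jp/Linux/slackware/slackware64-current/slackware64/a/aaa_glibc-solibs-2.33-x86_64-2.txz
# Then reboot so the package is installed at boot.
```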
  5. Sorry, never used one of those NICs. It shouldn't be difficult to find out by googling it; if there's no easy way with Unraid, there should be with DOS/Windows.
  6. A full disk write might fix it, but if it does, it's difficult to say for how long.
  7. Diags are after rebooting, so we can't see what happened, but the SMART report passing is basically meaningless: there's a pending sector and no SMART tests logged. You should at least run an extended SMART test; if it passes you can rebuild on top.
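An extended test can also be started from the console with smartctl; sdX is a placeholder for the actual device (check Unraid's Main page for the right one):

```shell
# Start the extended (long) self-test; it runs in the drive's background:
smartctl -t long /dev/sdX
# Check progress and the final result later in the self-test log:
smartctl -l selftest /dev/sdX
```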
  8. Note that with just one data drive, having parity (once it's synced) won't degrade performance, since it works like RAID1. With more than one data drive you can enable turbo write; without controller bottlenecks it will write as fast as the slowest array disk can read/write, at the expense of all disks spinning up for writes.
  9. Looks like the not-so-uncommon Ryzen onboard SATA controller problem. If the emulated disk is mounting and contents look correct, you can rebuild on top. A BIOS update might help keep this from recurring; if it doesn't, best bet is to use an add-on controller (or a different board).
  10. If it was formatted with UD and you assign it as a new single pool device there won't be a problem, but there can't be an "all data on this device will be deleted at array start" warning in red.
  11. Yes, just running memtest won't change or fix anything, it might just confirm if there is a RAM problem.
  12. You just run it; any errors will appear in red. But I now think it's more likely an xfs bug.
  13. Possibly it's an xfs bug; you can try installing a newer xfsprogs package to see if it fixes it, but it's still a good idea to run a few passes of memtest first.
  14. Forgot to mention: dm-10=disk11, dm-11=disk12
  15. Jun 17 09:15:56 Atlantis kernel: XFS (dm-11): Unmount and run xfs_repair
      Jun 17 09:16:04 Atlantis kernel: XFS (dm-10): Unmount and run xfs_repair
      Run xfs_repair without -n or nothing will be done. Split level won't interfere with showing the share's data, just with allocation for new data copied to that share.
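With the array started in maintenance mode, the repairs for those two disks would look like the sketch below; the md device numbers assume Unraid's usual diskN = mdN convention, so confirm them against your own assignments first:

```shell
# Repair the filesystems on the array devices for disk11 and disk12
# (no -n, so changes are actually written):
xfs_repair -v /dev/md11
xfs_repair -v /dev/md12
```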
  16. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=972660
  17. When the rebuild is done, reboot and run xfs_repair again on disk1. P.S.: This does nothing; -n is the "no modify" flag, used to do a read-only check, so nothing is actually repaired.
  18. Jun 16 19:11:25 Ruskin kernel: netxen_nic 0000:08:00.0: Flash fw[4.0.406] is < min fw supported[4.0.505]. Please update firmware on flash
      You need to update the firmware for the driver to support the NICs.
  19. Parity is assigned but invalid, you need to let the parity sync finish or completely unassign it.
  20. That looks like a xfs_repair problem, update Unraid to v6.9.2 and try again.
  21. Yes, first unassign parity and start array, then stop and add new disks.