Everything posted by JorgeB

  1. There's nothing logged before the crash, which suggests a hardware issue. One thing you can try is to boot the server in safe mode with all Docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, if it doesn't, start turning the other services back on one by one.
  2. There are various NMI-related stalls/call traces; this can be a hardware issue, or the current kernel doesn't like your hardware. If it was working with a previous Unraid release you can downgrade to confirm.
  3. Just need to check that, assuming the pool is redundant; can't see, since the diags were taken with the pool not mounted.
  4. Correct, unless you're using RAID controllers or non-standard enclosures, in which case the disks may not mount.
  5. That's the regular syslog, which starts over after every reboot.
  6. The syslog server doesn't save to the diagnostics; it's a separate log file saved in the path you've chosen.
  7. It's a known btrfs bug with an odd number of devices in raid1; space is still fully usable and the error margin diminishes as the pool gets fuller. As for the mover issues, start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=972660
  8. Array devices still can't be trimmed, though possibly they will be in the future.
  9. You likely have a hardware problem; you can try enabling the syslog server to see if it catches anything after the lockup.
  10. XFS is the best choice for most users with a single-device cache, unless you need the btrfs features.
  11. It's not necessary for btrfs pools; it still is if you use XFS.
  12. This is normal if you were using a very old firmware; you just need to do a new config, assign all disks as they were, and check "parity is already valid" before starting the array.
  13. You asked for an expander; we assume you mean a SAS expander, which can be used with the existing LSI to connect more disks, but you can also add another HBA. See here for recommendations for both:
  14. It means that disk (or a previously used disk with the same identifier) had a non-zero number of uncorrectable sectors and now has 0; see the SMART sketch after this list for a way to check the current value yourself.
  15. Some configs have issues with free space as reported by df or statfs, as you can see above (Unraid uses statfs). This happens, for example, with an odd number of devices in raid1, and apparently also with raid10 when not using a multiple of 4 devices. You still have 3TB of usable space, as reported by btrfs fi usage, and as the pool gets filled the reported free space should also get closer to correct, i.e., when you have 2TB used it should show close to 1TB free; see the free-space sketch after this list.
  16. Check the filesystem on disk3; it's also a good idea to convert all reiserfs disks to XFS, reiser is not recommended for v6.
  17. It's not complicated, but you have to edit all the paths manually.
  18. Only btrfs pools can be used as pools; as mentioned, you need to free up one of your disks, format it inside Unraid, then restore the data, then repeat for the other one.
  19. Please post the diagnostics: Tools -> Diagnostics
  20. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  21. First fix the filesystem; there's no point in rebuilding if it can't be fixed or there's evident data loss, especially when rebuilding on top of the old disk.
  22. You can either try to repair the filesystem on the emulated disk1 or see if the actual disk mounts correctly with UD; it should, since it looks healthy.
  23. Those are usually caused by bad RAM or by devices that don't respect write barriers or FUA; given the very high transaction ID difference I would suspect the former. Start by running memtest, and also make sure cache backups are up to date.
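
For item 14 above, here's a minimal sketch of how you could check the relevant counter yourself, assuming smartmontools is installed, an ATA disk at the placeholder path /dev/sdX, and that the counter in question is SMART attribute 198 (Offline_Uncorrectable); all three are assumptions, not something stated in the original reply.

```python
import subprocess

def offline_uncorrectable(dev="/dev/sdX"):
    """Raw value of SMART attribute 198 (Offline_Uncorrectable), or None."""
    # /dev/sdX is a placeholder; point this at the disk in question.
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "198":   # attribute ID column
            try:
                return int(fields[-1])      # raw value is the last column
            except ValueError:
                return None                 # some drives append extra detail
    return None

if __name__ == "__main__":
    print("Offline_Uncorrectable raw value:", offline_uncorrectable())
    # A counter that was non-zero and later reads 0 again is what the
    # notification described in item 14 is about.
```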
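
For item 15 above, here's a minimal sketch showing where the pessimistic free-space number comes from, assuming a pool mounted at the hypothetical path /mnt/cache. Both df and the Unraid GUI rely on statfs, while btrfs fi usage does the accurate per-profile accounting.

```python
import os

def statfs_free_bytes(path="/mnt/cache"):
    """Free space as statfs reports it (what df and the Unraid GUI show)."""
    st = os.statvfs(path)
    # f_bavail * f_frsize is statfs's "available" figure; on a btrfs raid1
    # pool with an odd number of devices this estimate is conservative and
    # under-reports the space you can actually write.
    return st.f_bavail * st.f_frsize

if __name__ == "__main__":
    free = statfs_free_bytes()
    print(f"statfs reports {free / 1e12:.2f} TB free")
    # Compare with the accurate accounting from:
    #   btrfs filesystem usage /mnt/cache
    # The gap between the two figures shrinks as the pool fills up.
```

The command-line comparison is the practical takeaway: if btrfs fi usage shows the expected usable space, the smaller number in the GUI is just the statfs estimate described above.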