Everything posted by JorgeB

  1. There are various possibilities: you can use snapshots if it's btrfs, or just something like rsync.
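     A minimal sketch of the rsync route (the paths here are hypothetical; on a real server SRC/DST would be share paths such as /mnt/user/Media and a backup location):

     ```shell
     # Demo rsync backup; SRC/DST default to temp dirs so the sketch is runnable,
     # on a real server they would be share paths like /mnt/user/Media (hypothetical).
     SRC="${SRC:-$(mktemp -d)}"
     DST="${DST:-$(mktemp -d)}"
     echo "some data" > "$SRC/file.txt"
     # -a = archive mode (recursive, preserves attributes); --delete mirrors removals
     rsync -a --delete "$SRC/" "$DST/"
     ```

     Run on a schedule (cron, or the User Scripts plugin) this gives a simple mirror; btrfs snapshots additionally give point-in-time versions on the same pool.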
  2. Yes, you can either use the old flash drive or transfer the key to the new one.
  3. That setting is the default filesystem format for any new disks; you don't need to change the individual setting for each disk, since the per-disk default is auto.
  4. What I meant is the possibility that there was no 3.3v line before and there is one now; either way, it's very easy to test.
  5. I would really avoid the SASLP. Do you really need 8 extra ports? If 4 or 5 are enough there are other options, and you could also use a 4-port LSI with an expander.
  6. Not sure what you mean by this; the cache is completely full, so you need to move or delete some data. You can use the mover if the shares using the cache are correctly configured with cache=yes, so their data gets moved to the array.
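     If the mover can't help (for example, a share set to cache-only), files can be moved by hand. A toy sketch with temp directories standing in for the cache pool and an array disk (on Unraid the real paths would be /mnt/cache/&lt;share&gt; and /mnt/diskN/&lt;share&gt;):

     ```shell
     # Temp dirs stand in for /mnt/cache/<share> (SRC) and /mnt/disk1/<share> (DST)
     SRC=$(mktemp -d)
     DST=$(mktemp -d)
     echo demo > "$SRC/movie.mkv"
     # --remove-source-files deletes each file from the cache side once it is copied
     rsync -a --remove-source-files "$SRC/" "$DST/"
     ```

     Moving to /mnt/diskN directly (rather than /mnt/user) avoids copying a file onto itself through the user-share layer.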
  7. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  8. It certainly has nothing to do with the key.
  9. It's not the normal default and it's unclear what might be causing it, but AFAIK you're only the second user this has happened to.
  10. If the server is still on after shutdown -r you'll need to force a reboot.
  11. Should work fine, but note that unless the rebuild was done in maintenance mode, parity will always be a little out of sync due to mounting the filesystems. That's usually not a big deal; if for some reason your backup doesn't work, you can also try the invalid slot command.
  12. Both diags are from just after rebooting, so we can't see what happened; next time grab them before rebooting. In the meantime you should upgrade the LSI firmware, latest is 20.00.07.00.
  13. Might be the 3.3v pin issue, google "wd 3.3v sata pin"
  14. On a second look, the problem might be the parity disks: they are SMR, and while SMR disks generally work great with Unraid, I remember having very bad performance with those specific models. Most SMR disks behave normally during sequential writes; not those, they slow down a lot as soon as they fill the small PMR cache and need to write to the SMR zone, even for sequential writes.
  15. Your drives are fine; you likely have some conflicting configuration file. I would recommend starting with a clean v6 install and restoring only super.dat (assuming you were running a non-beta v5 release) and your key from the v5 flash. If it boots correctly like that (all disks should be assigned, since that info is in super.dat), you can then restore the other config files one by one, or just reconfigure the server.
  16. There's nothing in the posted syslog; you need to let it run and download it after a crash. If it was downloaded after a crash and there's still nothing there that can point us in the right direction, it could be a hardware issue.
  17. I only tested with one, hence why I asked for other users to test. Also note that, as mentioned, my information is unofficial; it hasn't been confirmed by LT, and it might be wrong, or possibly not work for everyone or in every situation, and if it is wrong you can only blame me.
  18. Maybe it doesn't work with all dockers; as mentioned, it's repeatable and works consistently for me, but I only had the single Plex docker installed.
  19. Run it again without -n, or nothing will be done; if it asks for it, use -L.
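     Those flags are xfs_repair's; assuming an XFS disk and the array started in maintenance mode, the sequence would look something like this (mdX is a placeholder for the disk's md device, do not run against a mounted filesystem):

     ```shell
     xfs_repair -n /dev/mdX   # -n: check only, report problems, change nothing
     xfs_repair    /dev/mdX   # without -n: actually repair the filesystem
     xfs_repair -L /dev/mdX   # -L: zero the log, only if the repair asks for it
     ```

     Repairing the mdX device (rather than the raw sdX disk) keeps parity in sync with the changes.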
  20. It should, so that suggests a problem with the flash drive; try a different one or recreate that one.
  21. SMR disks usually work fine with Unraid since it's not RAID.