JorgeB

Moderators
  • Posts

    67,871
  • Joined

  • Last visited

  • Days Won

    708

Everything posted by JorgeB

  1. Great, I assume you know that completely filling up a filesystem can be an issue, though it's usually not a problem with btrfs if it's a WORM kind of scenario.
  2. Also try to boot in safe mode after updating to see if it makes a difference.
  3. You can get the diags from the console, type 'diagnostics'.
  4. Parity swap is for when there's a disabled disk and the replacement is larger than the current parity; you just mentioned doing a parity upgrade.
  5. Just need these steps, then start the array, there's no downtime.
  6. Check the mappings for Sonarr/Radarr; if they are using /mnt/cache/share instead of /mnt/user/share, data will go to cache.
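The mapping check above can be sketched as a small shell helper (hypothetical, for illustration only, not an Unraid tool): paths under /mnt/cache/ bypass the user share layer and write straight to the pool, while /mnt/user/ paths go through the share's cache settings.

```shell
# Hypothetical helper: classify a container path mapping.
# Assumption: /mnt/cache/* writes directly to the cache pool,
# /mnt/user/* goes through the user share layer.
check_mapping() {
  case "$1" in
    /mnt/cache/*) echo "direct-to-cache" ;;
    /mnt/user/*)  echo "user-share" ;;
    *)            echo "other" ;;
  esac
}

check_mapping /mnt/cache/share   # direct-to-cache
check_mapping /mnt/user/share    # user-share
```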
  7. You cannot add devices to an xfs pool, they are single-device only; you can add devices to btrfs pools, and you don't do a new config: just stop the array, add the new device, and start the array.
  8. That's normal if you are transferring to the array with the default write mode, try turbo write.
  9. https://forums.unraid.net/topic/129447-611x-array-must-be-started-to-message-when-the-array-is-started/?do=findComment&comment=1180017
  10. That's a strange one. Disable IOMMU just to test if it makes any difference; you can still create VMs, just not pass through any PCIe devices.
  11. It's the issue below, only rebooting will fix it:
  12. Settings -> Scheduler -> Parity Check
  13. Did you change the filesystem by clicking on cache on the main GUI page or by editing a config file? Please reboot and post new diags after array start.
  14. Nothing special other than in general avoid SMR drives, Seagate Ironwolf/Ironwolf Pro, WD Red (for 8TB or larger) and Toshiba N300 are good options.
  15. Parity should be OK for now, but those are low-end SMR drives, and based on the number of issues I've seen with them they are probably not the most reliable Seagate drives.
  16. mount -t btrfs -o noatime,space_cache=v2 /dev/sdb1 /mnt/media
      It's still set to btrfs, click on the pool and change the fs to xfs (or auto)
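A quick way to see which filesystem a mount line like the one above refers to is to pull the `-t` argument out of it; this is just a string-parsing sketch (the variable names and the sed pattern are illustrative assumptions, not an Unraid command):

```shell
# Sketch: extract the filesystem type from a mount command line.
# Assumes the -t <fstype> flag is present in the line.
line="mount -t btrfs -o noatime,space_cache=v2 /dev/sdb1 /mnt/media"
fstype=$(echo "$line" | sed -n 's/.*-t \([a-z0-9]*\).*/\1/p')
echo "$fstype"   # btrfs
```

If it still prints btrfs after you intended xfs, the pool's filesystem setting was never actually changed in the GUI.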
  17. Yes, this is the same problem; you can try running without the plugins listed above and/or switching to ipvlan. We are still investigating, and it's still not clear what causes this issue.
  18. This is likely a different issue, unless you get the same call trace as in the OP, and there's not one in the diags posted.