
JorgeB

Moderators
  • Posts

    67,783
  • Joined

  • Last visited

  • Days Won

    708

Everything posted by JorgeB

  1. That's not normal. You can get the diags by typing "diagnostics" in the console, then attach them here (see the example after this list).
  2. Forgot to mention, keep in mind that a 500GB + 1TB pool in RAID1 will have only 500GB usable, since RAID1 keeps two copies of everything and is limited by the smaller device.
  3. Cache filesystem crashed during the balance because it ran out of space. The problem is that it's now read-only, so you can't delete anything. Reboot and post new diags, but you'll likely need to mount it manually with the skip_balance option or it will probably go read-only again (there's a mount sketch after this list). Alternatively, if you have backups, just re-format the pool.
  4. Realtek driver support on Linux has always been hit and miss; take a look at the post below, it might also help you:
  5. Please use the existing plugin support thread:
  6. Docker image is corrupt, delete and re-create.
  7. You can do a correcting parity check, but if parity is way out of sync it will be faster to re-sync it. To do that: stop the array, unassign parity, start and stop the array, re-assign parity, then start the array to begin the parity sync.
  8. Start by running a single stream iperf test in both directions to see if the issue is network related (commands sketched below the list).
  9. This will happen if, for example, that share exists (partially or fully) on a single-device pool.
  10. We can't see why the disks got disabled because of the reboots, but they look healthy, and two disks getting disabled at the same time is seldom a device problem. Since the emulated disks are mounting, and assuming the contents look correct, you can rebuild on top: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  11. It should work, assuming there's internet on eth0, maybe I'm missing something, someone else might have an idea.
  12. Once a device gets disabled it needs to be rebuilt, but first please post the diagnostics.
  13. That would make sense; hopefully it can be corrected in the BIOS. Sorry, I can't help with that, never used NFS.
  14. You need to use different subnets, e.g. 192.168.0.x for the 1Gb NIC and 10.10.10.x for the 10Gb NIC, and leave the gateway blank for the 10Gb one (addressing example after the list).
  15. Try a test with the diskspeed docker; a parity check must not be running to get correct results, and I'm not sure if it still works if you just pause it.
  16. More of an annoyance; it only happens when converting pools to RAID5/6. The GUI flickers several times before starting the balance. I first noticed it on one of my main servers and confirmed it in safe mode on a different server using multiple browsers, see attached video: 2022-05-16 17-10-39.mkv
  17. Unraid doesn't stripe data, so no file can be larger than a single disk. Not sure how to fix the VM problem; you'll likely need to use some OS-specific recovery tools.
  18. That won't be the problem: SAS1 = SATA2, so still 300MB/s per drive. I also checked a couple of disks and they are linking @ SATA3.
  19. Yes, after starting the array in maintenance mode; if it asks for -L, use it (repair sketch after this list).
  20. You can try what's mentioned in the link: "If all else fails ask for help on the btrfs mailing list or #btrfs on libera.chat". Other than that I can't help more with that error, since apparently all superblocks are damaged (superblock-check sketch below the list).
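
For item 1, a minimal sketch of grabbing the diagnostics from the console; the exact output path is an assumption, the command prints the real location when it finishes:

    # Run from the Unraid console or an SSH session
    diagnostics
    # Writes a zip such as /boot/logs/<servername>-diagnostics-<date>.zip (path assumed);
    # copy that file off the server and attach it to your forum post.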
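For item 3, a sketch of mounting the pool manually with skip_balance so the interrupted balance doesn't resume; the device name and mount point are placeholders, adjust to your pool:

    # Mount one member of the btrfs pool without resuming the interrupted balance
    mkdir -p /x                           # temporary mount point (assumed)
    mount -o skip_balance /dev/sdX1 /x    # replace sdX1 with a pool member partition
    # Free up space or cancel the stuck balance, then unmount:
    btrfs balance cancel /x
    umount /x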
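For item 8, a single-stream iperf3 sketch in both directions; host addresses are placeholders:

    # On the server (e.g. the Unraid box):
    iperf3 -s
    # On the client, test client -> server:
    iperf3 -c 192.168.0.10
    # Then the reverse direction (server -> client):
    iperf3 -c 192.168.0.10 -R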
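For item 14, a rough addressing sketch only to illustrate the subnet split; on Unraid this is normally configured under Settings > Network Settings rather than from the command line, and all addresses here are examples:

    # 1Gb NIC: normal LAN subnet, with gateway
    ip addr add 192.168.0.10/24 dev eth0
    ip route add default via 192.168.0.1
    # 10Gb NIC: separate subnet, no gateway
    ip addr add 10.10.10.10/24 dev eth1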
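For item 19, assuming the check in question is xfs_repair (the -L option belongs to it), a command-line sketch; the md device number is a placeholder and the GUI filesystem check does the same thing:

    # With the array started in maintenance mode:
    xfs_repair -v /dev/md1      # replace md1 with the device for the disk being checked
    # If it complains about a dirty log and suggests -L, re-run with it:
    xfs_repair -vL /dev/md1     # -L zeroes the log; some recent changes may be lost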
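For item 20, a sketch of checking and trying to recover the btrfs superblocks before giving up; the device is a placeholder, and if every copy really is damaged these will fail too:

    # Dump all superblock copies to see which, if any, are intact:
    btrfs inspect-internal dump-super -a /dev/sdX1
    # Try to restore a good copy over the damaged ones:
    btrfs rescue super-recover -v /dev/sdX1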