Everything posted by JorgeB

  1. That suggests a plugin issue; uninstall all plugins (or rename all *.plg files, see the sketch after this list) to see if you can find the culprit.
  2. Didn't see that part, but that's not why it's not working. To fix it: stop the array, stop the Docker/VM services, unassign all cache devices, start the array to make Unraid "forget" the current cache config, stop the array, reassign all cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), re-enable the Docker/VM services, and start the array.
  3. The disk dropped offline so there's no SMART, but this is usually a power/connection problem; check/replace the cables and post new diags. You should enable system notifications so you are notified as soon as there's a problem.
  4. Any image-type file, like vdisks, the docker image, etc., can grow with time. It looks like you've already moved the vdisks, so try recreating the docker image if it's there; there are also reports that defragging the filesystem helps, as long as you are not doing snapshots. (See the sketch after this list for checking how much space an image actually uses.)
  5. You are correct, the behavior changed in the current release; I'm going to find out when and report it, since, as you've experienced, this is a problem. Not really my area, but I'll also inquire about that; it would make things easier if that's possible to do.
  6. That should not happen; VMs should never autostart if array autostart is disabled. It's a constant user complaint, i.e., "VMs don't start with autostart enabled", because they only start at first boot if array autostart is enabled.
  7. SMB with v6.11 should perform much better than earlier releases; try it and see.
  8. If you disable array auto start the VMs won't auto start after boot.
  9. The log shows shfs[10455]: segfault at 10 ip, so shfs crashed. It looks NFS related; disable NFS if you can.
  10. If you have a chance, please try updating back to v6.11.5, but then start the array in safe mode to rule out any plugin interference.
  11. Still having issues with sdh, which I assume is your cache device.
  12. Looks like this issue: https://forums.unraid.net/bug-reports/stable-releases/crashes-since-updating-to-v611x-r2153/?do=getNewComment&d=2&id=2153
  13. Stop the docker service and copy from the appdatabackup folder to appdata; you can use Midnight Commander or the Dynamix file explorer (or the rsync sketch after this list).
  14. Assuming you are using btrfs only for the pool(s), you can try clearing the space cache. It should be pretty safe to run, but it's always good to make sure backups are up to date. With the array stopped:
      btrfs check --clear-space-cache v2 /dev/sdX1
      If this is an old pool that might also have a v1 space cache, run:
      btrfs check --clear-space-cache v1 /dev/sdX1
      It won't hurt if there's no v1 cache. (See the sketch after this list for confirming which device belongs to the pool.)
  15. You cannot remove a drive from a pool using the single profile (or another non-redundant profile); please post current diags after a new reboot and before array start.
  16. MC won't work AFAIK. If you have enough space on cache you can use:
      mv /mnt/path/to/vdisk.img /mnt/path/to/vdisk.old
      cp --sparse=always /mnt/path/to/vdisk.old /mnt/path/to/vdisk.img
      rm /mnt/path/to/vdisk.old
      If there's not enough space on cache, move the vdisk to the array and then copy it back with --sparse=always.
  17. Disk problem. IMHO the best bet is to clone that disk with ddrescue and then run xfs_repair again; you can then use the cloned disk together with old disk3, since that one looks healthy, and re-sync parity. (See the ddrescue sketch after this list.)
  18. Can you see where in the boot process it reboots? Also, does safe boot still work correctly?
  19. Not sure, I don't remember previous complaints, but let me see if I can duplicate that.
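
Sketch for item 1, assuming the usual Unraid plugin location of /boot/config/plugins (adjust if yours differs): rename the .plg files so the plugins are skipped on the next boot, then rename them back one at a time to find the culprit.

    # Temporarily disable all plugins; takes effect on the next boot.
    cd /boot/config/plugins
    for f in *.plg; do mv "$f" "$f.disabled"; done    # rename back to .plg to re-enable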
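
Sketch for item 4: a quick way to compare an image file's apparent size with the space it actually occupies. The path is only an example; point it at your own vdisk or docker image.

    du -h --apparent-size /mnt/user/domains/VM1/vdisk1.img    # size the guest sees
    du -h /mnt/user/domains/VM1/vdisk1.img                    # space actually allocated on disk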
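
Sketch for item 13, assuming the backup is a plain folder copy at /mnt/user/appdatabackup (adjust both paths to match your setup) and that the docker service is already stopped.

    # Trailing slashes copy the contents rather than the folder itself.
    rsync -a /mnt/user/appdatabackup/ /mnt/user/appdata/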
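
Sketch for item 14: confirming which /dev/sdX1 belongs to the btrfs pool before clearing the space cache. Device names shown by these commands replace the sdX placeholder.

    btrfs filesystem show                          # lists each btrfs filesystem and its member devices
    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT     # cross-check device names and sizes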
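
Sketch for item 17, assuming /dev/sdX is the failing disk and /dev/sdY the clone target; both are placeholders, double-check them, because the target is overwritten. The map file lets ddrescue resume if it's interrupted.

    ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map    # -f is needed when writing to a block device
    xfs_repair -v /dev/sdY1                             # then repair the filesystem on the clone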