Everything posted by JorgeB

  1. No, assign only the other device (currently sdc). I mean there can't be that warning; that's why we start the array first without any cache device assigned.
  2. Doesn't look very good since the pool is being detected as a single device. If it was a redundant pool (the default) you can try mounting just the other device; to do that: stop the array, disable Docker/VM services if they are using the cache pool, unassign the current cache device, start the array to make Unraid "forget" the current cache config, stop the array, reassign the other cache device (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), then start the array. If it doesn't work post new diags (there's also a manual degraded-mount sketch after this list).
  3. Disk3 appears to be failing, and most likely is since it's the infamous ST3000DM001; you can run an extended SMART test to confirm (see the smartctl example after this list).
  4. Run it again without -n or nothing will be done; if it asks for it, use -L (see the xfs_repair example after this list).
  5. There won't be any output from the command, but there are a couple of ways to confirm it's working: look at the syslog, where there will be something like this:
     Jun 26 13:38:23 Tower1 kernel: BTRFS info (device nvme1n1p1): enabling free space tree
     Jun 26 13:38:23 Tower1 kernel: BTRFS info (device nvme1n1p1): using free space tree
     or check the output of the "mount" command and look at your cache device:
     /dev/nvme1n1p1 on /mnt/cache type btrfs (rw,noatime,nodiratime,space_cache=v2)
  6. Please post the diagnostics: Tools -> Diagnostics
  7. Please post the diagnostics: Tools -> Diagnostics
  8. The problem on disk1 appears to be a disk issue, run an extended SMART test; disk2 looks like a connection problem, replace the SATA cable (and also check the power one).
  9. Most likely just garden-variety filesystem corruption; if it happens again soon re-format the flash drive, and if it still happens after that then replace it.
  10. That's one of the strange things about this, if previously the 10TB disk was precleared it would be all zeros, so after the parity swap the new extra parity section would still be correct, i.e., all zeros, even if for some reason Unraid was failing to zero the extra space when it did the parity copy.
  11. We converted only the metadata to the single profile because the other device was already missing (despite still being assigned), but Unraid couldn't finish deleting it because of the dual metadata profiles.
  12. Usually you just need to do this, but for some reason there were dual metadata profiles.
  13. That's normal. There's a checkbox you must click to allow array start with a missing cache device.
  14. My main suspect would be that specific Seagate model (or that disk together with an LSI; it might spin down fine on a different controller):
      Model Family:     Seagate IronWolf
      Device Model:     ST12000VN0007-2GS116
      It appears only disks from that model had issues, at least this time. I also have multiple LSIs from the same/similar models and use that same Intel expander, and there are no spindown issues. You can look for a firmware update for those disks; IIRC Seagate released new firmware for the 10TB IronWolf model because it had issues with LSI controllers, though I'm not sure if that applies to the 12TB model.
  15. After all is done you can do a complete device trim on the 2TB SSD to completely wipe it, use: blkdiscard /dev/sdX
  16. It's already removed from the pool, I didn't notice it was still assigned, so it should be OK to just unassign it, but to play it safer do this: stop the array, disable Docker/VM services if they are using the cache pool, unassign all cache devices, start the array to make Unraid "forget" the current cache config, stop the array, reassign the smaller cache device only (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning), re-enable Docker/VMs if needed, then start the array.
  17. Thanks for doing this, just hope the issue happens again, or it will be all for nothing.
  18. That's what we want. With the array started, type this on the console:
      btrfs balance start -f -mconvert=single /mnt/cache
      When done (it should only take a few seconds), stop and restart the array, then after a few minutes check whether the missing device was deleted; if not, post new diags (see the verification commands after this list).
  19. Use the current one, but it should be the same with the new beta since AFAIK there aren't any changes that would affect a parity swap.
  20. Locking since OP reposted in the plugin thread.
  21. If the 1TB is marginal/failing it's best to use the parity swap procedure.
  22. You just need to move the vdisk and also need to update the paths in the VM XML.
  23. Sorry, no idea what this error means, you can try again after booting in safe mode.
  24. Mover logging is disabled, enable it and run the mover again. Also, if possible, stop Dropbox from spamming the log with errors.
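
For post 2, a minimal sketch of manually checking the surviving device of a redundant btrfs pool from the console, assuming the remaining cache device is /dev/sdX1 (a placeholder, substitute the real device) and you only want a read-only look at the data; this is not the GUI procedure described above, just a way to verify the filesystem is readable:

    # list btrfs filesystems and which member devices are present/missing
    btrfs filesystem show
    # try a read-only, degraded mount of the surviving device on a temporary mountpoint
    mkdir -p /x
    mount -o ro,degraded /dev/sdX1 /x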
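
For posts 3 and 8, a sketch of running the extended SMART test from the console (it can also be started from the disk's page in the GUI); /dev/sdX is a placeholder for the suspect disk:

    # start the long (extended) self-test; the disk stays usable while it runs
    smartctl -t long /dev/sdX
    # once the estimated time has passed, check the self-test log and attributes
    smartctl -a /dev/sdX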
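
For post 4, a sketch of the xfs_repair invocation, assuming the array disk appears as /dev/md1 (adjust the number to the disk being checked) and that the array is started in maintenance mode:

    # -n only checks and changes nothing, so run it without -n to actually repair
    xfs_repair /dev/md1
    # only if it stops and asks for it: zero the log and retry (may lose the most recent transactions)
    xfs_repair -L /dev/md1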
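
For post 18, a few commands to verify the result of the convert balance, assuming the pool is mounted at /mnt/cache:

    # metadata should now show the single profile
    btrfs filesystem df /mnt/cache
    # per-device allocation; the missing device should no longer be listed after the restart
    btrfs device usage /mnt/cache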