
JorgeB (Moderators)

Posts: 67,600 · Days Won: 707

Everything posted by JorgeB

  1. We don't recommend USB for array or pool devices. This is one reason, but mainly it's because they are prone to dropping offline and USB is generally very bad at error handling.
  2. Looks like a GUI problem, but I can't reproduce the issue. Please reboot in safe mode and try again, just to rule out any plugins.
  3. Try a manual balance to see if it's a GUI issue; on the console type: btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/archive_one (see the sketch for this item after the list to check its progress).
  4. You can try again, but it will likely continue to corrupt data; if you don't correct parity, it should be able to rebuild the disk correctly, without corruption.
  5. Likely USB related; USB bridges don't always support spin down and/or SMART.
  6. Strange, since SMART is working for the diags. Under Settings -> Disk Settings -> Global SMART monitoring, is the default SMART controller type set to automatic? If yes, change it to a different one, apply, then change back to automatic and apply again (see the sketch for this item after the list for testing from the console).
  7. With only one data drive, parity works as a mirror; you can assign either one as disk1 and the other as parity, and before array start check "parity is already valid".
  8. Yes, Unraid needs to format any array/pool disk before use.
  9. Please use the existing plugin support thread:
  10. Please post the diagnostics: Tools -> Diagnostics
  11. Corrupt files can't be moved by the mover; you should delete them or restore from backup.
  12. Please don't cross-post, continue discussing below:
  13. Make sure sleep/hibernate is disabled in the VM.
  14. That's too slow even for SMR, but that specific model has been known to suffer from very bad performance in some cases. You'd need to do some tests to see if it's a specific disk (see the sketch for this item after the list), but yeah, if possible avoid SMR for parity; at least that will make it easier to compare write performance between SMR and CMR array disks.
  15. Apr 4 04:40:03 floserver kernel: BTRFS error (device nvme0n1p1): error writing primary super block to device 2
      The syslog doesn't cover the beginning of the problem, but this suggests NVMe devices are dropping offline; see here for better pool monitoring, and the sketch for this item after the list. Also post new diags after the reboot.
  16. Balance of an empty pool should take a few seconds. The balance start command is missing some arguments, unclear why; please start a thread in the general support forum and attach the complete diagnostics.zip.
  17. Why are files not being moved by the Mover? These are some common reasons the Mover is not working as expected:
      • If using the mover tuning plugin, the first thing to do is to check its settings, or remove it and try without it just to rule it out.
      • The use cache pool option for the share(s) is not correctly set. For 6.11.5 or older see here for more details, but basically cache=yes moves data from pool to array, cache=prefer moves data from array to pool, and the cache=only and cache=no options are not touched by the Mover. For v6.12.0 or newer, check that the shares have the pool as primary storage, the array as secondary storage, and the mover action set to move from pool to array.
      • Files are open, they already exist, or there's not enough space in the destination. Enable Mover logging (Settings -> Scheduler -> Mover Settings) and it will show in the syslog what the error is. If it's a not-enough-space error, note that split level overrides allocation method; also, minimum free space for the share(s) must be correctly set, the usual recommendation being to set it to twice the max file size you expect to copy/move to that share.
      If none of these help, enable Mover logging, run the Mover, download the diagnostics and please attach them to a new or your existing thread in the general support forum, also mentioning the share name(s) that you want/expect data to be moved (see the sketch for this item after the list).
  18. Yep, the array should be fine; at last mount all disks were clean. Btrfs will show any accumulated errors at mount time (unless you clear them, see the sketch for this item after the list), e.g. this was from the pool:
      Apr 5 16:17:38 M1171-NAS kernel: BTRFS info (device nvme0n1p1): bdev /dev/sdb1 errs: wr 283405, rd 112295, flush 0, corrupt 0, gen 0
  19. You can see here for some numbers. You can daisy chain; whether or not it's a bottleneck depends on the models used and the number of drives.
  20. All the info for that is in the link above; you can ask if there are any doubts.
  21. That's a problem with the mover tuning plugin. If you change any setting and click apply it should start to work; if it doesn't, remove it or post in the existing plugin support thread.
  22. 41:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
          Subsystem: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263]
          Kernel driver in use: vfio-pci
          Kernel modules: nvme
      42:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
          Subsystem: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263]
          Kernel driver in use: vfio-pci
          Kernel modules: nvme
      These two disappeared because they are bound to vfio-pci; unbind them or delete config/vfio-pci.cfg (see the sketch for this item after the list).
  23. If they are new corruptions there's still a problem; if it only happens on that disk it could also be a disk issue. Memtest ideally should run for 24h, though in most cases when it detects a problem it will do so after a few hours. Also note that it can't detect all issues, so a negative result is not a confirmation there's no problem, while a positive result confirms there is one.
  24. You should check a filesystem if there's an error about it in the syslog (see the sketch for this item after the list).
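
For item 3, a minimal sketch for keeping an eye on that manual balance, assuming the pool is mounted at /mnt/archive_one as in the command; the filesystem usage output shows which profiles the data and metadata block groups currently use, so the conversion is finished when only the new profiles remain:
      btrfs balance status /mnt/archive_one
      btrfs filesystem usage /mnt/archive_one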
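
For item 6, a hedged way to check SMART from the console while experimenting with controller types; /dev/sdX is a placeholder for the affected drive and -d sat is just one of the device types smartctl accepts:
      smartctl -a /dev/sdX           # let smartctl autodetect the device type
      smartctl -d sat -a /dev/sdX    # force SAT pass-through, common for USB bridges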
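
For item 14, one rough way to compare sequential write speed between array disks, only a sketch: the disk share /mnt/disk1 and the 8GiB size are placeholders, the result includes parity overhead, and the test file should be deleted afterwards:
      dd if=/dev/zero of=/mnt/disk1/speedtest.bin bs=1M count=8192 oflag=direct status=progress
      rm /mnt/disk1/speedtest.bin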
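
For item 15, a quick manual check of the pool's per-device error counters, assuming the pool is mounted at /mnt/cache (adjust the path); non-zero values mean a device returned errors or dropped offline at some point:
      btrfs dev stats /mnt/cache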
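
For item 17, a few illustrative console checks after enabling Mover logging; /mnt/cache and sharename are placeholders for the actual pool and share:
      grep -i mover /var/log/syslog      # what the Mover logged, including any errors
      df -h /mnt/cache                   # free space left on the pool
      lsof +D /mnt/cache/sharename       # open files that would prevent moving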
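
For item 18, those counters in the mount message persist until cleared; to view and then reset them once the underlying problem is fixed (the pool mount point is a placeholder):
      btrfs dev stats /mnt/cache
      btrfs dev stats -z /mnt/cache      # prints the counters and resets them to zero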
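
For item 22, a sketch for confirming the binding and finding the boot-time config, assuming the stock layout where the flash drive's config folder is mounted at /boot/config:
      lspci -nnk -s 41:00.0              # shows "Kernel driver in use: vfio-pci" while bound
      cat /boot/config/vfio-pci.cfg      # PCI IDs bound at boot; edit or delete, then reboot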
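
For item 24, an illustrative way to look for filesystem errors in the syslog; the pattern is only an example and not exhaustive:
      grep -iE 'btrfs error|xfs.*(corrupt|error)' /var/log/syslog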