Everything posted by JorgeB

  1. There are a lot of XFS-related call traces, but they don't identify the filesystem, so run xfs_repair (without -n) on all XFS filesystems; the hard crashing could be related to this:
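     For reference, a minimal sketch of the repair from the console, with the array started in maintenance mode; the mdX number matches the disk number (on 6.12 and later the device is md1p1 rather than md1), so disk1 here is just an example:
     xfs_repair -v /dev/md1
     Repeat for each XFS disk, then start the array normally and check each disk's lost+found for anything the repair moved there.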
  2. Start by running chkdsk /f on it.
  3. That's not the problem here; you are using HBAs in IT mode, they are just not detecting any disks, which is likely a connection problem. Since they have a BIOS installed you can also look for the disks in the HBA BIOS; if they are not detected there they also won't be detected by Unraid.
  4. Technically yes, but nothing was transferred because the files already existed on the destination, and in that case they are still deleted from the source, the same as if you did it with a different source and dest. E.g., with:
     rsync -av /source /dest
     all files will be transferred; if you then run:
     rsync -av --remove-source-files /source /dest
     nothing will be transferred, but all source files will still be deleted (see the sketch below).
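     A safe way to see this behavior with throwaway paths: -n (dry run) previews what rsync would do before the destructive pass:
     rsync -avn --remove-source-files /source /dest
     rsync -av --remove-source-files /source /dest
     Even when every file is skipped as already existing on the destination, the second command still removes the source copies.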
  5. Not for that expander, it's powered by the PCIe interface.
  6. Those are very old; see the link above to update to 20.00.07.00.
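     As a sketch, assuming a SAS2 HBA and that the 20.00.07.00 IT firmware image from that link has been copied to the flash drive (2118it.bin is just an example name for a 9211-8i; use the file for your model):
     sas2flash -listall
     sas2flash -o -f 2118it.bin
     Run -listall again afterwards to confirm the version reads 20.00.07.00.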
  7. It can be with RAID controllers; please post the diagnostics.
  8. This one isn't harmless: without a trailing slash after DLs you're transferring from /mnt/disk6/DLs to /mnt/disk6/DLs, i.e. the same source and dest. That would be harmless without --remove-source-files, but not with it, since nothing will be transferred but everything will be deleted. Assuming you ran the commands in the order posted, all data in that share on all disks except disk9 would be deleted.
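     The trailing-slash semantics, with made-up paths for illustration: without it rsync copies the folder itself, with it only the folder's contents:
     rsync -av /mnt/disk6/DLs /mnt/disk9/DLs     (creates /mnt/disk9/DLs/DLs)
     rsync -av /mnt/disk6/DLs/ /mnt/disk9/DLs    (copies the contents into /mnt/disk9/DLs)
     Always double-check the resolved source and dest before adding --remove-source-files.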
  9. If you don't need it, try disabling IOMMU; the error appears to be related to that together with the LSI HBA. Since it's a MicroServer I assume you can't install the HBA in a different PCIe slot, but if you can, also try that.
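     If the BIOS has no IOMMU toggle, it can also be disabled from the boot line; a sketch for an Intel CPU, edited under Main -> Flash -> Syslinux Configuration (use amd_iommu=off on AMD):
     append intel_iommu=off initrd=/bzroot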
  10. IIRC network.cfg is only re-created after changing something, but network-rules.cfg should have been created after the reboot, so not sure what the problem is.
  11. You should still change that; something is using macvlan, and those call traces usually result in a total server crash later.
  12. network-rules.cfg is missing; it should exist when there's more than one NIC. Delete/rename network.cfg (in /boot/config) and reboot to see if it's created.
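     From the console, a minimal sketch of that step, assuming the flash drive is mounted at /boot:
     mv /boot/config/network.cfg /boot/config/network.cfg.bak
     reboot
     After rebooting, check whether network.cfg and network-rules.cfg are back in /boot/config.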
  13. This, just check you disconnected the correct one.
  14. It should be possible to disable it in the BIOS, but only if that M.2 slot is shared with a PCIe slot and you can set which one to use.
  15. Read-only, though arguably it should be correcting by default. The latter: block group usage is the value used in -dusage, and it means that only data block groups with usage under the set percentage will be balanced. Metadata is not balanced, as that's usually not needed, or even recommended, unless there's a specific reason to do it.
      A few more observations: 50% should be a good default for most use cases. Pools that have many extents per block group, with frequent changes/deletes to some, might need a higher value to keep a good data usage ratio; for most cases anything above 75% should be fine. So check the data usage ratio after the scheduled balance runs (once a month should be enough for most cases), and if it drops below 75% you can increase the "block group usage" value a little. There's usually no need to go very high: the higher it's set, the more data will be balanced, resulting in unnecessary wear for flash devices. With a value of 100 all data will be balanced, i.e. re-written, the same as a full balance, except a full balance also balances the metadata.
      Also note that the usage ratio by itself is not critical; it should be viewed in conjunction with the available unallocated space. As long as there's some it's not a problem, but it's good practice to keep it under control, and if the pool is run close to full, or frequently filled up, it might be good to aim for a higher data usage ratio.
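      For reference, the underlying commands: a sketch assuming the pool is mounted at /mnt/cache, balancing only data block groups under 50% usage and checking the allocation afterwards:
      btrfs balance start -dusage=50 /mnt/cache
      btrfs filesystem usage /mnt/cache
      Compare data Used against data Size in the output to get the usage ratio discussed above.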
  16. If the device is in its own IOMMU group and can be bound to vfio-pci, that's also an option; the important part is that the device is not visible to Unraid.
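      On recent releases the binding can be done by ticking the device in Tools -> System Devices, which writes /boot/config/vfio-pci.cfg; a hand-edited sketch, assuming a placeholder device at 0000:03:00.0 with ID 8086:1539 (check System Devices for the real values):
      BIND=0000:03:00.0|8086:1539
      A reboot is needed for the binding to take effect.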
  17. You can try this: physically disconnect the other NVMe device, the one that is currently unassigned, and try again.
  18. Several things are crashing; it looks more like a hardware issue, or some compatibility issue with the kernel, but since it happens with very different kernels I would suspect hardware.
  19. That's a good place to start based on the corruption found by btrfs.
  20. Is this to the array or cache? That's about right for an array transfer without turbo write.
  21. May 8 15:38:25 Mongo kernel: macvlan_broadcast+0x116/0x144 [macvlan]
      May 8 15:38:25 Mongo kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
      Macvlan call traces are usually the result of having dockers with a custom IP address. Switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info:
      https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here:
      https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/