Everything posted by JorgeB

  1. Appears to be CPU limited; check the scaling governor in use:
     cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
     You can also check the current CPU frequency:
     watch -n 1 grep MHz /proc/cpuinfo
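     If the governor is set to powersave, a minimal sketch of switching all cores to performance, assuming the standard cpufreq sysfs interface (the change reverts on reboot):
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo performance > "$g"   # apply the performance governor to each core
        done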
  2. That suggests a flash drive problem; try re-creating it.
  3. Please post the diagnostics: Tools -> Diagnostics, after trying to add the new device.
  4. Enable syslog mirror to flash, then post that after a lockup.
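     The option is under Settings -> Syslog Server; once enabled, a sketch of pulling the mirrored log after the lockup, assuming the usual location on the flash drive:
        tail -n 200 /boot/logs/syslog   # last entries written before the lockup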
  5. sdg is failing, and because of that the balance aborted. Since there's a lot of data using the single profile on that device, you can't just remove it; copy everything you can from the pool, then re-format it with just the good device.
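     A minimal sketch of the rescue copy, assuming the pool is mounted at /mnt/cache and copying to a placeholder array location; expect read errors on files that lived on sdg:
        rsync -av --progress /mnt/cache/ /mnt/disk1/pool_rescue/   # grab everything still readable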
  6. Diags are from after rebooting, so we can't see what happened, but swap the SATA cable with another disk on that port; if it fails again it could be a bad port.
  7. Make sure UEFI boot was enabled during flash creation; it's disabled by default.
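     If the flash drive was already created, UEFI boot can also be enabled after the fact by renaming the EFI- folder on the flash (the trailing dash is what disables it); from the Unraid console, roughly:
        mv /boot/EFI- /boot/EFI   # drop the trailing dash to enable UEFI boot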
  8. Try another USB controller for the NVMe device, or ideally don't use USB for that.
  9. I don't remember seeing that before; you can try renaming/deleting network-rules.cfg on the flash drive, then reboot.
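     From the console, a sketch that renames rather than deletes, so the old rules are kept as a backup (the path is the standard flash config location):
        mv /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
        reboot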
  10. Those errors are usually not cable/connection related; keep an eye on them, because if they continue to climb it's not a good sign.
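     A simple way to track them, assuming they are SMART attribute counters on a disk such as /dev/sdb (device name is a placeholder):
        smartctl -A /dev/sdb   # re-run periodically and compare the raw values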
  11. Very likely the Adaptec controller changed the MBR on the disks; don't use RAID controllers with Unraid. Please post diags after array start.
  12. We ask to run memtest when checksum errors are detected, but that's not your issue; your problem is that one of the NVMe devices dropped offline:
      Aug 10 21:44:19 Vortex kernel: nvme nvme1: I/O 130 QID 1 timeout, aborting
      Aug 10 21:44:19 Vortex kernel: nvme nvme1: Abort status: 0x0
      Aug 10 21:44:26 Vortex kernel: nvme nvme1: I/O 183 QID 1 timeout, reset controller
      Aug 10 21:45:33 Vortex kernel: nvme nvme1: I/O 15 QID 0 timeout, reset controller
      Aug 10 21:46:27 Vortex kernel: nvme nvme1: Device not ready; aborting reset, CSTS=0x1
      Reboot/power cycle to see if it comes back online and run a scrub, also see here for better pool monitoring.
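     After the reboot, a sketch of the scrub from the console, assuming a btrfs pool mounted at /mnt/cache (the Scrub button on the pool's device page does the same):
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache   # progress plus any uncorrectable error counts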
  13. Yes, looks like that common issue; upgrading to v6.10 might help, or see here: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  14. You should always post the complete diagnostics. Are you using the corefreq plugin? If yes, try without it.
  15. Same place: Settings -> Network Settings -> Interface Rules (reboot required)
  16. Difficult to say; one thing you can try is to boot the server in safe mode with all Docker containers/VMs disabled and let it run as a basic NAS for a few days; if the problem doesn't reappear, start turning on the other services one by one.
  17. Just upgraded a test server and I don't see one of my pools on the main GUI page, even after booting in safe mode, though the pool still works and shows up on the dashboard: test2-diagnostics-20210811-1502.zip
  18. That's normal if using a filesystem like exFAT, which doesn't support permissions; you can "skip all" or just disable "preserve attributes" for the operation.
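     If the copy is scripted rather than done through the GUI, a minimal sketch with rsync that simply doesn't carry over the attributes exFAT can't store (paths are placeholders):
        rsync -rtv --no-perms --no-owner --no-group /mnt/user/share/ /mnt/disks/exfat_drive/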
  19. Enable syslog mirror to flash, then post that after a crash.
  20. Start by running memtest to see if you have bad RAM, then run a scrub on the cache filesystem; all corrupt files will be listed in the syslog and will need to be deleted or restored from backups.
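     A sketch of the scrub plus pulling the affected file names from the log afterwards, assuming a btrfs cache mounted at /mnt/cache:
        btrfs scrub start -B /mnt/cache        # -B waits until the scrub finishes
        grep "checksum error" /var/log/syslog  # scrub logs corrupt files with their paths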
  21. The pool is raid0, so it can't be fixed by redundancy; if corruption is being detected it's most likely real, but if everything appears to be working correctly you can copy the vdisk outside the pool, then copy it back overwriting the existing one, which will get rid of the errors. The VM must be off.
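     A minimal sketch of the round trip, with placeholder paths for the pool and a temporary array location (stop the VM first):
        cp /mnt/cache/domains/MyVM/vdisk1.img /mnt/disk1/tmp/    # copy out while the VM is off
        cp /mnt/disk1/tmp/vdisk1.img /mnt/cache/domains/MyVM/    # copying back rewrites the data with fresh checksums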
  22. Enable syslog mirror to flash, then post that log after a crash.