
JorgeB (Moderators)
Posts: 67,831 · Days Won: 708

Everything posted by JorgeB

  1. No, but they are likely the result of the previous check, where those devices could not be read. If it's a correcting check, let it finish, then run another one. Post new diags.
  2. This happens because the old disk1 is confusing the pool; it's this line: Wipe or disconnect that disk and the pool will mount again.
  3. A file recovery utility like UFS explorer might help, avoid writing anything else to the disks for now.
  4. If the model/sn stays the same Unraid won't notice; if it changes you can do a new config.
  5. This can happen with some raid controllers, or with an LSI HBA using a very old firmware, please post the diagnostics.
  6. Share config looks OK, please try creating a new test share with use cache = yes, leave all the other settings as default, then see if there's any difference.
  7. I would not call it a bug: the HBA stops responding. This is usually a hardware issue, but it could also be firmware related. A reboot/reset should bring it back online, until it happens again (or not). It's more likely to happen during heavy load, like a parity check, so do what I recommended and run another check; maybe it was a one-time thing.
  8. Switching to ipvlan should fix the problem (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
  9. We usually don't like external links, but I checked the log and it was an HBA problem: Jul 17 09:32:48 Tower kernel: mpt2sas_cm1: SAS host is non-operational !!!! No idea if this is the latest firmware, so check the Broadcom site: Jul 17 08:16:29 Tower kernel: mpt2sas_cm1: WarpDrive: FWVersion(113.05.03.01), ChipRevision(0x03), BiosVersion(110.00.01.00) Other than that, make sure it's well seated and sufficiently cooled; you can also try a different slot if available.
  10. Jul 31 01:00:57 KENSPLACE nginx: 2022/07/31 01:00:57 [error] 5900#5900: *535614 limiting requests, excess: 20.607 by zone "authlimit", client: 10.2.250.183, server: , request: "PROPFIND /login HTTP/1.1", host: "kensplace" Jul 31 01:00:57 KENSPLACE nginx: 2022/07/31 01:00:57 [error] 5900#5900: *535616 limiting requests, excess: 20.587 by zone "authlimit", client: 10.2.250.183, server: , request: "PROPFIND /login HTTP/1.1", host: "kensplace" Do you know this host? It keeps flooding the syslog.
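A quick way to see who is tripping the nginx "authlimit" zone is to tally the client IPs from those log lines. This is a sketch: on the server you would point it at /var/log/syslog, but the demo below runs on an inline sample of the lines quoted above so it is self-contained.

```shell
# Sample of the flooding lines (in practice, read /var/log/syslog instead)
cat > /tmp/syslog.sample <<'EOF'
Jul 31 01:00:57 KENSPLACE nginx: 2022/07/31 01:00:57 [error] 5900#5900: *535614 limiting requests, excess: 20.607 by zone "authlimit", client: 10.2.250.183, server: , request: "PROPFIND /login HTTP/1.1", host: "kensplace"
Jul 31 01:00:57 KENSPLACE nginx: 2022/07/31 01:00:57 [error] 5900#5900: *535616 limiting requests, excess: 20.587 by zone "authlimit", client: 10.2.250.183, server: , request: "PROPFIND /login HTTP/1.1", host: "kensplace"
EOF

# Tally requests per client that tripped the "authlimit" zone:
# filter the zone, extract the "client: <ip>" token, count occurrences.
grep 'zone "authlimit"' /tmp/syslog.sample \
  | grep -o 'client: [0-9.]*' \
  | sort | uniq -c | sort -rn
```

Whatever IP tops that list is the machine to track down (here it's 10.2.250.183, hammering /login over WebDAV).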
  11. Initially it's logged as a disk problem, but the disk dropped offline so there's no SMART. Power cycle the server (just rebooting might not do it) to see if the disk comes back online; if yes, post new diags.
  12. Enable the syslog server and post that after a crash.
  13. That suggests the data was moved or deleted; is the server exposed to the internet? Also search the array for a known missing file/folder name in case it was moved.
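Searching the array for a known name can be done in one command; on Unraid the disks, pools, and user shares are all mounted under /mnt. The sketch below uses a hypothetical filename and demonstrates the same find invocation on a scratch directory so it is self-contained:

```shell
# On the server you would search the real mounts, case-insensitively:
#   find /mnt -iname '*holiday_2019*'
# Self-contained demo of the same idea on a scratch tree:
mkdir -p /tmp/array-demo/disk1/Photos
touch /tmp/array-demo/disk1/Photos/Holiday_2019.jpg

# -iname matches regardless of case, so it finds the file even if it
# was moved and the name's capitalization changed along the way.
find /tmp/array-demo -iname '*holiday_2019*'
```

If the file turns up under an unexpected share or disk, it was moved rather than deleted.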
  14. There's basically nothing in the syslog. As for the sync errors, run another check; if the last one was correcting, it should find 0 errors.
  15. nvme1 (cache2) failed: === START OF SMART DATA SECTION === SMART overall-health self-assessment test result: FAILED! - available spare has fallen below threshold - media has been placed in read only mode
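To confirm a verdict like that yourself, `smartctl -H` prints the overall-health result (e.g. `smartctl -H /dev/nvme1` on the server; the device name is an assumption, check yours). The self-contained sketch below greps a saved copy of the output quoted above, which is also how you'd check inside a diagnostics zip:

```shell
# Saved SMART output, matching the failure quoted above
cat > /tmp/smart.sample <<'EOF'
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
- available spare has fallen below threshold
- media has been placed in read only mode
EOF

# A healthy drive reports "PASSED" on this line; anything else means
# the drive should be replaced.
grep 'test result:' /tmp/smart.sample
```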
  16. Possibly the NIC detection script got confused, probably nothing you did, glad it's working now.
  17. Aug 5 18:38:44 holy-grail shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed. https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/ Some workarounds discussed there, mostly disable NFS if not needed or you can change everything to SMB, can also be caused by Tdarr if you use that.
  18. There's no eth0 in the interface rules; delete/rename /config/network-rules.cfg and reboot.
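On Unraid that file lives on the flash drive at /boot/config/network-rules.cfg, and renaming instead of deleting keeps a backup you can compare against after the reboot. A sketch (the rule content below is made up for the demo):

```shell
# The real thing, on the server:
#   mv /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
#   reboot
# Self-contained demo of the rename on a scratch copy:
mkdir -p /tmp/flash/config
echo 'example interface rule' > /tmp/flash/config/network-rules.cfg  # hypothetical content
mv /tmp/flash/config/network-rules.cfg /tmp/flash/config/network-rules.cfg.bak

ls /tmp/flash/config  # only the .bak remains; the file is regenerated on reboot
```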
  19. Try updating to v6.11.0-rc2, the much newer kernel might help if it's some compatibility issue.
  20. The point of using a molex adapter is to make sure there is no 3.3v line connected to the drive; some drives won't power up if there is. Google "wd 3.3v pin".