Everything posted by JorgeB

  1. And it worked before with -rc2? If so, it should keep working; please post the diagnostics from -rc2.
  2. Jun 15 07:25:22 Tower emhttpd: shcmd (485): /sbin/wipefs -a /dev/sdc1
     Jun 15 07:25:22 Tower root: wipefs: error: /dev/sdc1: probing initialization failed: Device or resource busy
     Strange, for some reason the device failed to wipe due to being busy. With the array still running, see if you can do it manually now by typing:
     wipefs -a /dev/sdc1
     Then stop the array and start it again; if it still shows busy, disconnect the device physically and start the array.
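If the wipe above keeps failing with "Device or resource busy", something still has the partition open. A quick way to find the holder before retrying (a sketch, assuming the standard fuser/lsof tools are present and run as root; /dev/sdc1 is the device from the log above):

```shell
# Show any processes that still have the partition open
fuser -v /dev/sdc1
# Alternative view of open handles on the device
lsof /dev/sdc1
# Check for a stale mount or block-layer holder keeping it busy
lsblk /dev/sdc
# Once nothing holds it, retry the wipe
wipefs -a /dev/sdc1
```

If fuser/lsof show nothing, a kernel-level holder (an old mount, md, or LVM) may still reference the partition; stopping the array usually releases it.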
  3. JorgeB

    NVME error

    I suspect it's the forward slash, or maybe the period, in the ID_MODEL and/or ID_SERIAL that's causing the issues; neither of those characters is usually part of an NVMe device model, as far as I can remember.
  4. Unfortunately it looks like there's nothing relevant logged; I assume the problem occurred between these last lines?
     Jun 14 08:14:24 vkhpsrv01 root: Fix Common Problems: Warning: Syslog mirrored to flash
     Jun 14 12:47:14 vkhpsrv01 webGUI: Successful login user root from 31.4.184.37
     Jun 14 19:55:17 vkhpsrv01 webGUI: Successful login user root from 192.168.10.18
  5. But first test with v6.10.3 when it's released, since there were changes to NIC detection.
  6. That it's not a common issue, and possibly NIC related. I have several dual-port Mellanox NICs myself and never saw that problem; I did see the issue with v6.10.2 where I could not set a Mellanox NIC as eth0, but of course I can't rule out your issue being some kind of bug. Well, this could explain why I've never seen it: I use all my Mellanox NICs as the last NICs in the list. I'll see if I can test that when I have some time.
  7. That's not uncommon; that issue first started with v6.5, but for some users it started happening after updating from v6.8 to v6.9, and for others after updating from v6.9 to v6.10.
  8. 6.10.3 fixed the problem where some Mellanox NICs weren't detected or could not be set as eth0. Mellanox NICs that have duplicate MAC addresses are a different issue, and likely a NIC problem; I never had that issue with my dual-port Mellanox NICs.
  9. IMHO v6.10.3-rc1 is the most stable of all the v6.10.x releases, but if you don't want to update, wait for v6.10.3 final, which should be released very soon and will likely be basically the same as 6.10.3-rc1. There's no point in using the older releases, which have known issues that are already fixed.
  10. Both passed, so they should be OK for now; I suggest swapping cables/slots with other drives to rule that out, then re-using them.
  11. Jun 14 14:41:57 Server kernel: ata7.00: disabled
      It dropped again. Did you replace both cables, power and SATA? If that doesn't help, try a different SATA port or replace the device.
  12. Jun 14 08:00:08 Server kernel: ata7.00: disabled
      The cache device dropped offline; check/replace the cables, and if it comes back post new diags after array start.
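When a device drops like this, the syslog lines just before the "disabled" message usually show the link resets that led up to it. A quick way to pull them out (a sketch; the log path and the exact kernel message patterns are assumptions based on typical Unraid/libata output):

```shell
# Live syslog path on a typical Unraid box (assumption; adjust as needed)
LOG=/var/log/syslog
# Show ATA exceptions, failed commands, link problems and device drops
grep -E 'ata[0-9]+(\.[0-9]+)?: (exception|failed command|link is slow|disabled)' "$LOG"
```

Repeated "exception"/"failed command" entries on the same ata port just before the "disabled" line typically point at cabling or power rather than the drive itself.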
  13. That should be unrelated; the issue is that with at least some models, or in some configs, you can't set a Mellanox NIC as eth0: it won't still be eth0 after rebooting.
  14. It's normal because of the balance; once it finishes you can remove the other device.
  15. Some strange errors that usually start with this:
      Jun 8 09:18:04 RAID emhttpd: error: publish, 256: Connection reset by peer (104): read
      Not sure what this is about, but you could try booting the server in safe mode to see if it's plugin related.
  16. It could be Unraid killing the SSD if there were an unusually high amount of writes, but that's not the case here.
  17. P.S. Unrelated, but I also saw macvlan call traces; those can end up crashing the server. Macvlan call traces are usually the result of having Docker containers with a custom IP address; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info.
      https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here:
      https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
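For context, that GUI toggle maps onto Docker's network drivers. The underlying difference can be illustrated with the plain Docker CLI (a sketch with made-up subnet, gateway and interface values, not Unraid's exact invocation):

```shell
# macvlan gives each container its own MAC address on the parent NIC,
# which is the code path that triggers the call traces on some kernels
docker network create -d macvlan --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 -o parent=br0 macvlan_net

# ipvlan shares the parent NIC's MAC while still giving each container
# its own IP address, avoiding the macvlan code path entirely
docker network create -d ipvlan --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 -o parent=br0 ipvlan_net

# A container keeps its custom IP either way
docker run -d --network ipvlan_net --ip 192.168.10.50 nginx
```

On Unraid, make the change through the GUI setting rather than the CLI, since the webGUI creates and manages these networks itself.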
  18. It does look like the exact same issue, which is strange, since so far it's mostly been found on HP and similar servers from the same era, and your hardware is completely different. I suggest either disabling VT-d or updating to v6.10.3-rc1, which I'm confident fixes this issue. Since you have array auto-start disabled, you can safely boot the server and update, then reboot, start the array and post new diags (same if you just prefer disabling VT-d for now).
  19. I don't remember any similar reports; get the diagnostics after the problem occurs to see if there's anything relevant logged.
  20. Those are some good findings. I don't really have any experience with AD, and there aren't many Unraid users using it, but there have been other issues before. I would suggest you create a bug report with all the great info from your last post, to at least bring it to LT's attention; maybe they can at least change the timeout, or possibly come up with a better solution.