JorgeB

Moderators

Everything posted by JorgeB

  1. It is, or I wouldn't have mentioned it: it's set at 2133MT/s but should be at 1866MT/s, see the link above.
  2. One thing I forgot to ask: did you by any chance download the diags, or even just the syslog, from when you were initially running v6.10.2 with vt-d on?
  3. It's a known issue, I can also reproduce it, it's being looked at.
  4. Thanks for the answers. If you used the GUI to go back to the previous release, you were indeed on v6.9.2. As for the current issue: both the docker image and appdata are on disk1, though some appdata may also have been on cache. Delete and re-create the docker image and hopefully your dockers will be restored. You can still try the cache recovery options linked above; if they don't work you'll need to format the pool. Also note that as long as you leave vt-d disabled it should be safe to upgrade back to v6.10 afterwards if you want.
  5. There are some recovery options here, but based on the error I'm not very optimistic, the cache filesystem might be lost. Do you mind confirming the sequence of events? My suspicion so far has been that the NIC is connected to this issue, since all 5 or 6 affected models found so far use one. I also believe that just having the NIC is not sufficient to cause issues, since for example Dell servers with the same NIC appear to be unaffected, but by your description that might not be the case, so please confirm if this is correct:
     - You updated from v6.9.2 directly to v6.10.2 and never used any earlier v6.10 release
     - After updating to v6.10.2 you didn't have GUI access, because the NIC was blacklisted due to vt-d being enabled, but the array would still have started since autostart is enabled
     - At this point you did a clean shutdown/reboot and disabled vt-d, i.e., you didn't unblacklist the NIC and never used v6.10.2 with both vt-d and the NIC enabled; this is the most important question, including whether the shutdown was clean or not
     - After disabling vt-d you noticed the issues and downgraded back to v6.9.2
     Is all of the above correct?
  6. See below for an explanation of both available writing modes, turbo write is faster at the expense of spinning up all drives for writes.
  7. It's not mdX, that's for array devices; use one of the pool identifiers, either one will do, e.g. '/dev/nvme0n1p1'. I would also recommend first upgrading to v6.10, since then you can use the rescue=all mount option, then try the 1st option in the FAQ.
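As a rough illustration of that FAQ step, the recovery mount can be sketched like this; the device path is the one from the post, while the /temp mount point and the copy destination are assumptions, and rescue=all needs the newer kernel that ships with v6.10:

```shell
# Sketch only: read-only recovery mount of a damaged btrfs pool device.
# /dev/nvme0n1p1 is the pool identifier from the post; /temp is an
# assumed empty mount point. Run on the console with the pool unmounted.
mkdir -p /temp
mount -o ro,rescue=all /dev/nvme0n1p1 /temp
# If the mount succeeds, copy the data off before re-formatting, e.g.:
# cp -a /temp/. /mnt/disk1/restore/
```

If the mount fails, the other FAQ recovery options (btrfs restore, etc.) are the next step.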
  8. First thing you should do is set the RAM to the max officially supported speed for your config, which is 1866MT/s, not 2400MT/s. Ryzen with overclocked RAM is known to cause data corruption in some cases, and btrfs was detecting some. After that it's still a good idea to run memtest, then there are some recovery options here.
  9. That doesn't make much sense, either the time is wrong or there was another device using that IP.
  10. Please use the existing plugin support thread:
  11. Disk is not giving a SMART report; check/replace the cables and post new diags after array start.
  12. FYI device order is not important for pools. Can't really help with that, best to make a new post in the KVM section.
  13. Do you have a single or multiple pools?
  14. So far no Dell server has been found to be affected by the corruption issue, so it should be pretty safe to use it even with vt-d enabled if needed, but if you don't need it leave it disabled.
  15. Copy the pools folder from the old config, or just re-assign them.
  16. Unraid driver is still crashing; it's unclear to me from your first post whether it now also crashes with v6.8 or v6.9. If you see this same issue with different Unraid releases, I would really suspect a hardware problem.
  17. Kaput!

      Add to the append line in syslinux.cfg, e.g.: append initrd=/bzroot modprobe.blacklist=i915
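For context, a sketch of what the relevant section of syslinux.cfg might look like with the blacklist added; the surrounding label/menu/kernel lines are illustrative defaults, only the append line is the change being suggested:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot modprobe.blacklist=i915
```

The file is edited via Main -> Flash -> Syslinux Configuration, or directly on the flash drive, and the change takes effect on the next boot.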
  18. First thing I would recommend is to stop overclocking the RAM; see here for the max speed depending on config, 1866MT/s in your case. Ryzen with overclocked RAM is known to corrupt data in some cases. As for the pool, the best bet is to back up and re-format the device, then restore the data.
  19. SMART test passed but it's showing some issues, swap both power/SATA cables or slot with a different disk and post new diags after array start.
  20. Those are not errors, it's just the ssh log: 192.168.50.11 is connecting to and disconnecting from your server using ssh.
  21. Missed your edit, you're showing the same log entry as before, new docker image should no longer show that, or just reboot.
  22. If the new disk is the same size rebuild from parity, then run a couple of parity checks.
  23. 2022-06-01T19:11:33-05:00 shirt kernel: macvlan_broadcast+0x116/0x144 [macvlan]
      2022-06-01T19:11:33-05:00 shirt kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
      Macvlan call traces are usually the result of having dockers with a custom IP address. Upgrading to v6.10 and switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
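To check whether a server is hitting these traces, grepping the syslog for macvlan is usually enough. A minimal sketch, using a sample file as a stand-in since the live log on Unraid is at /var/log/syslog:

```shell
# Write a sample call-trace line to a temp file (stand-in for /var/log/syslog).
printf '%s\n' 'kernel: macvlan_broadcast+0x116/0x144 [macvlan]' > /tmp/syslog.sample
# Count macvlan call-trace lines; any hits suggest switching to ipvlan.
grep -c 'macvlan' /tmp/syslog.sample
# → 1
```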