quattro

Members · 21 posts


  1. Has anyone else ever gotten this error? Clean install, and I'm getting this error when doing an initial build: "Finished - added 0 files. Duration: 00:00:01". I had the plugin installed before and it successfully did an initial build and export, but after upgrading the plugin I began getting this error. I removed the plugin, deleted the plugin folder under /boot/config/plugins, and rebooted, but I'm still getting the error. Logs are empty.
  2. Strange, as I've always unmounted before removing it, but thanks for the quick reply. I was just hoping to avoid a reboot.
  3. I inserted a usb drive I've been using with unassigned devices for months and got a "reboot" button instead of a mount button. What caused this and is there a way to avoid rebooting?
  4. That's a beautiful explanation of what's going on. I too got the "being reconstructed" message, which is very confusing. Is reconstruction different from rebuilding? I found instructions for adding back a drive that was disabled but appears to be good from SMART data (I suspect delayed spin-up of a drive when a scheduled parity check started, or a cable/power issue). The instructions I found say:
     1) Stop the array
     2) Remove the disabled drive from the array
     3) Start the array
     4) Stop the array
     5) Assign the disabled disk to the empty slot
     6) Start the array in maintenance mode
     7) Click Sync
     Could I just do steps 1 through 6 and not trigger the rebuild with the Sync button? Why is the word "reconstruct" being used in this message instead of "rebuild"? A quick Google of "unraid reconstruct" seems to only pull up info on reconstruct write. When it comes to repairs, rebuild always seems to be the preferred terminology. What exactly is happening when this message is triggered? Thanks!
  5. Thanks for continuing to help with this issue. I'm sure you can imagine it's been frustrating, so my apologies if I at all seemed unappreciative. I always understood it was a different driver, but thought perhaps the workaround might not be driver-specific, since it's just a blank file (even though it's named after the driver's config file). I guess I kinda figured the blank tg3.conf was just turning off IOMMU passthrough or something similar. Thanks for clarifying exactly how the fix behaves; even though it's been kinda obvious that's what it was doing, it was the only thing I tried that worked. Imagine finding a thread with some fixes posted, and one of them suddenly fixes an issue that has kept your system down for weeks. Prior to that, I tried EVERYTHING under the sun to troubleshoot it. Like I said, those changes clearly are (or seem to be) affecting my system, and it's too regular to give up and conclude it's just a coincidence that my machine boots when I make those changes. I will troubleshoot some more this weekend and get some diagnostics. Thanks again for your assistance.
  6. Except that it's not a coincidence. I can reproduce it every time. If I enable virtualization, I get the "eth0 doesn't exist" error. If I disable virtualization, it doesn't work until I create the blank tg3.conf. I've done it 6 times just now. Multiple reboots after disabling VT-d won't fix the "eth0 does not exist" error. The second I touch the tg3.conf, it works on the next boot. I'm fully aware of JorgeB's efforts; you can refer to my post thanking him for taking the time to look at my issue. My forum account is new, but this is not my first day at the rodeo. I'm also aware of the driver used. However, the blank tg3.conf was a workaround specifically for the "eth0 doesn't exist" issue caused by Unraid disabling the NIC on the systems that were documented. I'm aware my config hasn't been tested or documented, but that doesn't mean it's impossible that it's affected by the VT-d issue. I'm primarily posting in detail to help others who might come across the same problem, as it completely disables Unraid. The Dell 7010/9010 platform is insanely popular for projects like Unraid that are designed to utilize old hardware. The posts that others made helped me find a fix for my system, which was completely disabled due to this bug. Unraid is disabling my NIC, and I'm certainly curious as to why it's happening. Remember, Win10 boots fine all day long with VT-d enabled.
  7. So, if I go into the bios and enable Vt-d, it won't disable my NIC? What if you're wrong?
  8. Sure seems to have been the culprit with my Intel NIC. NOTHING else would fix it after weeks of trying.
  9. So I got the guts to reboot today, no problem. The blank tg3.conf seems to have fixed the issue that has been plaguing me for weeks. I'm feeling gutsy, so will attempt an update to 6.10.3 😮
  10. Cool, I appreciate your time. It never really felt like a driver issue, but who knows. I finally got the guts to reboot and still have network. So I'm still standing by my assessment that it's not hardware, but I've been troubleshooting for far too many years to be foolish enough to think that a hardware issue in this scenario is impossible. I'll have time for more testing in a few days, and I'll update here in case it helps anyone else.
  11. I was referring more to NICs being disabled than the corruption issue that triggered the code change. Another question: if I have a NIC problem (I'm assuming you meant hardware), are you thinking there is something in 6.10 that might fix it? If I don't have a hardware issue with my NIC, what do you think happened?
  12. I thoroughly ruled out all physical NIC issues; it's perfectly fine in Windows. It's been working for 24 hours so far after the touch command. I'm running 6.9.2, and I'm not upgrading to 6.10 until the NIC issue is clearly documented. I booted about 15 times and got "eth0 does not exist" every time. I then logged into Unraid, ran the touch command, and booted with a successful DHCP IP assignment.
  13. Thank you very much for taking the time to look at my diagnostics and logs. I greatly appreciate it, given how frustrating the last month or so has been with Unraid. Any idea why it would suddenly fail to initialize after working for months? And why would it suddenly work again after disabling virtualization in the BIOS? And then fail again after a simple reboot? I had already created the blank tg3.conf file on the USB using Windows. But I found a thread mentioning creating the file at the Unraid console, and suddenly it boots again. All I did was enter this command at the Unraid console after logging in: touch /boot/config/modprobe.d/tg3.conf To be honest, I'm afraid to do anything now that it's running, with no explanation of why it has been down most of the time for nearly a month. Why would 6.9.2 be doing this? Isn't the tg3 thread all about issues in 6.10? Is the Intel 82579LM one of the NICs with corruption issues? Around the time the network issue began, I was getting weird parity errors.
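Several of these posts lean on the same one-line console workaround: `touch /boot/config/modprobe.d/tg3.conf`. Here is a minimal sketch of what that command does, demonstrated against a temporary directory so it's safe to run anywhere. The real Unraid path, and the idea that an empty file in modprobe.d supplies no blacklist directive for the tg3 driver, are taken from the thread, not verified here:

```shell
# Sketch of the workaround described in these posts. On a real Unraid box the
# file lives at /boot/config/modprobe.d/tg3.conf; this demo uses a temp
# directory as a stand-in so the commands are harmless on any machine.
conf_dir="$(mktemp -d)/modprobe.d"   # stand-in for /boot/config/modprobe.d
mkdir -p "$conf_dir"
touch "$conf_dir/tg3.conf"           # an empty file is all the workaround needs
# An empty tg3.conf contributes no "blacklist tg3" (or any other) directive,
# so nothing in this file stops modprobe from loading the tg3 driver at boot.
ls -l "$conf_dir/tg3.conf"
```

Note the file's contents never matter for this workaround; only its existence does, which is why `touch` from the Unraid console is sufficient.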