
bamhm182

Members
  • Content Count: 55
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About bamhm182
  • Rank: Advanced Member


  1. I didn't want to bump a thread that has been dormant for a while, but it turns out I didn't have to. Haha. With more information about the 3080 now available, it sounds like SR-IOV support on the 30-series is well within the realm of possibility. I don't know whether it works unofficially or not, but I would LOVE to see some official support for this feature. +1
  2. Sorry for the late response. I thought things were slowing down and I'd get a second to really dig into this problem. Boy, was I wrong... The power supply is a Corsair CS550M. The backplanes I have provide power over Molex, so all my drives (aside from my M.2) are powered from its Molex connectors. It appears to be a single-rail PSU. Just to see what my max-ish power consumption was, I started up a few hundred `yes` streams and made sure all of my disks were spun up. My UPS said I was pulling around 200W; at roughly idle, I'm around 110W. As far as disks go, I have the following:
     • 7x 3.5" spinning disk (Molex power)
     • 1x 2.5" spinning disk (Molex power)
     • 2x NVMe (PCIe power)
     • 1x SATA M.2 (SATA power)
     • 3x 2.5" SSD (Molex power)
     I haven't done a memtest yet, and the server is usually in use; I'll try to remember to start one before bed tonight. I've enabled logging to my USB, but I can never find any sort of crash information there either. I'll do it again and post some information from around the time of the crash. It just kind of instantly craps out, then works when I reboot it again, which makes me think it's something like the PSU going out rather than something to do with software. That said, I did run into an issue recently where it just REFUSED to boot. It was giving me exit_boot() and efi_main() failures after GRUB, and I had to try about 10 times before it would finally boot. I don't know whether that is related to this, though.
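     For reference, the load test was along these lines (a rough sketch; the worker count and the /dev/sd? glob are approximations, not the exact commands I ran):

        #!/bin/bash
        # Saturate the CPU with background `yes` streams, wake every disk,
        # then read the draw off the UPS while the load is running.
        WORKERS=200    # approximate; "a few hundred" streams
        for _ in $(seq "$WORKERS"); do
            yes > /dev/null &
        done

        # Force a small direct read from each disk so spun-down drives spin up.
        # Adjust the glob for your device layout.
        for disk in /dev/sd?; do
            dd if="$disk" of=/dev/null bs=1M count=4 iflag=direct
        done

        read -rp "Check the UPS wattage, then press Enter to stop the load... "
        kill $(jobs -p)    # tear down the background `yes` streams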
  3. Hello, I have been using Unraid for a long time on an R710 and a custom-built server. I recently sold the R710 and moved everything over to the custom-built server, and for some reason I have had random crashes ever since. The only thing I can think of is that the PSU isn't powerful enough. I have looked through the logs several times and I cannot seem to pinpoint the issue from there, but I'm hoping someone else can before I go dump a bunch of money into a new PSU. It isn't ever doing anything insane when it crashes; I just have a couple of VMs and Docker containers that are always running in the background, and the only one that ever really uses a ton of juice is Plex. Thank you in advance to anyone willing to help me look into this! tardis-diagnostics-20200901-0036.zip
  4. I would recommend against it just because of how loud it's going to be. It's also likely to only accept 2TB drives at most. You MIGHT be able to flash the HBA into IT mode to bypass this, but I don't know if it would be worth it.
  5. Would you be open to it being made with Python? It seems to me that Python would be a good choice, since it is easily extensible, cross-platform, and easier to maintain. Last I looked into it, you could easily build executables for Linux, Windows, and OS X; the only stipulation is that the OS X executable needs to be built on OS X.
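     For example, with PyInstaller (just one common option; I'm not assuming the project would have to use it):

        # PyInstaller builds for the OS it runs on, so this step is repeated
        # once per target platform (hence the OS X-on-OS X stipulation above).
        pip install pyinstaller
        pyinstaller --onefile app.py    # app.py is a placeholder entry point
        # The self-contained executable ends up in ./dist/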
  6. No worries. dlandon is the real hero, I'm just a guy that needed something done and figured out how to get it done. I'm sure he'll fix it very soon.
  7. You can run these commands:

        cd $(mktemp -d)
        wget https://raw.githubusercontent.com/dlandon/unassigned.devices/master/unassigned.devices.plg
        sed -i s/fc6f30a2f824f5b56e367ed73edea275/8796a7b713b0d121385f1452eb7880b1/g unassigned.devices.plg

     Then go to the Plugins tab > Install Plugin, paste the file location in there (for example: /tmp/tmp.tivBjv7aFZ/unassigned.devices.plg), and click Install.
  8. I'm also having the same issue. If you want to fix it yourself right now: download the .plg to somewhere on your server, change the 12th line so that the MD5 sum is 8796a7b713b0d121385f1452eb7880b1, and then tell unRAID to install from that downloaded .plg file. Otherwise, I expect it will be fixed shortly.
  9. Alright, so I managed to get back to where I had it before 6.7.0-rc1/2, and I think I figured out that it was crashing there because of how I had the IDs in my syslinux file. I currently have this:

        append pcie_acs_override=downstream pci-stub.ids=<others>,12ab:0380 vfio-pci.ids=12ab:0380 disable_idle_d3=1 initrd=/bzroot,/bzroot-gui

     Everything works fine until I attach the card to a VM and try to boot it. It says "unknown pci header type 127" and hard-crashes the computer after a few seconds. I tailed syslog while this happened, as requested in another post about this error, and I'm thinking maybe the disable_idle_d3 part is what's getting me, but the guy who made the kernel patch said that was required. Either way, I'll try it without that part tomorrow.
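     For context, that append line sits inside the boot stanza in /boot/syslinux/syslinux.cfg, roughly like this (label text trimmed; <others> stays a placeholder for the other stubbed IDs):

        label unRAID...
          kernel /bzimage
          append pcie_acs_override=downstream pci-stub.ids=<others>,12ab:0380 vfio-pci.ids=12ab:0380 disable_idle_d3=1 initrd=/bzroot,/bzroot-gui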
  10. So I'm running into a new issue now (after getting 6.7.0 to cooperate): for some reason, if I boot while 12ab:0380 is in the IDs, I can click all over the GUI, but as soon as I click on the VMs tab, unRAID hard-crashes. I tried reverting to 6.6.6 and running the patched kernel that was working fine before I tried upgrading, and now that does the exact same thing. All I should need to do to downgrade is copy the files in the previous folder back to /boot, right?
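     Something like this, assuming the stock layout where the updater saves the prior release's files to /boot/previous:

        # Copy the saved boot files back over the active ones, then reboot.
        cp /boot/previous/bz* /boot/
        reboot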
  11. I went to give it a shot with your suggestions and noticed that rc2 was available. That worked without any problem. Then, as I came here to say so, the problem came back. My computer was already set up as UEFI, and I couldn't find anything in my BIOS about Secure Boot. I switched the drive to a USB 2 slot and it booted fine. I put it back in a USB 3 slot and it worked fine. I put my vfio-pci.cfg file back and it died. I removed that file: still dead. Out of all the modes, it seems to boot into safe mode without GUI pretty consistently; either GUI mode seems to always die, and normal CLI mode is hit or miss. EDIT: I may have just found the variable. It seems that when I have everything plugged in (mouse/keyboard to the PCI USB card, DVI/HDMI to the GTX 970, Ethernet to the 4-port PCI NIC, Ethernet to onboard, HDMI to onboard), it dies. With HDMI/mouse/keyboard on onboard, it boots just fine. EDIT 2: I'm wrong. I unplugged everything, then slowly plugged things back in one by one and rebooted each time, expecting it to die eventually. I now have everything plugged in and no idea why it is doing this seemingly randomly.
  12. I have 32 GB of RAM and a 32 GB flash drive in an adapter connected to the USB 3 header on my motherboard. I have tried booting into each of the modes except memtest.
  13. Is there any extra information I can provide to help figure out why this is happening, since I can't exactly pull diagnostics? I tried reverting to 6.6.6 and doing the upgrade again, with the same results.
  14. My install has been booting just fine, but after I upgraded from 6.6.6 to 6.7.0-rc1, I get the following when I try to boot from any of my options:

        Loading /bzimage... ok
        Loading /bzroot... ok
        Loading /bzroot-gui... ok
        exit_boot() failed!
        efi_main() failed!

     I have moved my vfio-pci arguments over to the new config file, but other than that, I haven't touched anything since the last reboot. I'm on UEFI. Any ideas? I'm going to remove that config file and see if that helps any. EDIT: It didn't change anything. Also, my default syslinux.cfg entry is as follows:

        label unRAID...
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui

     I could probably roll back to 6.6.6 and be fine, but while I wait I'm going to take the time to set up my bare-metal hard drive. Let me know if there is anything you would like me to try. EDIT 2: I rolled back to 6.6.6 and, as predicted, everything was fine again. I then reinstalled 6.7.0-rc1 and it did the same thing. I made sure not to make any changes to syslinux.cfg or vfio-pci.cfg this time.