
kayjay010101

Members

  • Content Count: 20
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About kayjay010101

  • Rank: Member
  1. THANK YOU! Copied over every bz* file from the 6.8.3 zip and booted again; the Web UI is back up on the old stick! Thanks again for your help!
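The fix described in that post can be sketched as a small shell routine. Both paths are assumptions: the 6.8.3 release zip extracted to a local folder, and the Unraid flash drive mounted somewhere writable (on a running Unraid box that is /boot).

```shell
# Sketch of the fix above: copy the stock bz* boot files from an
# extracted Unraid 6.8.3 release onto the flash drive.
restore_bz_files() {
  src="$1"   # directory holding the extracted 6.8.3 files (assumed path)
  dest="$2"  # root of the Unraid flash drive (assumed path)
  for f in "$src"/bz*; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    cp "$f" "$dest/"
  done
}

# Example: restore_bz_files ~/unraid-6.8.3 /boot
```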
  2. Thanks for the advice. In the meantime, after posting this I tested with a new stick running a fresh install of Unraid. No problems: it got a DHCP lease from the ISP router and I can access the Web UI from my laptop. I am using a quadport NIC for the pfSense VM, but there are two NICs built into the motherboard that I actually use for Unraid; the quadport card is passed through. When I connected to the NICs on the motherboard itself, it now works with DHCP. This did not work on the old stick. The old stick still recognized a link when I ran without a network.cfg file, and the activity LEDs on the sockets were blinking like they should; it just never got an IP assigned. With the network.cfg file that has been working for months, the activity LEDs stayed the same, but there was no link detected on any NIC according to ethtool. This leads me to believe the install on the old stick is broken rather than it being a hardware issue, and as you mention I am missing an ethtool.txt file.

What would be my steps for fixing this? Is there a way to rebuild the stick but save my settings and assignments? What files would I need to re-add after reinstalling to keep my stuff, in case I decide to do that? Or is there something else we should try first, like adding a new ethtool.txt file? Also, what could have happened to cause this? All I did was move the server and plug everything back in the way it was. Thanks for the help.
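On the rebuild question in that post: Unraid keeps essentially all per-server state in the flash drive's config folder (the license .key file, super.dat with the drive assignments, shares/, and the various .cfg files), so backing that folder up before reinstalling preserves settings and assignments. A minimal sketch, with both paths as assumptions:

```shell
# Back up the flash drive's config folder before rebuilding the stick.
# /boot is where Unraid mounts the flash on a running server; the
# output path is an assumption.
backup_config() {
  flash="$1"  # flash drive mount point
  out="$2"    # destination tarball
  tar -czf "$out" -C "$flash" config
}

# Example: backup_config /boot /mnt/user/backups/flash-config.tar.gz
```

Restoring is then a matter of extracting that tarball back onto a freshly written stick.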
  3. Just connected up the old ISP router (ew) and it's working for every device apart from the Unraid box. DHCP doesn't work when booting without a network.cfg file, and when using the old one it says link not detected. I'm at my wit's end here.
  4. Hey everyone, would love some help as I'm about to pull my hair out after 3 days of troubleshooting. I recently moved my server into a new room as I was adding a disk shelf and the noise would be unbearable, so it had to go into the storage room. After turning the server on again after the move, the web UI was never available. So I tried pinging the IP: nothing. All right, shut it down, plug in the GPU from my main computer and check the console. All looks fine, it still has the same IP and everything, but it's not possible to ping anything on the network. And since Unraid runs my pfSense, I have no internet connection.

I've tried ifconfig, which shows a 0.0.0.0 broadcast address (is this right, or irrelevant?). I also tried ethtool br0 (I have a bridged interface: my 10GbE card is bridged with the main motherboard NIC and I run 10GbE downstairs to my main computer), which states "link detected: no". Strange, as I've now tried every single Ethernet port on the server: each of the four on the Intel card, and both on the motherboard. They all say no link detected. I always connect them to the same switch that then goes to my laptop, and the laptop is able to communicate with the rest of the network no problem. I can also see the disk shelf from my laptop, which again goes through the same switch.

The entire network is on 10.0.0.x: the server is 10.0.0.41, laptop 10.0.0.4, and disk shelf 10.0.0.5. The gateway is 10.0.0.1, which is my pfSense VM, but pfSense never runs, of course. Another strange thing to note: when I run in safe mode with GUI, the localhost page that pops up doesn't get a connection, so not even the server itself can reach the web UI. And yes, I've tried multiple cables and a direct connection from the laptop to the server. Link still not detected. It's 100% not a physical problem as far as I can see, but I'm open to being proven wrong, of course.

The Ethernet NIC on the laptop is set to IP 10.0.0.4, mask 255.255.255.0, gateway 10.0.0.1. The WiFi card is set to 10.0.0.184, same mask, gateway 10.0.0.179. Neither has gotten any contact with the Unraid server. When trying a connection over the Ethernet card I disable WiFi, just in case there's some interference.

What I've also tried, which might or might not be relevant:
  • chkdsk on the flash drive. No problems.
  • Downloaded the 6.8.3 files and replaced bzroot, bzimage and bzroot-gui on the flash drive. Did this because bzroot was stuck at one point and I couldn't even boot the server; this was prior to any network problems.
  • Renamed network.cfg to network.old.cfg to try to get Unraid to create a new network.cfg. With this config Unraid just got a 169.254.x.x address, which makes sense as I have no DHCP server and DHCP is the default. Taking the flash out and checking it, no network.cfg file had been created, so I couldn't set a static IP.
  • Tried just creating a new network.cfg with the settings that should be correct.
  • Tried changing the gateway to 10.0.0.179, which was what my other laptop had gotten as its gateway (I guess because that IP belongs to an AP I have?), but still no luck.

I've attached my diagnostics zip. tower-diagnostics-20200731-1839.zip
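For what it's worth, a hand-written static-IP network.cfg for the addresses in that post might look like the sketch below. The field names are my recollection of the 6.8-era format and should be treated as assumptions; comparing against a network.cfg from a known-good install is safer than trusting this.

```shell
# /boot/config/network.cfg (sketch; values taken from the post above,
# field names assumed from the Unraid 6.8-era format)
USE_DHCP="no"
IPADDR="10.0.0.41"
NETMASK="255.255.255.0"
GATEWAY="10.0.0.1"
DNS_SERVER1="10.0.0.1"
BONDING="no"
BRIDGING="yes"
BRNAME="br0"
```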
  5. Maybe dedicate two to a pfSense VM (WAN in, LAN out), then have the two remaining as main and failover in case one dies. Or for separating networks: one for IoT stuff that's kept separate from the rest of the network for security reasons, for example. That being said, a quad Ethernet card off eBay is like $50 at most, and you won't have to deal with separating the NICs if they're in the same IOMMU group. The money saved by going with a dual-NIC motherboard instead could maybe cover the cost of buying a quadport NIC. I have a dual-NIC Supermicro board with a quadport Ethernet card for my pfSense VM. The only ports I actually use are one on the motherboard for Unraid, and then two on the quadport card for WAN in and LAN out for pfSense.
  6. unRAID does not require a GPU to boot; my server runs without a GPU entirely. Requiring a GPU on a server that's meant to be headless would be a dumb decision.
  7. 128GB DDR4 2400MHz ECC. Got it for free from work, apparently our datacenter has a massive DDR4 surplus and actually a DDR3 deficit. So they just have a bunch of DDR4 ECC lying around.
  8. Can I ask why you went for 4x4TB drives instead of something like 2x8TB? The 4TB drives will draw more power overall and take more space, and with a 4TB parity drive you're limited to 4TB per drive if you want parity protection.
  9. Strange, looking under my drives it does say "Enabled" under SMART. Are you running HP-branded drives? Also, did you get per-drive temperatures to work in any capacity?
  10. Did you manage to get SMART and drive temps to work in Unraid with the P2000 G3? I've managed to get everything set up myself, but the drives lack SMART capability and report 0C as their temperature. SMART works according to the drive shelf's web interface, but as always with HPE products it's severely lacking in ease of use, and I'd like to have it reported in Unraid.
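One way to narrow this down from the Unraid console: smartctl ships with Unraid, so running it directly against a shelf drive shows whether temperature data makes it through the expander at all. SAS drives usually report a "Current Drive Temperature" line rather than the SATA attribute 194; the tiny helper below is my own hypothetical sketch that just pulls the Celsius value out of that line (the device name and exact line format are assumptions).

```shell
# Query a drive behind the shelf by hand, e.g.:
#   smartctl -a /dev/sdb
# then extract the Celsius value from a SAS-style output line such as
#   "Current Drive Temperature:     34 C"
parse_temp() {
  sed -n 's/.*Current Drive Temperature:[[:space:]]*\([0-9][0-9]*\).*/\1/p'
}

# Example: smartctl -a /dev/sdb | parse_temp
```

If that prints nothing for a drive the shelf's web UI does report on, the data is being dropped somewhere between the expander and the host.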
  11. Not one that communicates over U.2, and I have no M.2 slots as of now. I actually have an ASUS Hyper M.2 card on the way, will transfer two NVMe SSDs onto that soon, and will test then. For now that drive is sitting idle. I think it's a driver issue, as this is probably not an SSD anybody else has been using on Unraid. I found some drivers on the Huawei website, and I heard Limetech could include them in the next version of Unraid if I pop them an email, but I'm not sure about Linux drivers and distros and such, so I don't know if these drivers would work with Unraid or not. Linux drivers for Huawei es3000 v3 SSDs
  12. Upgraded to 6.9-beta1; still got the issue when I ran badblocks on it.
  13. Attached are diagnostics and syslogs. It seems to me like the NVMe drive I attached today has some sort of issue. Mover seems to be taking an awfully long time as well because of this, and there are general slowdowns in my Dockers. The NVMe drive is attached with a PCIe to SFF-8482 adapter that came with it when I bought it on eBay. Not sure if this is a bad drive, a bad adapter, or some incompatibility with Unraid/Linux/whatever! Would love some help. srvunraid-syslog-20200314-2016.zip srvunraid-diagnostics-20200314-2116.zip
  14. It actually is set to 192.168.11.150, I changed it for some reason prior to taking the screenshot but I changed it back afterwards. The Unraid UI is running on 10.0.0.41 anyhow
  15. The 10Gb NIC goes right to my computer, so it wouldn't have access to the internet, right? So not really an option? These are the only settings that pertain to IP addresses in the container; they're all set to the 192.168.11.x address. There are also more port settings under "More settings...", but these specifically say NOT to change them.