sport

Everything posted by sport

  1. Well, I found a gap in my household's schedule to delete the network config file and reboot. That cleared up the ghost NIC, but it also dropped the onboard Realtek that was there as the secondary NIC. The primary Intel came through with flying colors. The system seems to see the Realtek PCI device, but it does not show up in the network settings. diagnostics-20220203-2038.zip
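     A follow-up check I plan to run against the diagnostics (only a guess at the cause, not a confirmed fix) is whether the kernel actually bound a driver to the Realtek device after the config was regenerated:
        # list the Realtek NIC and show which kernel driver (if any) is bound to it
        lspci -nnk | grep -iA3 realtek
        # and see whether the extra interface shows up at all at the link level
        ip link show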
  2. I have noticed an issue where the default route doesn't seem to be using the primary NIC / network, and I am wondering if it has anything to do with the network.cfg file, which I believe to be inaccurate or misconfigured, perhaps by me from some testing I was doing at some point and have since forgotten. To start, I only have 2 NICs in play for the Unraid system itself, and the Intel NIC is what I consider to be the primary versus the Realtek. The default route should be via the 10.71.74.x network, but it is not. The routing table and the network.cfg don't seem to agree with each other. The network.cfg (below) identifies 3 SYSNICS, but as I pointed out there are only 2. Is there an easy way to set things to rights again?
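     For completeness, these are the read-only commands I have been using to compare the live routing table against what the stored config says (the network.cfg path assumes the standard Unraid flash layout):
        # show the current default route and per-interface routes
        ip route show
        # show what the stored config thinks the addresses and gateways should be
        grep -E "GATEWAY|IPADDR|SYSNICS" /boot/config/network.cfg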
  3. OK. The OS uses the "default" route? If so, how does one change that default route? Thanks for the quick reply.
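     For anyone else who lands here, this is the non-persistent way I understand the default route can be changed from the terminal; the lasting fix is still setting the gateway under Network Settings, and the gateway and bridge values below are just the ones from my own setup:
        # point the default route at the primary network's gateway (values from my setup)
        ip route replace default via 10.71.74.1 dev br0
        # confirm the change
        ip route show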
  4. This plugin is not using eth0 (the main system interface) but rather eth1, which is not the segment I want it using. How do I change that? I do not see that option in the GUI. Unraid v6.9.1
  5. I implemented the changes suggested by bonienl, as they were the least invasive to start with, and removing the gateway 10.71.72.1 resolved the internet connectivity issue. As a second step, I also changed the subnet of the secondary interface so things were not overlapping. Thanks for the quick feedback.
  6. It seems I have had an issue similar to one described in a post earlier in this thread: in that post the user found that his network.cfg had duplicated settings, and bonienl had him edit the file to remove the NIC that was not supposed to be there. I believe I have a similar situation, where I have a NIC listed that is not present (eth1), and I can remove that entry and see what happens. But in that post bonienl also said to change SYSNICS=. What is that parameter setting? I am sure I am going to have to adjust it. I have included a copy of my network.cfg below:
     # Generated settings:
     IFNAME[0]="br0"
     DHCP_KEEPRESOLV="yes"
     DNS_SERVER1="10.71.74.1"
     DNS_SERVER2="8.8.8.8"
     DNS_SERVER3="9.9.9.9"
     DHCP6_KEEPRESOLV="no"
     BRNAME[0]="br0"
     BRNICS[0]="eth0"
     BRSTP[0]="no"
     BRFD[0]="0"
     DESCRIPTION[0]="Primary Intel Nic"
     PROTOCOL[0]="ipv4"
     USE_DHCP[0]="no"
     IPADDR[0]="10.71.74.23"
     NETMASK[0]="255.255.255.0"
     GATEWAY[0]="10.71.74.1"
     USE_DHCP6[0]="yes"
     IFNAME[1]="br1"
     BRNAME[1]="br1"
     BRNICS[1]="eth1"
     BRSTP[1]="no"
     BRFD[1]="0"
     DESCRIPTION[1]="Secondary interface Intel (Blue)"
     USE_DHCP[1]="no"
     IPADDR[1]="10.71.72.24"
     NETMASK[1]="255.255.255.0"
     IFNAME[2]="br2"
     BRNAME[2]="br2"
     BRSTP[2]="no"
     BRFD[2]="0"
     BRNICS[2]="eth2"
     DESCRIPTION[2]="OnBoard Realtek Nic"
     PROTOCOL[2]="ipv4"
     USE_DHCP[2]="no"
     IPADDR[2]="10.71.72.6"
     NETMASK[2]="255.255.255.248"
     GATEWAY[2]="10.71.72.1"
     SYSNICS="3"
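     For reference, here is a rough sketch of what I think the edited file would end up looking like after the stale br1/eth1 block is pulled. I am not certain whether the remaining bridge entry has to be renumbered to [1] or whether Unraid tolerates a gap in the indices, so treat this as a guess rather than a verified fix:
        # sketch only (my guess, not a verified fix): the [0]/br0 block stays exactly
        # as it is above; every line containing [1] (the stale br1/eth1 entry) is
        # deleted; the old [2]/br2 block is renumbered to [1] so the indices stay
        # consecutive; SYSNICS is reduced to match the two remaining interfaces.
        IFNAME[1]="br2"
        BRNAME[1]="br2"
        BRNICS[1]="eth2"
        BRSTP[1]="no"
        BRFD[1]="0"
        DESCRIPTION[1]="OnBoard Realtek Nic"
        PROTOCOL[1]="ipv4"
        USE_DHCP[1]="no"
        IPADDR[1]="10.71.72.6"
        NETMASK[1]="255.255.255.248"
        GATEWAY[1]="10.71.72.1"
        SYSNICS="2"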
  7. I have two NICs enabled for Unraid, one onboard and one add-on Intel NIC. I also have another Intel NIC with two interfaces, which I have excluded from Unraid and which is passed through to a pfSense VM. The two interfaces for Unraid are currently eth0 and eth2, per the network settings page. But for some reason an old interface, eth1 with br1, is still in the network.cfg file. Since this is not a valid interface, can I safely remove the associated lines from the cfg file? (see attached). Also, how do I set the "default route"? I assume it should point to the gateway on eth0, which is the primary interface for Unraid, but for some reason it was pointed at the gateway on eth2. I deleted the default route and was then able to update Dockers etc. again. network.cfg
  8. Using the terminal, I ran sudo ip link and found br1 listed. I then ran sudo ip link del name br1, which seems to have removed the route from the routing table. The question now is: have I followed the right path, or have I just created trouble for myself?
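     For my own notes: since I gather the bridges get rebuilt from the stored config at boot, deleting br1 with ip link only fixes the running state. These are the checks I would use to confirm whether the stale definition is still on the flash (paths assume the standard Unraid layout):
        # is br1 still defined in the persistent config on the flash drive?
        grep -n "br1\|eth1" /boot/config/network.cfg
        # what bridges and routes does the running system have now?
        ip link show type bridge
        ip route show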
  9. I have two NICs installed for use with Unraid (6.6.6). The NICs are designated eth0 (primary Unraid access) and eth2 (optional). The bridge member for br0 is eth0, and the bridge member for br2 is eth2. In the routing table (attached) there is an entry for br1, which I believe is conflicting with the entry for br2, but I am not able to delete the br1 entry. I have a small management subnet for my switches and APs, and I would like to use the UniFi Docker to manage the APs, but I am not able to get it to communicate over br2 with a static IP in the supported range. I have even tried assigning br2 to one of the virtual PCs to see if I could ping the other devices, but was not successful. It is possible that I created the custom br1 some time ago for use with some virtual machines, but I cannot remember just how or where I did that. I might have edited a cfg file via the terminal, but I am not sure. I would greatly appreciate any help in straightening things out.
  10. ... regarding a saved backup job, obviously its first run would be a full backup, but is each scheduled run after that an incremental backup, or does it run a full backup each time?
  11. I believe I solved the problem. I booted with the PCIe ACS Override setting enabled and the issue went away. Is there another way I could have handled this?
  12. I am passing an Intel NIC, [8086:10d3] 06:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection, through to a VM, but when I try to start the VM I get the following error. I'm not sure what I'm doing wrong; any assistance would be appreciated:
     internal error: process exited while connecting to monitor:
     2017-10-17T21:31:14.632470Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: error, group 13 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
     2017-10-17T21:31:14.632496Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: failed to get group 13
     2017-10-17T21:31:14.632515Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: Device initialization failed
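     In case it is relevant to anyone reading along, my understanding of the "group 13 is not viable" message is that some other device shares IOMMU group 13 with the NIC and is still bound to its normal host driver. A quick way to see what else is in that group (read-only commands, nothing is changed):
        # list every PCI address that shares IOMMU group 13 with the NIC
        ls /sys/kernel/iommu_groups/13/devices/
        # show each of those devices and which kernel driver currently owns it
        for dev in /sys/kernel/iommu_groups/13/devices/*; do
            lspci -nnks "$(basename "$dev")"
        done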
  13. Appreciate all the good feedback, it was very informative!
  14. I went back and looked at the use case on slide 5 and saw how things were implemented for the physical segregation; that would work for my current lab setup. Looking at slide 6 (logical segregation) and translating that into steps to take in the GUI, is that what is happening when you add eth1, for instance, to the bridge on eth0? Can a different network be assigned to eth1/br1, or does it have to be on the same network as eth0?
  15. I executed the instructions below from Limetech's post "unRAID Server Release 6.2 Stable Release Available", substituting in the vendor / product ID from one of my spare NICs that I have not included in any bridging or bonding (I left it in a down state), but I am not getting the NIC showing up as an available PCI device. Has anyone tried this process? Not sure what I am missing; any assistance would be greatly appreciated.
     1. Login to your server using the unRAID webGui.
     2. Navigate to the Tools -> System Devices page.
     3. Locate the PCI device you wish to stub and then copy the vendor and product ID specified in brackets near the end of its row. Example (the bolded part highlights the vendor/product ID): 01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)
     4. Navigate to the Main tab and click on your Flash device.
     5. Under the Syslinux Configuration section, locate the line that says "menu default". Beneath that line, you will see the following: append initrd=/bzroot
     6. Change the line, adding the bolded part as shown in the example below: append vfio-pci.ids=8086:1528 initrd=/bzroot
     7. Click Apply and then reboot your system.
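     For reference, this is roughly what I expect the edited Syslinux Configuration section to contain, using the X540-AT2 ID from the example above (the actual vendor:product ID of the NIC being stubbed goes in its place):
        label unRAID OS
          menu default
          kernel /bzimage
          append vfio-pci.ids=8086:1528 initrd=/bzroot
     After rebooting, lspci -nnk should list the stubbed device with "Kernel driver in use: vfio-pci" if the change took effect; if it still shows the normal network driver, the append line probably is not being applied.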
  16. I am using the Back-UPS ES 550 with no issues. I have both the Unraid server and my desktop and a small switch plugged into the battery side. Since the UPS can only be plugged into one machine, is there a way for me to have the PC shut down as well, in the event that utility power goes out and things go to battery?
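     In case it helps anyone with the same question, my understanding (unverified) is that Unraid's UPS support is based on apcupsd, which can act as a network information server, so a desktop running its own apcupsd could watch the server's UPS status over the LAN and shut itself down. A rough sketch of the client-side apcupsd.conf, assuming the Unraid box is reachable at 10.71.74.23 and exposes the status on the default port 3551:
        # /etc/apcupsd/apcupsd.conf on the desktop (sketch; paths and defaults vary by distro)
        UPSCABLE ether
        UPSTYPE net
        DEVICE 10.71.74.23:3551   # Unraid server acting as the UPS network information server
        # shut this machine down when battery charge or estimated runtime drops this low
        BATTERYLEVEL 10
        MINUTES 5
     On the Unraid side the UPS status would need to be exposed over the network (apcupsd's NETSERVER/NISIP settings) for this to work; I have not tried it myself.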
  17. Yes, I just installed the same UPS today and I too do not get a "UPS Load" or "Nominal Power". I do, however, show the UPS Load %.
  18. - When setting the path in the Plex Docker for the media user share, does it need read and write?
     - If I were to set the share that houses the "Media" to Secure, would I then have to provide the Docker with the associated credentials? Or does it have access via unRaid at a system level?
     I have just recently taken the plunge into unRaid after trying FreeNAS, and I must say it is a really nice product with a community that is very supportive. Looking forward to really getting into this product. Cheers!
  19. Does anyone know if there is a way to make the Intel NIC primary, but keep the onboard NIC enabled so that it could be used directly with a VM?
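     For anyone searching later, what I have gathered (this may not be exact for every Unraid version) is that the ethX assignment is driven by MAC address, so making the Intel card eth0 makes it the primary interface while the onboard NIC stays available for VM use. A sketch of the kind of udev-style mapping involved; the file name is my assumption and the MAC addresses are placeholders:
        # sketch of /boot/config/network-rules.cfg (file name assumed; MACs are made up)
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"   # Intel, primary
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"   # onboard, left for the VM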
  20. I was all set to commit to FreeNAS, and then I saw a video from Linus Tech Tips where they were showcasing unRAID; boy am I glad I did. I have just finished playing with a test install and love the simplicity of the unRAID product, even with its robustness. I was running an ESXi host so that I could run a lab environment as well as the NAS; after seeing unRAID I realized it could give me the NAS solution I was looking for and still allow for my virtual lab, and the user forum is so much more friendly here compared to the FreeNAS forums. I have added a dual Intel 1Gb NIC to my system which I want to use as the primary on the NAS side, but I am not sure how to specify it over the onboard Broadcom NIC. The onboard NIC I would want to pass through to VMs; the Intel outperforms the Broadcom. Secondly, I intend to use a cache with two 120G SSDs. What happens if the space runs out: does it start moving items over to the array instead of waiting until the scheduled time? It would not be an issue during normal operations, but things will be exaggerated during the initial loading/moving of data to the new NAS. I reviewed the manual, but these two questions did not seem to be covered in the material. Looking forward to really stretching the legs on this product. Derek
     My current rig is:
     MOBO: ASRock 970 Extreme R2
     CPU: AMD FX-6300
     MEM: 16GB DDR3 1600
     4 x 1TB Seagate 7200 (Array)
     2 x 240GB SSD (Cache)
     1 x onboard 1Gb NIC (Broadcom)
     1 x PCIe 1Gb NIC (Broadcom)
     1 x PCIe dual 1Gb NIC (Intel)