sport

Members
  • Posts: 23
  • Joined
  • Last visited

Converted
  • Gender: Male
  • Location: New Brunswick, Canada

sport's Achievements
  • Rank: Noob (1/14)
  • Reputation: 1

  1. Well, I found a quiet window in the household to delete the network config file and reboot. That cleared up the ghost NIC, but it also dropped the onboard Realtek that was there as the secondary NIC. The primary Intel came through with flying colors. The system seems to see the Realtek PCI device, but it does not show up in the network settings (see the NIC-detection sketch after this list). diagnostics-20220203-2038.zip
  2. I have noticed an issue where the default route doesn't seem to be using the primary NIC / network, and I am wondering if it has anything to do with the network.cfg file, which I believe to be inaccurate or misconfigured; perhaps by me, from some testing I was doing at some point and have just forgotten. To start, I only have two NICs in play for the Unraid system itself, and the Intel NIC is what I consider to be the primary, versus the Realtek. The default route should be via the 10.71.74 network, but is not. The routing table and the network.cfg don't seem to agree with each other. The network.cfg (below) seems to identify SYSNICS=3, but as I pointed out there are only two (see the default-route sketch after this list). Is there an easy way to set things right again?
  3. OK. The OS uses the "default" route? If so, how does one change that default route (see the default-route sketch after this list)? Thanks for the quick reply.
  4. This plugin is not using eth0 (the main system interface) but rather eth1, which is not the segment I want it using. How do I change that? I do not see that option in the GUI. Unraid v6.9.1.
  5. I implemented the changes suggested by bonienl, as they were the least invasive to start with, and removing the gateway 10.71.72.1 resolved the internet connectivity issue. As a second step I also changed the subnet of the secondary interface so the two were not overlapping. Thanks for the quick feedback.
  6. Seems I have had a similar issue as in a post from : in this thread. He found that his network.cfg had a duplicated entry, and bonienl had him edit the file to remove the NIC that was not supposed to be there. I believe I have a similar situation, where I have a NIC listed that is not present (eth1), and I can remove that entry and see what happens. But in that post bonienl also said to change the SYSNICS= value. What does that parameter set? I am sure I am going to have to adjust it (see the network.cfg sketch after this list). I have included a copy of my network.cfg below:
     # Generated settings:
     IFNAME[0]="br0"
     DHCP_KEEPRESOLV="yes"
     DNS_SERVER1="10.71.74.1"
     DNS_SERVER2="8.8.8.8"
     DNS_SERVER3="9.9.9.9"
     DHCP6_KEEPRESOLV="no"
     BRNAME[0]="br0"
     BRNICS[0]="eth0"
     BRSTP[0]="no"
     BRFD[0]="0"
     DESCRIPTION[0]="Primary Intel Nic "
     PROTOCOL[0]="ipv4"
     USE_DHCP[0]="no"
     IPADDR[0]="10.71.74.23"
     NETMASK[0]="255.255.255.0"
     GATEWAY[0]="10.71.74.1"
     USE_DHCP6[0]="yes"
     IFNAME[1]="br1"
     BRNAME[1]="br1"
     BRNICS[1]="eth1"
     BRSTP[1]="no"
     BRFD[1]="0"
     DESCRIPTION[1]="Secondary interface Intel (Blue)"
     USE_DHCP[1]="no"
     IPADDR[1]="10.71.72.24"
     NETMASK[1]="255.255.255.0"
     IFNAME[2]="br2"
     BRNAME[2]="br2"
     BRSTP[2]="no"
     BRFD[2]="0"
     BRNICS[2]="eth2"
     DESCRIPTION[2]="OnBoard Realtek Nic"
     PROTOCOL[2]="ipv4"
     USE_DHCP[2]="no"
     IPADDR[2]="10.71.72.6"
     NETMASK[2]="255.255.255.248"
     GATEWAY[2]="10.71.72.1"
     SYSNICS="3"
  7. I have two NICs enabled for Unraid, one onboard and one add-on Intel NIC. I also have another Intel NIC with two interfaces, which I have excluded from Unraid and passed through to a pfSense VM. The two interfaces for Unraid are currently eth0 and eth2, as per the network settings page. But for some reason an old interface, "eth1" with br1, is still in the network.cfg file. Since this is not a valid interface, can I safely remove the associated lines from the cfg file? (see attached). Also, how do I set the "default route"? I assume it should point to the gateway on eth0, which is the primary Unraid interface, but for some reason it was pointed at the gateway on eth2. I deleted the default route and was then able to update Dockers etc. again (the network.cfg and default-route sketches after this list apply here as well). network.cfg
  8. Using the terminal I ran the command sudo ip link and found br1 listed. I then ran sudo ip link del name br1, which seems to have removed the route from the routing table. The question now is: have I followed the right path, or have I just created trouble for myself? (see the verification sketch after this list)
  9. I have two NICs installed for use with Unraid (6.6.6). The NICs are designated eth0 (primary Unraid access) and eth2 (optional); the bridge member for br0 is eth0 and the bridge member for br2 is eth2. In the routing table (attached) there is an entry for br1, which I believe is conflicting with the entry for br2, but I am not able to delete the br1 entry. I have a small management subnet for my switches and APs, and I would like to use the UniFi docker to manage the APs, but I am not able to get it to communicate over br2 with a static IP in the supported range. I have even tried assigning br2 to one of the virtual PCs to see if I could ping the other devices, but was not successful. It is possible that I created the custom br1 some time ago for use with some virtual machines, but I cannot remember just how or where I did that. I might have edited a cfg file via the terminal, but I'm not sure. I would greatly appreciate any help in straightening things out.
  10. ... regarding a saved backup job: obviously its first run would be a full backup, but is each scheduled run after that an incremental backup, or is it running a full backup each time?
  11. I believe I solved the problem. I booted with the PCIe ACS Override setting enabled and the issue went away. Is there another way I could have handled this? (see the vfio-pci sketch after this list)
  12. I am passing an Intel NIC [8086:10d3] (06:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection) through to a VM, but when I try to start the VM I get the following error. I'm not sure what I'm doing wrong; any assistance would be appreciated (see the IOMMU-group sketch after this list):
      internal error: process exited while connecting to monitor:
      2017-10-17T21:31:14.632470Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: error, group 13 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
      2017-10-17T21:31:14.632496Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: failed to get group 13
      2017-10-17T21:31:14.632515Z qemu-system-x86_64: -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.2,addr=0x4: Device initialization failed
  13. Appreciate all the good feedback, it was very informative!
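
NIC-detection sketch (post 1): a minimal way to check, from the Unraid terminal, whether the kernel sees the onboard Realtek and whether it was given an ethX interface, using only the stock lspci and ip tools. This is a generic check, not something pulled from the attached diagnostics.

    # List Ethernet-class PCI devices with vendor:device IDs and the driver bound to each.
    lspci -nnk | grep -A3 -i ethernet

    # List the network interfaces the kernel actually created. A Realtek that shows up
    # in lspci but has no entry here usually means its driver did not bind.
    ip -br link show

If the second command shows no Realtek interface at all, that would explain why Network Settings has nothing to offer for it.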
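
Default-route sketch (posts 2, 3 and 7): a way to inspect the active default route and, as a temporary measure, point it at the gateway on the primary 10.71.74.x network. The gateway and bridge name are taken from the network.cfg quoted in post 6, so treat them as assumptions for any other setup. A route changed this way does not survive a reboot; the persistent fix is to leave a gateway configured only on the interface that should carry the default route, which is effectively the change described in post 5.

    # Show the default route currently in use (gateway and outgoing interface).
    ip route show default

    # Temporarily move the default route to the primary bridge's gateway.
    ip route replace default via 10.71.74.1 dev br0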
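
network.cfg sketch (posts 6 and 7): a guess at what the file could look like once the stale eth1/br1 block is removed, built only from the values already quoted in post 6. SYSNICS is the count of interface blocks the file describes, so it drops from "3" to "2", and I am assuming the bracketed indexes have to stay contiguous from 0, which is why the br2 block moves from [2] to [1]. The gateway on the secondary interface is left out, matching the change described in post 5, so only br0 supplies a default route. This is a hand-edited sketch, not a file generated by Unraid; back up the original first, and if anything looks off, deleting the file and rebooting (as in post 1) lets the GUI rebuild it.

    # Generated settings (trimmed to two interfaces)
    DHCP_KEEPRESOLV="yes"
    DNS_SERVER1="10.71.74.1"
    DNS_SERVER2="8.8.8.8"
    DNS_SERVER3="9.9.9.9"
    DHCP6_KEEPRESOLV="no"
    IFNAME[0]="br0"
    BRNAME[0]="br0"
    BRNICS[0]="eth0"
    BRSTP[0]="no"
    BRFD[0]="0"
    DESCRIPTION[0]="Primary Intel Nic"
    PROTOCOL[0]="ipv4"
    USE_DHCP[0]="no"
    IPADDR[0]="10.71.74.23"
    NETMASK[0]="255.255.255.0"
    GATEWAY[0]="10.71.74.1"
    USE_DHCP6[0]="yes"
    IFNAME[1]="br2"
    BRNAME[1]="br2"
    BRNICS[1]="eth2"
    BRSTP[1]="no"
    BRFD[1]="0"
    DESCRIPTION[1]="OnBoard Realtek Nic"
    PROTOCOL[1]="ipv4"
    USE_DHCP[1]="no"
    IPADDR[1]="10.71.72.6"
    NETMASK[1]="255.255.255.248"
    SYSNICS="2"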
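
Verification sketch (post 8): ip link del only changes the running system, so it is a reasonable experiment rather than trouble, but a bridge that is still defined in network.cfg will come back at the next reboot. A quick way to confirm the current state:

    # Should report that the device does not exist if the bridge is really gone.
    ip link show br1

    # Should print nothing for br1 if no stale route remains.
    ip route show | grep br1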
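
vfio-pci sketch (post 11): alternatives to the ACS Override toggle depend on what the IOMMU-group sketch below reports. If the NIC shares its group only with PCIe bridges, binding the NIC to vfio-pci at boot by vendor:device ID is usually enough; if other endpoint devices share the group, the remaining options are binding those as well, moving the card to a better-isolated slot, or the override already in use. The 8086:10d3 ID comes from post 12, and the syslinux.cfg entry below is the usual Unraid layout, shown here as an assumption rather than a copy of the actual boot file. Note that ID-based binding grabs every device with that ID, so it only suits a system where the Unraid-side NICs are different models.

    # /boot/syslinux/syslinux.cfg - add vfio-pci.ids to the append line of the default boot entry
    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=8086:10d3 initrd=/bzroot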
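
IOMMU-group sketch (post 12): "group 13 is not viable" means some other device in the NIC's IOMMU group is still bound to its regular driver. A minimal check of what shares a group with the card at 06:00.0, using only sysfs and lspci from the Unraid terminal:

    # Print every PCI device in the same IOMMU group as the NIC at 06:00.0.
    for dev in /sys/bus/pci/devices/0000:06:00.0/iommu_group/devices/*; do
        lspci -nns "$(basename "$dev")"
    done

If anything other than the NIC and PCIe bridges appears, that is what made the group non-viable; it either needs to be bound to vfio-pci as well or separated via slot choice or the ACS override from post 11.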