DrDirtyDevil

Members
  • Posts: 38
  • Joined
  • Last visited


DrDirtyDevil's Achievements

Noob (1/14)

1 Reputation

  1. For anyone in the future in a similar situation: I had forgotten that a long time ago I added a static route directly into Unraid that points Site A's subnet to gateway 192.168.35.1. But with the migration from the UDM Pro to pfSense, I set pfSense on 192.168.35.3 so that I could run both gateways simultaneously. So after the migration, when I removed the UDM Pro from the setup, the traffic meant for Site A still went to 192.168.35.1, which at that point didn't exist anymore. It was a combination of that stale route and the host access MAC issues described in the post above and here:
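     For anyone wanting to check for the same thing, this is roughly how a stale route can be found and fixed from the Unraid shell (the Site A prefix below is a placeholder, substitute the real subnet):

       # show the routing table and look for routes still pointing at the old gateway
       ip route show
       # swap the stale UDM Pro next-hop for the pfSense one
       ip route del 192.168.20.0/24 via 192.168.35.1
       ip route add 192.168.20.0/24 via 192.168.35.3

     Unraid also keeps its static routes in the Network Settings page, so the change should be made there as well to survive a reboot.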
  2. Hi everyone, I am completely out of ideas, and very frustrated and tired as well... On Friday the 19th of August I installed a pfSense server in my network to replace my UDM Pro; everything went fine as far as I could tell at the time. I configured 2 site-to-site IPsec connections to 2 UDM Pros in other locations, the same configuration as before with the 3x UDM Pro setup, but now with 2x UDM and 1x pfSense.

     To make my story a bit more understandable, the 3 sites are: DC, Home, and Friend's Place.

     Site DC contains:
     - 1 Unraid server (Dell R730XD) running both docker and VMs, with an Intel Corporation Ethernet Controller X710 for 10GbE SFP+, but only copper RJ45 in use at the moment
     - 1 pfSense router
     - 2 switches
     - 1 Windows server
     - some miscellaneous stuff

     All 3 sites are connected together with site-to-site VPN. Everything was working fine immediately, with the exception of my Unraid machine: for some reason, whenever I started the array I would lose access to the web interface from Home (one side of the site-to-site connection). I still had access to IPMI (iDRAC), and from a Windows machine also in the DC site I was able to get to the web interface. I was, however, able to reach all other devices with web interfaces at DC from Home, so the VPN was not the issue.

     While troubleshooting this weird problem I figured out and tried a few things:
     - Discovered that the pfSense logs told me that 192.168.35.5 (the internal IP of the Unraid box) was changing MAC address every minute or so (a way to watch this is sketched after this post)
     - Disabled host access in docker, because my problem appears to be very similar to this:
     - Disabled both bridging and bonding on the Unraid box's interface
     - Discovered that being unable to reach the web UI was not the only problem: Tautulli was also spitting out notifications that the Plex server is unstable (saying UP and DOWN every few minutes), and playing movies was also unstable
     - Tried both static and DHCP IP allocation on the Unraid machine; the same issues remain
     - Downgraded from Unraid 6.10.3 to 6.9.2
     - Removed all br0 IPs previously used by a container; all docker containers now use either host or a custom docker network
     - Renamed network.cfg and network-rules.cfg

     There are definitely some troubleshooting steps I did that I have forgotten to mention, since I have been on this issue for over a week now with no luck. All dockers with a web interface routed through a Cloudflare tunnel and proxy are reachable.

     In a nutshell, I experience the following issues:
     - Plex is unstable
     - Some containers seem unable to see each other; for example, Syncthing is constantly trying to see other machines but is only rarely able to establish a connection, although all endpoints are either inside the DC LAN or reachable over the site-to-site VPN

     From Home:
     - The Unraid web UI is unreachable as soon as I start the array
     - Other docker containers, such as Syncthing, are unreachable by IP
     - ICMP ping works if the array is stopped, but switches to unreachable after array start

     I hope someone has some ideas left.
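     In case it helps anyone hitting the same thing, this is roughly how the MAC flapping can be observed (the IP is from my setup; the shim interface only exists while docker host access is enabled):

       # from pfSense or any other LAN host: the ARP entry for the Unraid IP
       # should keep one stable MAC address
       arp -an | grep 192.168.35.5
       # on the Unraid box itself: with host access enabled, a shim interface
       # (shim-br0) can answer for the host IP with its own MAC
       ip -br link show
       ip -br addr show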
  3. I have updated to 6.10-rc2 as per the suggestion, and my server now has an uptime of 6 days and counting, whereas it crashed every 24 hours before. Knock on wood...
  4. RC means Release Candidate, right? So you are referring to the new Unraid 6.10 RC? Do you mean I need to update my machine to fix this? I have seen a few posts saying otherwise, saying 6.10 doesn't fix the issue?
  5. Tagging along for the ride here; I have disabled host access as well, for testing purposes.
  6. I have the same issue over and over again and I can't find the culprit... I have Ubiquiti switches and I can't find flow control settings on them in the first place... All the other suggestions are not applicable in my situation: my LAN subnet is a completely different subnet from the default docker network ones, so that is also unlikely. Any suggestions?
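     For anyone wanting to rule out the subnet-overlap angle the same way, the docker network subnets can be listed and compared against the LAN (network names vary per setup):

       # print every docker network with the subnet it uses
       for n in $(docker network ls -q); do
         docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
       done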
  7. Is it me, or does the "category" path field not save? I have changed it multiple times now, but it does not appear to save the path.
  8. Hey! I need some help with the Assetto Corsa docker container. I have enabled Assetto Corsa Server Manager and everything is working fine except for images of any kind, as far as I can see: it doesn't load the directory of the images. For example, the map.png on the live timing dashboard doesn't load, for modded maps as well as official tracks, and the weather icons don't work either. I have configured a br0 network interface with its own IP. Can anyone shine some light on this?
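     A couple of first checks, in case anyone wants to dig in (the container name and content path here are placeholders; use the actual name and path mappings from the Unraid template):

       # look for missing-file or 404 errors around the image requests
       docker logs assettocorsa --tail 100
       # confirm the track content (where map.png lives) is visible inside the container
       docker exec assettocorsa ls /path/to/content/tracks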
  9. Netbox Container: I managed to get the container running and I managed to create a user, but after logging in I get the following error.

     Web interface:
     <class 'redis.exceptions.ConnectionError'> Error 99 connecting to localhost:6379. Cannot assign requested address.
     Python version: 3.10.0
     NetBox version: 3.0.11-dev

     Container logs:
     /usr/local/lib/python3.10/site-packages/django/views/debug.py:420: ExceptionCycleWarning: Cycle in the exception chain detected: exception 'Error 99 connecting to localhost:6379. Cannot assign requested address.' encountered again.

     Any thoughts?
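     Reading the error back, it looks like NetBox is trying to reach Redis on localhost inside its own container, where nothing is listening. If the image is based on netbox-docker, the Redis location comes from environment variables, so the relevant part of the container config would look something like this (the host value is a placeholder; point it at wherever Redis actually runs):

       # environment passed to the NetBox container
       REDIS_HOST=192.168.35.5   # or the redis container's name on a shared docker network
       REDIS_PORT=6379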
  10. Okay, so a long week has passed, and here's what changed and what I tried. I have upgraded the GPU to a 1050 Ti that a friend had lying around, both as an upgrade and as a troubleshooting step. I have since dumped that card's vBIOS with GPU-Z and changed my VM template to use that card and its ROM. The issue persisted, so I started googling again and found this: https://forums.unraid.net/topic/71371-resolved-primary-gpu-passthrough/ In the first post the answer has been written down at the bottom, and I tried the 3 commands suggested there on the Unraid command line. The result is that this particular error line has gone away:

     2021-09-06T16:59:06.742810Z qemu-system-x86_64: -device vfio-pci,host=0000:01:00.0,id=hostdev0,bus=pci.3,addr=0x0,romfile=/mnt/nvme/isos/nietaankomen/nvidiagt1030.rom: Failed to mmap 0000:01:00.0 BAR 1. Performance may be slow

     but the 2 other lines still appear:

     2021-09-12T15:26:12.666245Z qemu-system-x86_64: vfio_err_notifier_handler(0000:03:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest
     2021-09-12T15:26:13.917587Z qemu-system-x86_64: vfio_err_notifier_handler(0000:04:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest

     Now, I did change the XML to the one you provided, but it still doesn't work. I am now trying to connect a monitor to the iGPU, and I am going to check for a BIOS update. I will check back in later.

     Update: I tried connecting an external monitor and I updated the BIOS, but it does not appear to work. I also checked the PCIe ports, but I only have 2 and the secondary one is populated by a SATA host bus adapter. This is the motherboard I am using: https://rog.asus.com/motherboards/rog-strix/rog-strix-b560-f-gaming-wifi-model/
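     For reference, the commands from that thread are, if I copied them down correctly, the usual sequence for releasing the console framebuffer from the GPU before passthrough (the linked post is the authoritative version):

       # unbind the virtual consoles from the GPU framebuffer
       echo 0 > /sys/class/vtconsole/vtcon0/bind
       echo 0 > /sys/class/vtconsole/vtcon1/bind
       # release the EFI framebuffer itself
       echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind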
  11. Thanks, I will try this tomorrow. Thanks so much for your help!
  12. Ghost, thanks for your support! The XML part is way out of my comfort zone; this is the first time I'm even touching it. Could you please assist with this part?

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>X-Wing-Starfighter</name>
       <uuid>UUID</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='7'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='8'/>
         <vcpupin vcpu='4' cpuset='3'/>
         <vcpupin vcpu='5' cpuset='9'/>
         <vcpupin vcpu='6' cpuset='4'/>
         <vcpupin vcpu='7' cpuset='10'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/3e8e1afa-5c43-d965-af87-d937d10f00b3_VARS-pure-efi.fd</nvram>
         <boot dev='hd'/>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xf'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:4e:b7:d9'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <rom file='/mnt/user/isos/nietaankomen/nvidiagt1030.rom'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x0b05'/>
             <product id='0x19af'/>
           </source>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>

     I will try all the other things you suggested. Thanks again!
  13. What I also don't understand is that during Unraid boot the system seems to use the dedicated graphics anyway: I can see the boot screen on it, even though I changed the primary display option in the BIOS to CPU only. There is no cable in the motherboard display outputs, only an HDMI cable to the dedicated GPU.
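     A way to confirm which device the boot console actually grabbed, from the Unraid shell (if the dedicated GPU shows up here, the framebuffer attached to it during boot regardless of the BIOS primary-display setting):

       # list registered framebuffer devices
       cat /proc/fb
       # see which driver claimed the framebuffer during boot
       dmesg | grep -iE 'efifb|vesafb|framebuffer'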