Defylimits


Community Answers

  1. Well, I had the same issue, thanks Celsian. Have to agree with Meep though, very counterintuitive, been bugging me since I upgraded to 6.12.2. Weirdly, it didn't get resolved when I tried to downgrade!
  2. This Solution from Celsian worked - Thread - https://forums.unraid.net/topic/140858-vms-on-br0-cannot-access-the-internet-after-6120/#comment-1283343
  3. OK, bit of an update: I reverted to 6.12.1 but it appears to have made no difference. However, I did notice my Windows VM can access the web pages of dockers that are either on my custom docker network "dockernet" or assigned to "host", but not any of the dockers on br0, so there appears to be something blocking the VMs from communicating with the main network. So I'm going to guess it's something to do with the routing table, but I'm not sure how to fix that tbh.
  4. So I have 2 VMs I run constantly, my HAOS and a Windows 10 VM. They have both recently lost connection to my network, and it seems to be after I updated to 6.12.2. I do have VLANs running: a Server VLAN (100), 10.10.10.0/24, and an IoT VLAN (20), 192.168.20.0/24, with both VMs being on the 100 VLAN and the HAOS having a second connection to the IoT VLAN. The Unraid interface is on the 100 VLAN as well, which is the default VLAN for the port the server is plugged into. I have tried updating the Network Source from br0.100 to br0 and even virbr0, but no connection. I have also tried changing the Network Model from virtio to virtio-net, but no luck then either. Weirdly, I can't access either of the IPs and the Windows VM shows no connection, however in my Ubiquiti router I see all 3 connections and the fixed IPs from the 2 VMs, even after refreshing the MAC addresses from the network connections as well. Am really confused; all the network connections for dockers are working, some of which are host, others bridged and others on a custom network. Not sure what the problem is or what to try, anyone have any idea? Diagnostics attached btw. tower-diagnostics-20230706-1638.zip
  5. Idiot, didn't stub the graphics! Tools >> System Devices - select the IOMMU group with the graphics card in it, scroll to the bottom and click "Bind Selected to VFIO at Boot".
  6. Looking at the syslog, I have these errors around the time I'm having the issue. Am I missing something?
  7. So I've got an interesting issue: if I assign a graphics card and then start the VM, my VM Manager seems to crash. When I click on the VMs tab or on VM Manager under Settings, neither page in the Unraid GUI will load, and I also lose access to the Dashboard page, which will not load correctly. VMs that are already running still appear to be accessible while this is happening, and dockers appear to be online. To get the system back working I need to reboot, but I can't do this from the GUI and have to go power off the server physically and then restart it that way. I've got three graphics cards in my machine: a GT 710 for Unraid, a Quadro P600 for Plex transcoding and the 1070 I'm looking at passing through to my VM. I tried this on the Windows VM I had, just by changing from VNC graphics to the 1070, and experienced this crash a number of times; resetting back to VNC graphics and everything works fine. Looking something up, I found it might be an issue running the OVMF BIOS, so I set up another Ubuntu machine with SeaBIOS, but had just the same issue. Wondering if anyone has any ideas? Btw hello again everyone, last time I was in the forum was back in March 2020; the pandemic kept me away from home for some time, so I didn't have a chance to fiddle and bugger anything else up until now!
  8. Thanks! I hadn't realised that you could reorder the interfaces. So just an update for you: eth0 - 10GbE network card ---->> br0 (dockers and VMs); eth1 - 1GbE onboard (no IP address); eth2 - 1GbE onboard (port down). That's how I've got it working so far, and I've changed my strategy. Now I want to have a second bridge connection linked to eth1, so that I can assign some dockers to use just that port. However, whenever I follow the setup in your guide below, my docker image becomes orphaned as soon as the docker is started with this: "Modify any Docker via the WebUI in Advanced mode. Set Network to None. Remove any port mappings. Fill in the Extra Parameters with: --network docker1. Apply and start the docker. The docker is assigned an IP from the pool 10.0.3.128 - 10.0.3.254; typically the first docker gets the first IP address." Not sure what the cause of this would be?
  9. That's great, thanks. How do I also access the network shares through this connection as well? To be honest, I'd prefer just to run everything off the 10GbE connection if possible, maybe with the 1GbE connection for management access?
  10. Hey everyone, just wondering if someone can help me out, as I'm a bit green with networking in general and on Unraid. Just added a new network card to my server (10GbE, as it was cheap) and am wondering how to configure my dockers and VMs to use it. Currently I have: eth0 - 10.10.10.208 - motherboard port 1 - Unraid host connection, the only connection I've been using up until now; eth1 - port down - motherboard port 2; eth2 - 10.10.10.210 - Asus 10GbE network card - the connection I want to use for all dockers and VMs. I disabled bridging on eth0 and enabled bridging on eth2, and a custom br2 was created. Reassigning the VMs to this bridge, with their IP addresses set to 10.10.10.211 and 10.10.10.213, appears to have got these working. Now on to the dockers, which are running on the host network and so use the 10.10.10.208 address; however, I want these to use the 10.10.10.210 address. How do I point them in that direction and thus use the 10GbE port? If I bridge the docker then I get something like "172.17.0.3:9987/UDP --> 10.10.10.208:9987" (TeamSpeak), and if I use br2 like the VMs I get "10.10.10.209:80/TCP --> 10.10.10.209:80" (web service). I feel like I need to create a separate Host2 or Bridge2 network type that I can select when assigning a docker, but am unsure how to go about this. Anyone got any pointers?
  11. I have a similar problem: are the P5000 and P4000 (one is in my server) supported by this driver version (see snip)? My GT 710 and 1080 both showed up on the Info page, and the 1080 disappeared when it was being used as passthrough for my VM, but the Quadro P4000 does not show up on the Info page even though it is showing up on the System Devices page. Is there a way to get the UUID in another manner and still try passing it through to the docker? Or will it not work at all if it's not detected on the Unraid Nvidia page?
  12. Right, so a bit of an update: finally moved into a new house, after only 9 months of waiting *sigh*. Anyway, I found that changing the video cards to EFI mode from legacy allowed the machine to boot; all the cards show up in Unraid and I have even passed through the 1080 I was planning for the machine. Score! However, I don't seem to get Unraid on any of the display outputs that are linked to the machine. Does anyone have an idea why that might be?
  13. Basically yes, I've tried; most posts talk about putting two of the same cards in, sadly. What would be cool is if Unraid could re-enable the PCI slot once booted, but I'm guessing that is going to be a long shot.
  14. Good point, just upgraded from v3.5 to v3.95. Still the same issue, and I had a heart-stopper at one point thinking I'd bricked it. That moment when you start it up again and nothing displays for ages!!! Thanks though.
  15. So, as my signature says, I have an HP Z820 workstation and was looking to install additional graphics cards in it. I've got a 1080 I want to put in and assign to a VM, and a Quadro 4000 I want to use for transcoding, if I can get away with the two in my power budget. I've got a GT 710 that I'm using as the Unraid graphics card, btw. The problem is that the HP workstation throws an error when more than one card is installed and then won't boot. I can get it to boot by disabling the PCI slot that the other cards are in. Would there be a way in Unraid to then re-enable these once the system has booted? Thanks in advance for any help / suggestions.
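A quick aside on the addressing in item 4 above: the two VLANs are disjoint subnets, so a VM that lands on the wrong bridge simply has no route to the other network. A minimal Python sketch of that check (the VM address below is a made-up example, not from the post):

```python
import ipaddress

# The two VLANs described in item 4.
server_vlan = ipaddress.ip_network("10.10.10.0/24")   # Server VLAN (100)
iot_vlan = ipaddress.ip_network("192.168.20.0/24")    # IoT VLAN (20)

# Hypothetical VM address: a VM attached to br0.100 should sit inside
# the server VLAN's subnet, and nowhere near the IoT one.
vm_ip = ipaddress.ip_address("10.10.10.50")

print(vm_ip in server_vlan)  # True
print(vm_ip in iot_vlan)     # False
```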
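On the address pool quoted in item 8: the range 10.0.3.128 - 10.0.3.254 is simply the upper half of a /24, which is what a docker network given an ip-range of 10.0.3.128/25 would hand out. A small Python sketch of that arithmetic (illustration only, not taken from the quoted guide):

```python
import ipaddress

# The pool from item 8 expressed as a /25 ip-range.
pool = ipaddress.ip_network("10.0.3.128/25")
hosts = list(pool.hosts())  # usable addresses, excluding network/broadcast

print(pool.network_address)  # 10.0.3.128
print(hosts[0])              # 10.0.3.129 (first assignable host)
print(hosts[-1])             # 10.0.3.254 (last assignable host)
```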
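And on the bridged mapping in item 10 ("172.17.0.3:9987/UDP --> 10.10.10.208:9987"): bridged containers live on Docker's private subnet and reach the LAN via NAT through the host's address, which is why the host IP appears on the right-hand side of the mapping. A minimal sketch, assuming Docker's default bridge subnet of 172.17.0.0/16:

```python
import ipaddress

docker_bridge = ipaddress.ip_network("172.17.0.0/16")  # Docker's default bridge subnet
lan = ipaddress.ip_network("10.10.10.0/24")            # the LAN from item 10

# The container address from the mapping in item 10.
container = ipaddress.ip_address("172.17.0.3")

print(container in docker_bridge)  # True: bridged containers get private addresses
print(container in lan)            # False: LAN peers only ever see the host's IP
```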