
Defylimits

Members
  • Content Count

    19

Community Reputation

0 Neutral

About Defylimits

  • Rank
    Member


  1. Thanks! I hadn't realised that you could reorder the interfaces. So, just an update for you:

     eth0 - 10GbE network card ---->> br0 (Dockers and VMs)
     eth1 - 1GbE onboard (no IP address)
     eth2 - 1GbE onboard (port down)

     That's how I've got it working so far, and I've changed my strategy. Now I want a second bridge connection linked to eth1, so that I can assign some Dockers to use just that port. However, whenever I follow the setup in your guide below, my Docker image becomes orphaned as soon as the Docker is started:

     "Modify any Docker via the WebUI in Advanced mode
     Set Network to None
     Remove any port mappings
     Fill in the Extra Parameters with: --network docker1
     Apply and start the docker
     The docker is assigned an IP from the pool 10.0.3.128 - 10.0.3.254; typically the first docker gets the first IP address"

     Any idea what the cause of this might be?
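     For reference, a minimal sketch of how a custom network like the guide's docker1 could be created by hand, assuming a macvlan network parented on eth1; the subnet and gateway here are guesses based on the quoted IP pool:

     ```sh
     # Assumed layout: 10.0.3.0/24 LAN, container IPs drawn from .128-.254.
     docker network create -d macvlan \
       --subnet=10.0.3.0/24 \
       --gateway=10.0.3.1 \
       --ip-range=10.0.3.128/25 \
       -o parent=eth1 \
       docker1

     # Confirm the network exists and check its IPAM settings:
     docker network inspect docker1

     # Attaching a container manually, equivalent to the Extra Parameters step:
     docker run -d --network docker1 --name ts3 teamspeak
     ```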
  2. That's great, thanks! How do I also access the network shares through this connection? To be honest, I'd prefer to run everything over the 10GbE connection if possible, maybe with the 1GbE connection for management access?
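     In case it helps anyone reading later, a hedged sketch of forcing share traffic over a particular link by mounting via the server's 10GbE IP (10.10.10.210 from the post below); the share name and credentials are placeholders:

     ```sh
     # From a Linux client: address the 10GbE IP directly so SMB traffic
     # takes that link. Windows equivalent: map \\10.10.10.210\media.
     sudo mount -t cifs //10.10.10.210/media /mnt/media \
       -o username=me,password=secret,vers=3.0
     ```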
  3. Hey everyone, just wondering if someone can help me out, as I'm a bit green with networking in general and on Unraid. I've just added a new network card to my server (10GbE, as it was cheap) and am wondering how to configure my Dockers and VMs to use it. Currently I have:

     eth0 - 10.10.10.208 - Motherboard port 1 - Unraid host connection, the only connection I've been using up until now
     eth1 - Port down - Motherboard port 2
     eth2 - 10.10.10.210 - Asus 10GbE network card - The connection I want to use for all Dockers and VMs

     I disabled bridging on eth0 and enabled bridging on eth2, and a custom br2 was created. Reassigning the VMs to this bridge, with their IP addresses set to 10.10.10.211 and 10.10.10.213, appears to have got these working. Now on to the Dockers, which are running on the host network and so use the 10.10.10.208 address; I want these to use the 10.10.10.210 address instead. How do I point them in that direction and thus use the 10GbE port? If I bridge a Docker I get something like "172.17.0.3:9987/UDP --> 10.10.10.208:9987" (TeamSpeak), and if I use br2 like the VMs I get "10.10.10.209:80/TCP --> 10.10.10.209:80" (web service). I feel like I need to create a separate Host2 or Bridge2 network type that I can select when assigning a Docker, but I'm unsure how to go about this. Anyone got any pointers?
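     One option, sketched here with the TeamSpeak ports as an example (image name and ports are illustrative): when a container runs in normal bridge mode, its published ports can be bound to a specific host IP rather than to all interfaces, which would pin the mappings to the 10GbE address:

     ```sh
     # -p takes an optional host IP: hostIP:hostPort:containerPort.
     # Binding to 10.10.10.210 keeps the service off the 10.10.10.208 link.
     docker run -d \
       -p 10.10.10.210:9987:9987/udp \
       -p 10.10.10.210:10011:10011 \
       -p 10.10.10.210:30033:30033 \
       --name teamspeak teamspeak
     ```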
  4. I have a similar problem. Are the P5000 and P4000 (I have one in my server) supported by this driver version (see snip)? My GT 710 and 1080 both showed up on the Info page, and the 1080 disappeared when it was being used as passthrough for my VM, but the Quadro P4000 does not show up on the Info page even though it does appear on the System Devices page. Is there a way to get the UUID in another manner and still try passing it through to the Docker? Or will it not work at all if it's not detected on the Unraid Nvidia page?
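     If the NVIDIA driver itself has bound to the card, nvidia-smi can report UUIDs directly, independent of the plugin's Info page (though if the Info page can't see the card, the driver may not be able to either); a quick sketch:

     ```sh
     # List every GPU the driver can see, with its UUID:
     nvidia-smi -L
     # e.g.  GPU 0: Quadro P4000 (UUID: GPU-xxxxxxxx-....)

     # Or query just names and UUIDs:
     nvidia-smi --query-gpu=name,uuid --format=csv
     ```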
  5. Right, so a bit of an update: I finally moved into the new house, after only 9 months of waiting *sigh*. Anyway, I found that changing the video cards from legacy to EFI mode allowed the machine to boot; all the cards show up in Unraid, and I've even passed through the 1080 I was planning to use in that machine. Score! However, I don't seem to get Unraid on any of the display outputs connected to the machine. Does anyone have an idea why that might be?
  6. Basically yes, I've tried that; sadly, most posts talk about putting in two of the same card. What would be cool is if Unraid could re-enable the PCI slot once booted, but I'm guessing that is going to be a long shot.
  7. Good point, just upgraded from v3.5 to v3.95. Still the same issue, and I had a heart-stopper at one point thinking I'd bricked it. That moment when you start it up again and nothing displays for ages!!! Thanks though.
  8. So, as my signature says, I have an HP Z820 workstation and was looking to install additional graphics cards in it. I've got a 1080 I want to put in and assign to a VM, and a Quadro P4000 I want to use for transcoding, if I can get away with the two within my power budget. I've also got a GT 710 that I'm using as the Unraid graphics card, by the way. The problem is that the HP workstation throws an error when more than one card is installed and then won't boot. I can get it to boot by disabling the PCI slots the other cards are in. Would there be a way in Unraid to then re-enable these once the system has booted? Thanks in advance for any help / suggestions.
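     For what it's worth, Linux can re-scan the PCI bus at runtime through sysfs; whether a card in a firmware-disabled slot actually appears depends on whether the BIOS left the slot powered and assigned it resources, so this is very much the long shot mentioned above:

     ```sh
     # Ask the kernel to re-enumerate the PCI bus:
     echo 1 > /sys/bus/pci/rescan

     # Then check whether the extra cards showed up:
     lspci | grep -i nvidia
     ```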
  9. Hey, thanks. I never got to try this because I had installed another VM, but if I move the disk again (more than likely) I will try this.
  10. So, a further update on playing around with this: I did end up installing another Windows 10 VM, which works fine until you unassign the disk in the configuration and then reassign it. I didn't even move the vdisk location. Anyone got any ideas?
  11. So I tried assigning it to a newly created VM and still got the same issue, but thanks for the tip!
  12. Also, just as an FYI, I'm on Unraid version 6.7.2.
  13. The new drive is also mounted through Unassigned Devices. This is my VM XML: serenityxml.txt. To be honest, I also seem to have an issue with the disk showing up as 246G when it was assigned as 400G previously. When I try to change it, I get the success message displayed at the bottom, but the value always reverts to 246G.
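     A sketch for checking and fixing the reported size from the command line, assuming a raw or qcow2 image; the path is a placeholder for the real vdisk location:

     ```sh
     # Show the virtual size and on-disk size the file actually has:
     qemu-img info /mnt/disks/ssd/domains/serenity/vdisk1.img

     # If it genuinely is smaller than expected, grow it (VM stopped first;
     # the partition inside Windows still needs extending afterwards):
     qemu-img resize /mnt/disks/ssd/domains/serenity/vdisk1.img 400G
     ```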
  14. Hi there guys, wondering if someone can help me. I've got a Windows 10 VM that is refusing to boot. I recently moved the VM disk from one unassigned device (an old 1TB spinner) to a new one (a 500GB SSD), using the Dolphin app to copy the file across. Then I changed the location of the machine's disk to point to the new vdisk location. On trying to boot I get the attached boot screen image, with the statement "Press ESC in x seconds to skip startup.nsh or any other key to continue". I can get into the BIOS menu, but there appears to be no disk listed in there, and it still doesn't boot when I exit the BIOS. I tried re-pointing the VM back to the original disk location and get the same error. Wondering what I can do to get it to work. Any ideas?
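     A few hedged checks that can narrow this kind of thing down after a vdisk move (the path and VM name here are placeholders):

     ```sh
     # Confirm the copy landed where the template points and compare sizes
     # against the original file:
     ls -lh /mnt/disks/ssd/domains/serenity/vdisk1.img

     # The disk path, driver type (raw vs qcow2) and bus live in the VM's XML;
     # a format mismatch can leave OVMF with no bootable disk and drop you at
     # the EFI shell / startup.nsh prompt:
     virsh dumpxml "Windows 10" | grep -A 4 '<disk'
     ```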
  15. 56GB of ECC RAM, as one of my 8GB sticks died. I did have up to 128GB when I was running a VMware system in an HP DL360, but that was too noisy for home use once I had to move it out of the data centre it was in. That's also when I migrated to Unraid, and I never looked back!