
Posts posted by AboveUnrefined

  1. Hello!

     

    I'm trying to figure out how to reliably pass through the four NIC interfaces on my Intel 82576 (rev 01) card.

     

    I have everything set up with this card for a pfSense VM that I created. It wasn't difficult: everything looked fine with the IOMMU groups, and I was able to easily select each interface for passthrough to the VM:

    [screenshots: IOMMU groups and interface selection for the passthrough setup]

    The problem I've been experiencing is that if I restart the server altogether, the physical ports seem to shuffle around when pfSense starts back up. Sometimes they even swap when I just restart the VM, though that seems to happen less often.

     

    The other thing I've noticed is that only 3 of the 4 interfaces show up within pfSense. Something doesn't seem right with how the XML is being configured, though I could be wrong. I'm using an older HP Z230 workstation with a Xeon E3-1200 v3 processor, and I'm fairly sure I set all the virtualization parameters correctly.

     

    What I've been trying to do is adjust the XML, changing the `hostdev` elements, but I haven't had much success. So far I've changed the bus and function attributes of the guest-side address elements to match the host addresses more closely. All I think I've accomplished is making 2 of the 4 ports on the card stay static; the layout has survived a few reboots at this point without my having to change anything:

     

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> <!-- I changed this to be `bus=0x01` and `function=0x1` -->
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> <!-- I changed this to be `bus=0x04` and `function=0x1` -->
        </hostdev>

     

     

    My question is whether someone with better knowledge of this might know what else could be going on, or how to make it so I can properly pass through all 4 NICs from the card to the VM. I don't think editing the XML like this is the right path...
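    For what it's worth, one layout I've been considering (just a sketch; the guest bus/slot numbers are my own guess, and I haven't confirmed it stops the reordering) is to keep both host functions of each port pair on a single guest slot, with `multifunction='on'` on function 0, so the guest always sees each pair as one two-function device. For the bus 0x05 pair it would look like:

    ```xml
    <!-- both host functions of the 0x05 device share one guest slot -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
    </hostdev>
    ```

    The bus 0x06 pair would get its own guest slot the same way. As I understand it, pinning the guest-side topology like this should at least keep FreeBSD's igb0..igb3 ordering stable across host reboots.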

  2. On 11/1/2021 at 6:56 PM, huskycdn said:

    I've been having this issue for the past year (currently on version 6.9.2); I was using Chrome and switched to Firefox. I just reboot the server; it happens every other month or so. It would be nice to hear an official reply from Limetech.

     

    I agree. I've seen this happening for who knows how long. I've got life to deal with, and then this crap crops up. Thank goodness it's not stopping anything right now, but it'd be nice to know this won't end up being a bigger problem than "I can't access the web terminal".

  3. On 2/10/2020 at 10:27 PM, dlandon said:

    Yes.  It was never intended for UD to support VM disk images.  VM images should be on the cache or an array disk.  UD happens to handle mounting and supporting a VM image disk to a point.  Spin down control is fixed and disk monitoring is non-existent.

     

    I know the arguments about how the cache and array don't do what you want right now.  Eventually Unraid will have multiple cache device capability that will let you do what you are doing with UD.

    Yes, and I don't intend to change it.

     

    Post a feature request for LT to add the multiple cache device capability.

    Alright, thanks for your answer -- I'll likely restructure my array to have cache disks instead and put the VM images on those.

  4. On 12/27/2019 at 6:29 AM, dlandon said:

    All disks mounted by UD will have a spin down time set at 30 minutes. This cannot be changed.

    Is this going to stick around? I'm hitting a problem where I have a VM disk image on a drive that gets spun down, and if I try to start the VM while the drive is spun down, I have problems. It's not the end of the world, but it's the sort of inconvenience that forces me to forcefully shut down the machine once it's stuck and restart it.

    The drive is an SSD, and I don't really need it, or any of the drives under the UD domain, to be spun down. While trying to figure this out, I assumed it was at least following the "Default spin down delay" setting in Unraid's Disk Settings, but now that I've found this post, I suppose it's a hard 30-minute spin down...

  5. On 5/14/2018 at 8:09 PM, BobPhoenix said:

    Did you make a directory inside your VM called /home/daylend/UbuntuServer and then reboot or mount -a to reload fstab?

     

    I did the following:

    [screenshot: the 9p share settings]

     

    Then I created the directory "/unraid" (not that the exact path matters; I could have used "/mnt/unraid", for instance) and used that below instead of /unraid.

     

    Then I added the following to fstab and rebooted:

    unraid        /unraid            9p         trans=virtio,version=9p2000.L,_netdev,rw 0 0

     

    And my user shares from my unRAID server are listed when I run "ls /unraid".

     

    Hope this helps.

     

    Hey man, thanks for this; it worked for a new Arch VM I'm setting up right now. Read/write works perfectly.

     

    To clarify, what BobPhoenix posted amounts to adding the following to your fstab:

     

    `tag    /mountpoint    9p  trans=virtio,version=9p2000.L,_netdev,rw 0 0`

     

    You may also want to make sure you have the following:

    /etc/mkinitcpio.conf

    MODULES=(virtio virtio_blk virtio_pci virtio_net)
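    One more note: the `unraid` on the left of that fstab line is the 9p mount tag, which has to match the `<target dir=.../>` of the `<filesystem>` element in the VM's XML. A minimal sketch (the host source path here is just an example; yours will differ):

    ```xml
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user'/>
      <target dir='unraid'/>
    </filesystem>
    ```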

  6. On 3/5/2019 at 4:03 AM, knex666 said:

    I had kind of the same issue: not "updated", but it always said "update ready". That was because I had changed things inside the image. So if you exec -it into the container, you should see that "update ready" message. Did you do that?

    The containers that have this issue weren't altered like that. I think there's a bug either in Docker or in the way it's set up in Unraid. Diagnosing it is a pain right now, and I've found an interim solution that works all right, so I'm OK with it for now.

     

    On 3/4/2019 at 6:50 AM, Lasbo55 said:

    I'm also experiencing this with home-assistant. Removing and re-adding the container seemed to work, but a few days later it regressed: auto-update says it updated it, but it still shows "update ready". I've also had a few situations where the Docker container just disappears from the UI and becomes inaccessible, seemingly at random and not just when it purports to do an update. Very annoying with a home-automation server that's suddenly no longer there to turn the lights on and off as expected! I've found no solution that sticks, so for now I've removed it from auto-updating to see if that at least helps the container stay up and running.

    I've experienced this as well and don't use the auto-update features because of this sort of thing. At least it's easy to spin containers back up if things get really messed up, like what you're describing with the orphan images.

    I don't think it did update; it did _something_, but then just re-spawned at the same version as before. This issue is happening sporadically across a couple of different containers, too: today home-assistant/home-assistant is having it when it hadn't previously. I've noticed the issue might sort itself out later on... I just wish I knew why it happens.

     

    In the meantime, is there a good way to manually force the update? I'm guessing it might involve removing the container and re-adding it. Would the template I used for each container stay the way I had it if I were to do this?

     

    EDIT: I bit the bullet and removed the container and image, then used the template I had for that container to re-add it. It STILL showed "update ready", but when I clicked it and performed the update, it finally went ahead and marked it as updated. That solved this situation, and it's what I'll do from now on if containers decide not to update for some reason...
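    For anyone else landing here, the CLI equivalent of that remove-and-re-add workaround looks roughly like this (the container and image names are placeholders; yours will differ):

    ```shell
    # stop and remove the container, then delete its local image so the
    # next create pulls a fresh copy (names below are placeholders)
    docker stop home-assistant
    docker rm home-assistant
    docker rmi homeassistant/home-assistant:latest
    # then re-create the container from the saved template in Unraid's
    # Docker tab and run the update once more so the flag clears
    ```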

  8. Here's an example of what I'm talking about:

     

    In this example, I see that Gitlab-CE has "update ready", so I go ahead and click it. This is what I see:

    [screenshot: the container update process output]

     

    I go ahead and click done. This is what I see next, even after I reload:

    [screenshot: Gitlab-CE still showing "update ready"]

     

    I guess it updated? Or it didn't? I couldn't really say... I'm supposing not, since it's still saying "update ready".

  9. 1 hour ago, bonienl said:

    Check your Internet connection, in particular your DNS settings?

     

    Do you use pi-hole?  If so make sure your Unraid server is set to use a different DNS server and not pi-hole

    Internet and DNS are completely fine (as far as I can tell), and I do not use pi-hole. I can download the updates and they do seem to install through the interface, but the "update ready" message persists.

  10. Hello,

     

    I've been searching around for a reason why this may be happening, but I can't find anything clear; I could also be misunderstanding what's already been posted about the issue I'm experiencing:

     

    I have a bunch of containers reporting "update ready", so I go ahead and update them. The update process runs, and when it finishes, they still report "update ready". I've had this problem for a while now; I've looked here and there and just assumed it might be fixed in upcoming releases.

     

    The odd part is that when I recently updated to 6.7.0-rc2, the problem seemed fixed: containers would update and then report that they were updated, with no "update ready" messages. Now it's back to the same old behavior, showing "update ready" all the time.

     

    Does anybody know why this would happen?

     

    Thanks

    I'm having problems with this as well. I can confirm br0 is set correctly, and:
     

    root@Tower:~# ps aux | grep libvirtwol
    root     19502  0.0  0.0   9812  2136 pts/2    S+   19:33   0:00 grep libvirtwol
    root     22023  0.0  0.0 143204 22008 ?        S    19:15   0:00 /usr/bin/python /usr/local/emhttp/plugins/libvirtwol/scripts/libvirtwol.py br0

    is the output from checking what's running while the plugin is enabled.

     

    I've never been able to get this plugin to work in a couple of years now. It hasn't been a big deal until now, when I want to add some automations that have something else send a magic packet to turn on some VMs. Given the situation, I'm interested in trying any alternative as well.
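    In case it helps anyone test the receive side independently of the plugin: a magic packet is just 6 bytes of 0xff followed by the target MAC repeated 16 times, and it can be built and sent from plain bash (the MAC and broadcast address below are placeholders for your VM's values):

    ```shell
    # build the magic packet payload as hex: 6 x ff, then the MAC 16 times
    mac=525400123456                 # placeholder MAC, separators stripped
    packet=ffffffffffff
    for i in $(seq 1 16); do packet=$packet$mac; done
    echo ${#packet}                  # 204 hex chars = 102 bytes
    # convert the hex to raw bytes and send via bash's /dev/udp
    # pseudo-device to the LAN broadcast address on the discard port
    printf '%b' "$(echo "$packet" | sed 's/../\\x&/g')" > /dev/udp/192.168.1.255/9
    ```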

    Thanks for the reply. I went ahead and tried the binhex setup, and that worked. If it stops working, I'll try what you suggested. I think it might see that other subnet because of how I have a bridged interface (composed of 2 NICs), and that might be part of the issue. The 192.168.45.0/24 subnet is what it should be, but I'm not totally aware of how this works either, so I'm not sure...

  13. Hello!

     

    I'm trying to get the qbittorrent container going but I'm having issues. I'm hoping someone can help me out!

     

    I keep getting this issue while it's starting up:
     

    Error: Nexthop has invalid gateway.

     

    This is from the Docker log; everything in the qBittorrent log looks fine (no errors or other red flags). I can include more if needed:

     

    Fri Dec 14 01:24:57 2018 [VPN] Peer Connection Initiated with [AF_INET]185.80.222.63:443
    Fri Dec 14 01:24:58 2018 TUN/TAP device tun0 opened
    Fri Dec 14 01:24:58 2018 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
    Fri Dec 14 01:24:58 2018 /sbin/ip link set dev tun0 up mtu 1500
    Fri Dec 14 01:24:58 2018 /sbin/ip addr add dev tun0 local 10.10.8.110 peer 10.10.8.109
    Fri Dec 14 01:24:58 2018 Initialization Sequence Completed
    2018-12-14 01:24:58.890754 [info] WebUI port defined as 8082
    2018-12-14 01:24:58.925592 [info] Adding 192.168.45.0/24 as route via docker eth0
    Error: Nexthop has invalid gateway.
    2018-12-14 01:24:58.958437 [info] ip route defined as follows...
    --------------------
    default via 10.10.8.109 dev tun0
    10.10.8.1 via 10.10.8.109 dev tun0
    10.10.8.109 dev tun0 proto kernel scope link src 10.10.8.110
    172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.8
    185.80.222.63 via 172.17.0.1 dev eth0
    --------------------
    iptable_mangle 16384 1
    ip_tables 24576 3 iptable_filter,iptable_nat,iptable_mangle
    2018-12-14 01:24:58.994864 [info] iptable_mangle support detected, adding fwmark for tables
    2018-12-14 01:24:59.041228 [info] Docker network defined as 172.17.0.0/16
    2018-12-14 01:24:59.092341 [info] Incoming connections port defined as 8999
    2018-12-14 01:24:59.125368 [info] iptables defined as follows...
    --------------------
    -P INPUT DROP
    -P FORWARD ACCEPT
    -P OUTPUT DROP
    -A INPUT -i tun0 -j ACCEPT
    -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --sport 443 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
    -A INPUT -s 192.168.45.0/24 -i eth0 -p tcp -m tcp --dport 8999 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A OUTPUT -o tun0 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --dport 443 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
    -A OUTPUT -d 192.168.45.0/24 -o eth0 -p tcp -m tcp --sport 8999 -j ACCEPT
    -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    --------------------
    Adding 100 group
    groupadd: GID '100' already exists
    Adding 1000 user
    2018-12-14 01:25:00.392667 [info] UMASK defined as '002'
    2018-12-14 01:25:00.432182 [info] Starting qBittorrent daemon...
    Logging to /config/qBittorrent/data/logs/qbittorrent-daemon.log.
    2018-12-14 01:25:01.468780 [info] qBittorrent PID: 213
    2018-12-14 01:25:01.472233 [info] Started qBittorrent daemon successfully...

     

    thanks for helping out and for all the great work!
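    For anyone who lands on the same error: as I understand it, `ip route add X via GW` fails with "Nexthop has invalid gateway" when the gateway isn't directly reachable on any interface, e.g. when a LAN_NETWORK-style setting doesn't match the Docker subnet. Here's a quick bash sanity check that a gateway sits inside a subnet (values taken from the log above; note they actually pass the check here, which is partly why I'm confused):

    ```shell
    # check whether a gateway IP falls inside a CIDR subnet; here the
    # docker gateway 172.17.0.1 against the docker network 172.17.0.0/16
    subnet=172.17.0.0/16
    gw=172.17.0.1
    ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }
    net=${subnet%/*}; bits=${subnet#*/}
    mask=$(( (0xffffffff << (32 - bits)) & 0xffffffff ))
    if [ $(( $(ip_to_int "$gw") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
      echo "gateway in subnet"     # prints this for the values above
    else
      echo "gateway NOT in subnet"
    fi
    ```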

  14. Hi everyone!

     

    Just wanted to say hello and thank everyone for sharing their workmanship on this product! It's such a well-crafted product and works perfectly on the Z820 I snagged, with its 128 "gibsons" (sic) of memory.
