AboveUnrefined

Members
  • Posts

    16
  • Joined

  • Last visited

  1. Hello! I'm trying to figure out how to reliably pass through the four NIC interfaces on an Intel 82576 (rev 01) card that I have. I have everything set up with this card for use with a pfSense VM that I created, and it wasn't difficult: the IOMMU groups looked fine and I was able to easily select each interface for passthrough to the VM.

The problem I've been experiencing is that if I restart the server altogether, the physical ports seem to get shuffled around when pfSense starts back up. Sometimes they'll even swap when I restart just the VM, though it seems to happen less often on a VM restart. The other thing I've noticed is that only 3 of the 4 interfaces show up within pfSense. Something just doesn't seem right with how the XML is being configured, but I could be wrong there. I'm using an older HP Z230 workstation with a Xeon E3-1200 v3 processor, and I'm pretty sure I set all the virtualization parameters correctly.

What I've been trying to do is muck around with the XML for this, changing the `hostdev` elements, but I haven't been very successful. So far I've tried changing the bus and function attributes of the guest-side address elements to match more closely how the card comes down the pipe; all I think I've managed is to make 2 of the 4 ports stay static, since that arrangement has survived a few reboots without me having to change anything around:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> <!-- I changed this to be `bus=0x01` and `function=0x1` -->
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> <!-- I changed this to be `bus=0x04` and `function=0x1` -->
</hostdev>
```

My question is whether someone with better knowledge of this might know what else could be going on, or how to make it so I can properly pass through all 4 NICs from the card to the VM. I don't think I'm going down the right path with editing the XML like this...
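For reference, here is a sketch of what grouping all four host functions onto a single guest slot would look like, in case that's a direction worth trying; it uses libvirt's standard `multifunction` attribute on the guest-side address so the guest always enumerates the ports as one 4-function device. The guest bus/slot values below are purely illustrative and untested here:

```xml
<!-- Sketch only: all four functions share guest bus 0x03 / slot 0x00 and
     function 0 carries multifunction='on'. The host-side <source> addresses
     are the same ones as above; only the guest-side addresses change. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
</hostdev>
```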
  2. I agree. I've seen this happening for who knows how long... I've got life to deal with and then this crap crops up. Thank goodness it's not stopping anything right now but it'd be nice to know that this won't end up being a bigger problem than "I can't access the web terminal".
  3. Alright, thanks for your answer -- I'll likely restructure my array to have cache disks instead and put the VM images on those.
  4. Is this going to be sticking around? I'm hitting a problem where I have a VM disk image on a drive that gets spun down; if the drive is spun down when I try to start the VM, I run into problems. It's not the end of the world, but it is an inconvenience, because once it gets stuck I have to forcefully shut the machine down and restart it. The drive is an SSD and I don't really need it, or any of the drives under the UD domain, to be spun down... While trying to figure this out I assumed it was at least following the "Default spin down delay" setting in Unraid's Disk Settings, but now that I've found this post I'm supposing it's a hard 30 minute spin down...
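In the meantime, as a stopgap, checking and waking the disk by hand before starting the VM might avoid the hang; a rough sketch (the device name is a placeholder for whatever the UD disk shows up as):

```sh
hdparm -C /dev/sdX                            # reports "standby" or "active/idle" for the drive
dd if=/dev/sdX of=/dev/null bs=512 count=1    # a small read forces the drive back to active
```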
  5. Hey man, thanks for this; it worked for a new Arch VM I'm setting up right now, and RW works perfectly. To clarify, what BobPhoenix posted is to add the following to your fstab:

`tag /mountpoint 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0`

You may also want to make sure that you have the following in /etc/mkinitcpio.conf:

`MODULES=(virtio virtio_blk virtio_pci virtio_net)`
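If you want to sanity-check the share before touching fstab, a one-off mount with the same options should work (here `tag` and `/mountpoint` are placeholders for the target name set in the VM's filesystem config and wherever you want it mounted):

```sh
mount -t 9p -o trans=virtio,version=9p2000.L tag /mountpoint
```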
  6. The containers that would have an issue with this weren't altered like that; I think there's a bug either in Docker or in the way it's set up in Unraid. Diagnosing it is a pain right now and I found an interim solution that works alright, so I'm ok with it for now. I've experienced this as well and don't use the auto-update features because of this sort of thing. At least it's easy to spawn things back if they get really messed up, like what you're describing with the orphan images.
  7. I don't think it did update; it did _something_ but then just re-spawned at the same version it was before. This issue is happening sporadically across a couple of different containers too: today home-assistant/home-assistant is having the issue when it hasn't previously. I've noticed the issue might sort itself out later on sometime... I just wish I knew why this happens. In the meantime, is there any good way to manually force the update to happen? I'm guessing it might involve removing the container and re-adding it... would the template I used for each container stay the way I had it if I did that?

EDIT: I just grew a set and removed the container and image, then used the template I had for that container to re-add it. It STILL showed "update ready", but when I clicked on it and performed the update, it finally went ahead and marked it as updated. That worked to solve this situation, and it's what I'll do from now on if containers decide not to update for some reason...
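For anyone who wants to do roughly the same thing from the command line instead of the UI, this is the general shape of it (the image name is the one from my case; the container name is just an example, and on Unraid you would still re-create the container from your saved template afterwards):

```sh
docker pull home-assistant/home-assistant         # manually pull the newest image
docker rm -f home-assistant                       # remove the existing container
docker rmi $(docker images -q -f dangling=true)   # optional: clean up orphaned image layers
```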
  8. Here's an example of what I'm talking about. I see that Gitlab-CE has "update ready", so I go ahead and click it, and this is what I see: [screenshot] I go ahead and click Done. This is what I see next, even after I reload: [screenshot] I guess it updated? Or it didn't? Couldn't really say... I'm supposing not, since it's still saying "update ready".
  9. Internet and DNS are completely fine (at least as far as I can tell), and I do not use pi-hole. I can download the updates and they do seem to install through the interface, but the "update ready" message persists.
  10. Hey, sorry for the delay. The problem had to do with network access in a virtual machine container -- it wasn't so much the network itself as the container's access to the network.
  11. Hello, I've been searching around for a reason why this may be happening, but I can't find anything clear (or I could be misunderstanding what's already been posted about it). The issue I'm experiencing: I have a bunch of containers reporting "update ready", so I go ahead and update them. They go through the update process and then, when they finish, they still report "update ready". I've had this problem for a while now; I've looked here and there and just assumed it might be fixed in upcoming releases. The odd part is that I recently updated to 6.7.0-rc2 and it seemed like the problem was fixed: containers would update and then report that they were updated, with no "update ready" messages. Now it's back to the same old behavior, showing "update ready" all the time. Does anybody know why this would happen? Thanks
  12. I figured out that the plugin worked fine; the problem was my network environment. Once I fixed what was blocking the magic packet, everything works fine. Thanks for the great plugin!
  13. I'm having problems with this as well; I can confirm br0 is set right, and this is the output from checking what's running while the plugin is enabled:

```
root@Tower:~# ps aux | grep libvirtwol
root     19502  0.0  0.0   9812  2136 pts/2    S+   19:33   0:00 grep libvirtwol
root     22023  0.0  0.0 143204 22008 ?        S    19:15   0:00 /usr/bin/python /usr/local/emhttp/plugins/libvirtwol/scripts/libvirtwol.py br0
```

I've never been able to get this plugin to work over a couple of years now. It hasn't been a big deal until now, when I want to add some automations with something else sending a magic packet to turn on some VMs. Given the situation, I'm interested in trying any other alternative as well.
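For reference, sending a test magic packet from another machine on the same network is straightforward with either of these (the MAC and broadcast address are placeholders; the MAC would be the VM's virtual NIC address from its libvirt XML, and the tools need to be installed separately):

```sh
wakeonlan -i 192.168.1.255 52:54:00:12:34:56    # -i sets the broadcast address to send to
etherwake -i br0 52:54:00:12:34:56              # -i sets the interface to send from (needs root)
```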
  14. Thanks for the reply. I went ahead and tried the binhex setup and that worked. If that doesn't work out, I'll try what you suggest. I think it might be seeing that other subnet because of how I have a bridged interface (composed of 2 NICs), and that might be part of the issue: the 192.168.45.0/24 subnet is what it should be, but I'm not totally sure how this is working either...
  15. Hello! I'm trying to get the qbittorrent container going but I'm having issues, and I'm hoping someone can help me out! I keep getting this while it's starting up: `Error: Nexthop has invalid gateway.` This is from the docker log; everything in the qBittorrent log looks fine (no errors or any red flags). I can include more if needed:

```
Fri Dec 14 01:24:57 2018 [VPN] Peer Connection Initiated with [AF_INET]185.80.222.63:443
Fri Dec 14 01:24:58 2018 TUN/TAP device tun0 opened
Fri Dec 14 01:24:58 2018 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Fri Dec 14 01:24:58 2018 /sbin/ip link set dev tun0 up mtu 1500
Fri Dec 14 01:24:58 2018 /sbin/ip addr add dev tun0 local 10.10.8.110 peer 10.10.8.109
Fri Dec 14 01:24:58 2018 Initialization Sequence Completed
2018-12-14 01:24:58.890754 [info] WebUI port defined as 8082
2018-12-14 01:24:58.925592 [info] Adding 192.168.45.0/24 as route via docker eth0
Error: Nexthop has invalid gateway.
2018-12-14 01:24:58.958437 [info] ip route defined as follows...
--------------------
default via 10.10.8.109 dev tun0
10.10.8.1 via 10.10.8.109 dev tun0
10.10.8.109 dev tun0 proto kernel scope link src 10.10.8.110
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.8
185.80.222.63 via 172.17.0.1 dev eth0
--------------------
iptable_mangle         16384  1
ip_tables              24576  3 iptable_filter,iptable_nat,iptable_mangle
2018-12-14 01:24:58.994864 [info] iptable_mangle support detected, adding fwmark for tables
2018-12-14 01:24:59.041228 [info] Docker network defined as 172.17.0.0/16
2018-12-14 01:24:59.092341 [info] Incoming connections port defined as 8999
2018-12-14 01:24:59.125368 [info] iptables defined as follows...
--------------------
-P INPUT DROP
-P FORWARD ACCEPT
-P OUTPUT DROP
-A INPUT -i tun0 -j ACCEPT
-A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --sport 443 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
-A INPUT -s 192.168.45.0/24 -i eth0 -p tcp -m tcp --dport 8999 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --dport 443 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
-A OUTPUT -d 192.168.45.0/24 -o eth0 -p tcp -m tcp --sport 8999 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
--------------------
Adding 100 group
groupadd: GID '100' already exists
Adding 1000 user
2018-12-14 01:25:00.392667 [info] UMASK defined as '002'
2018-12-14 01:25:00.432182 [info] Starting qBittorrent daemon...
Logging to /config/qBittorrent/data/logs/qbittorrent-daemon.log.
2018-12-14 01:25:01.468780 [info] qBittorrent PID: 213
2018-12-14 01:25:01.472233 [info] Started qBittorrent daemon successfully...
```

Thanks for helping out and for all the great work!
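For context on where this fails: the error appears immediately after the `Adding 192.168.45.0/24 as route via docker eth0` line, so presumably the startup script's `ip route add` for the LAN subnet is what the kernel is rejecting; "Nexthop has invalid gateway" means the address given after `via` isn't directly reachable on that interface. A rough way to poke at it by hand (the container name and gateway value here are assumptions, not taken from the script):

```sh
docker exec -it qbittorrentvpn bash                      # drop into the container; name is a placeholder
ip route show                                            # confirm which subnet and gateway eth0 actually has
ip route add 192.168.45.0/24 via 172.17.0.1 dev eth0     # the kind of route the script appears to be adding
```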