AboveUnrefined's Achievements


  1. Alright, thanks for your answer -- I'll likely restructure my array to have cache disks instead and put the VM images on those.
  2. Is this going to stick around? I'm hitting a problem where I have a VM disk image on a drive that gets spun down; if I try to start the VM while the drive is spun down, the VM gets stuck and I have to forcefully shut down the machine and restart it. It's not the end of the world, but it is an inconvenience. The drive is an SSD and I don't really need it -- or any of the drives under the UD domain -- to be spun down. While trying to figure this out I assumed it was at least following the "Default spin down delay" setting in Unraid's Disk Settings, but now that I've found this post I'm supposing it's a hard 30-minute spin down...
  3. Hey man, thanks for this; it worked perfectly (read-write) for a new Arch VM I'm setting up right now. To clarify, what BobPhoenix posted is to add the following line to your fstab:

     ```
     tag /mountpoint 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0
     ```

     You may also want to make sure that /etc/mkinitcpio.conf contains:

     ```
     MODULES=(virtio virtio_blk virtio_pci virtio_net)
     ```
  4. The containers that would have an issue with this weren't altered like that; I think there's a bug either in Docker or in the way it's set up in Unraid. Diagnosing it is a pain right now, and I found an interim solution that works alright, so I'm OK with it for now. I've experienced this as well and don't use the auto-update features because of this sort of thing. At least it's easy to spawn the container back up if things get really messed up, like the orphan images you're describing.
  5. I don't think it did update; it did _something_ but then just re-spawned at the same version it was before. This issue is happening sporadically across a couple of different containers too -- today home-assistant/home-assistant is having the issue when it hasn't previously. I've noticed the issue might sort itself out later on; I just wish I knew why it happens. In the meantime, is there any good way to manually force the update? I'm guessing it might involve removing the container and re-adding it -- would the template I used for each container stay the way I had it if I did this? EDIT: I went ahead and removed the container and image, then used the template I had for that container to re-add it. It STILL showed "update ready", but when I clicked on it and performed the update, it finally marked it as updated. That solved this situation, and that's what I'll do from now on if containers decide not to update for some reason...
  6. Here's an example of what I'm talking about: I see that Gitlab-CE has "update ready", so I go ahead and click it; this is what I see: I go ahead and click Done. This is what I see next, even after I reload: I guess it updated? Or it didn't? I couldn't really say... I'm supposing not, since it's still saying "update ready".
  7. Internet and DNS are completely fine (at least as far as I can tell), and I do not use pi-hole. I can download the updates and they do seem to install through the interface, but the "update ready" message persists.
  8. Hey, sorry for the delay. The problem had to do with network access in a virtual machine container -- it wasn't so much the network itself as the container's access to the network.
  9. Hello, I've been searching around for a reason why this may be happening, but I can't find anything clear (or I could be misunderstanding what's already been posted about this issue): I have a bunch of containers reporting "update ready", so I go ahead and update them. They go through the update process, and when they finish, they still report "update ready". I've had this problem for a while now; I've looked here and there and assumed it might be fixed in upcoming releases. The odd part is that when I recently updated to 6.7.0-rc2 the problem seemed fixed -- containers would update and then report that they were updated, with no "update ready" messages. Now it's back to the same old behavior, showing "update ready" all the time. Does anybody know why this would happen? Thanks
  10. I figured out that the plugin worked fine, the problem was my network environment -- once I fixed what was blocking the magic packet everything does work fine. Thanks for the great plugin!
  11. I'm having problems with this as well; I can confirm br0 is set right. Here is the output from checking what's running while the plugin is enabled:

      ```
      root@Tower:~# ps aux | grep libvirtwol
      root 19502 0.0 0.0   9812  2136 pts/2 S+ 19:33 0:00 grep libvirtwol
      root 22023 0.0 0.0 143204 22008 ?     S  19:15 0:00 /usr/bin/python /usr/local/emhttp/plugins/libvirtwol/scripts/libvirtwol.py br0
      ```

      I've never been able to get this plugin to work for a couple of years now; it hasn't been a big deal until now, when I want to add some automations with something else sending a magic packet to turn on some VMs. Given the situation, I'm interested in trying any other alternative as well.
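For anyone who wants to script the magic packet themselves instead of relying on the plugin, here's a minimal sketch in Python using only the standard library. The MAC address shown is a placeholder -- substitute the MAC of the VM's NIC as defined in libvirt:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet over UDP (port 9 is the conventional WOL discard port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# Example (placeholder MAC for a libvirt NIC):
# send_magic_packet("52:54:00:12:34:56")
```

This is the same 102-byte packet format the plugin listens for, so it can be called from a home-automation script as the sender side.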
  12. Thanks for the reply. I went ahead and tried the binhex setup and that worked; if it ever doesn't, I'll try what you suggest. I think it might see that other subnet because of how I have a bridged interface (composed of 2 NICs), and that might be part of the issue -- the subnet is what it should be, but I'm not totally clear on how this is working either, so I'm not sure...
  13. Hello! I'm trying to get the qbittorrent container going but I'm having issues; I'm hoping someone can help me out! I keep getting this while it's starting up: `Error: Nexthop has invalid gateway.` This is from the docker log; everything in the qBittorrent log looks fine (no errors or any red flags). I can include more if needed:

      ```
      Fri Dec 14 01:24:57 2018 [VPN] Peer Connection Initiated with [AF_INET]
      Fri Dec 14 01:24:58 2018 TUN/TAP device tun0 opened
      Fri Dec 14 01:24:58 2018 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
      Fri Dec 14 01:24:58 2018 /sbin/ip link set dev tun0 up mtu 1500
      Fri Dec 14 01:24:58 2018 /sbin/ip addr add dev tun0 local peer
      Fri Dec 14 01:24:58 2018 Initialization Sequence Completed
      2018-12-14 01:24:58.890754 [info] WebUI port defined as 8082
      2018-12-14 01:24:58.925592 [info] Adding as route via docker eth0
      Error: Nexthop has invalid gateway.
      2018-12-14 01:24:58.958437 [info] ip route defined as follows...
      --------------------
      default via dev tun0
      via dev tun0
      dev tun0 proto kernel scope link src
      dev eth0 proto kernel scope link src
      via dev eth0
      --------------------
      iptable_mangle 16384 1
      ip_tables 24576 3 iptable_filter,iptable_nat,iptable_mangle
      2018-12-14 01:24:58.994864 [info] iptable_mangle support detected, adding fwmark for tables
      2018-12-14 01:24:59.041228 [info] Docker network defined as
      2018-12-14 01:24:59.092341 [info] Incoming connections port defined as 8999
      2018-12-14 01:24:59.125368 [info] iptables defined as follows...
      --------------------
      -P INPUT DROP
      -P FORWARD ACCEPT
      -P OUTPUT DROP
      -A INPUT -i tun0 -j ACCEPT
      -A INPUT -s -d -j ACCEPT
      -A INPUT -i eth0 -p udp -m udp --sport 443 -j ACCEPT
      -A INPUT -i eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
      -A INPUT -i eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
      -A INPUT -s -i eth0 -p tcp -m tcp --dport 8999 -j ACCEPT
      -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A OUTPUT -o tun0 -j ACCEPT
      -A OUTPUT -s -d -j ACCEPT
      -A OUTPUT -o eth0 -p udp -m udp --dport 443 -j ACCEPT
      -A OUTPUT -o eth0 -p tcp -m tcp --dport 8082 -j ACCEPT
      -A OUTPUT -o eth0 -p tcp -m tcp --sport 8082 -j ACCEPT
      -A OUTPUT -d -o eth0 -p tcp -m tcp --sport 8999 -j ACCEPT
      -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      -A OUTPUT -o lo -j ACCEPT
      --------------------
      Adding 100 group
      groupadd: GID '100' already exists
      Adding 1000 user
      2018-12-14 01:25:00.392667 [info] UMASK defined as '002'
      2018-12-14 01:25:00.432182 [info] Starting qBittorrent daemon...
      Logging to /config/qBittorrent/data/logs/qbittorrent-daemon.log.
      2018-12-14 01:25:01.468780 [info] qBittorrent PID: 213
      2018-12-14 01:25:01.472233 [info] Started qBittorrent daemon successfully...
      ```

      Thanks for helping out and for all the great work!
  14. Hi everyone! Just wanted to say hello and thank everyone for sharing their workmanship on this product! It's so well crafted and works perfectly for the z820 I scalped with 128 gibson (sic) memory.