Zxurian

Members
  • Posts: 42

  1. Going through Unraid's native VPN manager would've been nice to save one of those slots. Currently, my Unraid box uses 2: one for qBit directly, and a second through VPN Manager that all other containers are funneled through. From what I was able to gather, you might be able to do that with an OpenVPN configuration, but not with WireGuard, since it's defined entirely through settings within Unraid. Also, I am _definitely_ not an expert, so if anyone else has more to say, by all means.
  2. No, I was never able to make it work. I settled for just running the qbittorrent_vpn container on a regular bridge, then setting up the VPN connection within the container itself (instructions are there). Every other container I have is using the `wg0` network as described above.
  3. When editing the container config, set Network Type to `wg0`. Also make sure that the `wg0` tunnel is active under your VPN settings.
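     If it helps, this is roughly the command-line equivalent of what that setting does (a sketch on my part, assuming the tunnel is named `wg0` and therefore shows up as a Docker network of the same name; the image is just an example):

     ```bash
     # A network named after the tunnel should exist once
     # "VPN tunnel for docker containers only" is active.
     docker network ls

     # Attaching a container to it is the CLI equivalent of picking
     # Network Type: wg0 in the container template.
     docker run -d --name example-firefox --network wg0 lscr.io/linuxserver/firefox
     ```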
  4. Understood. I rebooted two days ago, but got a new Out of Memory warning this morning. Based on what you said, it sounds like a reboot should flush the log that was reporting the Out of Memory errors, so _after_ a reboot, it shouldn't appear unless another Out of Memory event occurred? Latest diagnostics attached to try to isolate what is causing these all of a sudden. media-1-diagnostics-20231129-1245.zip
  5. Thanks. Is there a way to tell if it's Docker related? (I don't run any VMs.) I just rebooted the Unraid server after 5 straight days of getting this warning (latest diagnostics from before the reboot attached). Does the warning that Fix Common Problems displays mean that it's been Out of Memory for 5 days straight, or is it just seeing a single Out of Memory entry in the log file and, because I didn't reboot for 5 days, keeps seeing that same single entry and displays a warning each day? media-1-diagnostics-20231127-0742.zip
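     For reference, the timestamps in the syslog itself should settle which of those it is (a rough sketch; the exact kernel message text can vary). Since Unraid keeps `/var/log` in RAM, a reboot clears these entries, so anything present afterwards should be a new event:

     ```bash
     # Show OOM-killer activity with timestamps, plus a few lines of
     # context identifying the process the kernel killed.
     grep -iE "out of memory|oom-killer" -A 5 /var/log/syslog
     ```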
  6. Started getting Out of Memory errors on my Unraid box. Per , I have not restarted yet, and am providing diagnostics here to figure out why it's suddenly getting Out of Memory errors. The system is an R510 that has been running Unraid for ~3 years now(?) without issue. media-1-diagnostics-20231124-2027.zip
  7. I have a VPN created through Unraid's native VPN Manager with Peer type of access set to "VPN tunnel for docker containers only", tunnel name `wg0`. The VPN tunnel works. I have multiple containers using `wg0` for network access. They all work, and correctly go out to the internet over the tunnel (verified with a Firefox container and an IP check). Note: this is _not_ a question about which port to open the VPN itself on.

     Using hotio's qbittorrent-vpn container as a test (thanks @Davo1624 for helping), I've established that when the container creates its _own_ VPN network (container connecting over bridge), the qBit port is open and can be seen from the outside at the VPN exit address. If I set the container to _not_ use its own VPN network, but instead use the `wg0` network created by Unraid's native VPN Manager, then this port is closed. This tells me that while the container itself is reachable, the port needs to be opened and forwarded on the VPN tunnel created by Unraid in order to pass through to the container.

     I have googled for several hours, but my google-fu is coming up empty on how to correctly set up a port forward on a VPN created with Unraid's native VPN Manager. I can't see anything within Unraid's native GUI to set up port forwarding, so what config files / settings do I need to look at?
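     My guess is that what's missing is a manual DNAT/forward rule on the tunnel interface, something along these lines (purely a sketch to illustrate the question; 6881 and 172.31.200.2 are made-up placeholders for the qBit port and the container's IP on the `wg0` Docker network, and this assumes the VPN provider is already forwarding the port down the tunnel):

     ```bash
     # Redirect traffic arriving on the tunnel for the qBit port to the container
     iptables -t nat -A PREROUTING -i wg0 -p tcp --dport 6881 -j DNAT --to-destination 172.31.200.2:6881
     # Allow the redirected traffic to be forwarded on to the container
     iptables -A FORWARD -i wg0 -p tcp --dport 6881 -d 172.31.200.2 -j ACCEPT
     ```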
  8. Thanks, that was the key. Ended up double-checking my schedules and found I had auto-update scheduled at the same hour as the Appdata backup. After fixing the schedule conflict, no further errors.
  9. Running Unraid 6.11.5 and CA Appdata Backup 2023.01.28, so this seems related to other posts I've seen here, but I don't have any exclusions for stopping containers. They're all set to stop, back up, then restart, but I've gotten errors the past two days about `Error while stopping container! Code: Container already started`, on a single different container each time, and then of course a verification difference on the affected container. Logs of the past two nights attached. I can stop the containers manually without issue, so I'm not sure why CA Appdata Backup is listing them as "already started". backup.log backup.log
  10. So I've had a working Unraid 6.11.5 box with a WireGuard tunnel up (VPN for Docker only) until today. Switched from Xfinity to Frontier, something that should have zero effect as far as Unraid is concerned. Powered down Unraid via the GUI first (needed to power down the rack to route some cables), hooked the new WAN up to the router, and powered the Unraid server back on. No other changes other than a new WAN DHCP IP acquired from the router (OPNsense).

      After waiting for Unraid to fully boot, my WireGuard tunnel `wg0` would say it was active, but no container set to use `wg0` would actually send or receive data. Tried importing a new tunnel to test whether it was just the `wg0` config for some reason, but containers on that one wouldn't work either. Unraid itself was exhibiting some odd symptoms as well: Fix Common Problems would report that Unraid couldn't contact github.com (the WAN was definitely up during this time).

      I attempted to just blow away the wg0 tunnels and start from scratch, so I now have just `wg0` with a fresh config loaded, and it _looks_ like it's active. The problem is that as soon as I enable Docker, network problems start: a basic ping to 1.1.1.1 stops working the moment Docker is enabled (screenshot shows it becoming unreachable as soon as I activated Docker and staying that way until I disabled Docker), and that's without any containers that use the `wg0` tunnel running.

      Where should I look to figure out why Unraid networking suddenly went sideways after just switching WAN IPs? (Everything else on my network works.) media-1-diagnostics-20230309-2320.zip
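      For anyone looking at the diagnostics, the comparison that probably matters is the routing table and tunnel state with Docker disabled versus enabled, in case Docker brings up a route or subnet that clashes with the new WAN lease (just a sketch of standard commands, nothing Unraid-specific):

      ```bash
      # Run once with Docker disabled, then again right after enabling it,
      # and compare the output:
      ip route            # look for a changed default route or an overlapping subnet
      ip -brief addr      # interfaces and the addresses assigned to them
      wg show             # WireGuard handshake and transfer counters
      docker network ls   # networks Docker re-creates when it starts
      ping -c 3 1.1.1.1   # the basic reachability test that was failing
      ```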
  11. Just had this occur to me as well. Same symptoms as others have reported: an endless "Updating all Containers" loop.
      * Just upgraded to 6.11.5 this morning, no issues with the upgrade
      * My Docker install is set to use directories, not a single docker img
      * Went to Docker, clicked [check for updates], 6 containers with updates
      * Clicked [update all], update window/log appears
      * Initially pulled data for the updates and successfully restarted the containers, then went into a loop: starting at the first container again, attempting to pull an update, finding no changes, but still restarting the container, then moving on to the next one.

      Diagnostics attached; they were created from another tab while the system was still looping on trying to update the containers. I also saw that there was a container that wasn't updated (unfortunately I don't know if it was part of the 6 that had updates earlier, as I didn't catch it fast enough). Clicking "apply update" next to that container specifically updated only that one container, then showed the [done] button. media-1-diagnostics-20221122-1213.zip
  12. Tried using the plugin, but it didn't quite work for me. Went to remove it, but it won't let me fully remove the plugin. The plugin is still listed under Plugins, and if I click the checkbox to remove it, I get the following error:

      How can I fully remove the plugin from the system?
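      In case manual cleanup ends up being the answer, my understanding (an assumption on my part, not something from the docs) is that the copy Unraid re-installs at boot lives on the flash drive, so removing the `.plg` there and rebooting should clear it out. `example.plg` below is a placeholder for the real file name:

      ```bash
      # Plugins that persist across reboots are kept on the flash drive
      ls /boot/config/plugins/
      # Remove the offending .plg (placeholder name) so it isn't re-installed
      # at next boot; the running copy lives in RAM and goes away on reboot
      rm /boot/config/plugins/example.plg
      ```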
  13. Thanks for the suggestion @DBJordan, but no dice. For the sake of argument I did add the same ports I had in VPN_INPUT_PORTS on the binhex-privoxyvpn container to VPN_OUTPUT_PORTS, but still negative. I did discover that my Prowlarr container can talk to my FlareSolverr container, so at least two containers can talk to each other, but the other containers still can't, even though they're all using the same binhex-privoxyvpn container network and all localhost references. Still at a loss to figure out why there's no communication. I tried leaving the binhex-privoxyvpn container on debug and tailing the log, but didn't see anything where it was blocking any traffic. Some containers are just timing out when trying to talk to each other over the binhex-privoxyvpn container network in 6.10.3, where it was working fine in 6.9.4.
  14. My environment is set up with binhex-privoxyvpn acting as the VPN tunnel, with several containers using that container as their network for outbound communication. The containers are also set up to talk to each other within the VPN network via localhost references (VPN FAQ #24-26). Everything was working correctly in 6.9.4 in terms of communication: *arr containers could communicate outbound via the VPN and to each other without issues, and I could access the web GUIs of all of them via the ports specified on the binhex-privoxyvpn container options.

      I just updated from 6.9.4 to 6.10.3, and after the update (no issues during the update itself), the containers I have using the binhex-privoxyvpn network are no longer able to communicate with each other. They can still communicate out through the VPN, and I can access the web GUIs for the containers, but any attempt by one container to use "localhost:<port>" to reach another container within the VPN network does not work. Within the *arr logs, they're all met with a "connection timeout" error.

      I double-checked all of the required ports, and they're all set up the way they were prior to 6.10.3. In regard to Q26 of the FAQ, I'm not using a proxy connection on the containers, so I don't _think_ it applies to me since they're connecting directly through the container network (correct me if I'm wrong). I tried changing the Docker network type from macvlan to ipvlan, but was still unable to get container-to-container communication working. Any suggestions to restore inter-container communication?
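      For anyone reading along, this is the model described above expressed as plain Docker commands rather than Unraid template fields (a sketch with example names and ports, and it omits the VPN configuration the binhex container actually needs):

      ```bash
      # The VPN container owns the shared network stack; the web UI ports of
      # every container that will join it are published here (8989/9696 are
      # example Sonarr/Prowlarr ports).
      docker run -d --name binhex-privoxyvpn -p 8989:8989 -p 9696:9696 binhex/arch-privoxyvpn

      # Joining another container to that stack (the "--net=container:..."
      # extra parameter in the Unraid template) is roughly equivalent to:
      docker run -d --name sonarr --network container:binhex-privoxyvpn lscr.io/linuxserver/sonarr

      # From inside one joined container, another on the same stack should
      # answer on localhost (assuming curl exists in the image):
      docker exec sonarr curl -sS http://localhost:9696
      ```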
  15. Having an issue with replacing episodes with new copies. If it's the first time an episode is grabbed, Sonarr imports it from qBit without issue; this error only occurs when it tries to replace an existing episode. The mappings work fine as far as I am aware, since new episodes import without issue. It doesn't happen with every episode either: some are replaced without issue, while others get the Destination Already Exists error. The specific error is NzbDrone.Common.Disk.DestinationAlreadyExistsException. Full trace log for an affected file: https://pastebin.com/PqptSrFc

      Permissions within Unraid for the affected files:
      * previous file: 666 nobody users
      * new file: 666 nobody users

      Permissions from within the Docker container for the affected files:
      * previous file: 666 abc users
      * new file: 666 abc users

      I also SSH'd into the Docker container itself and manually ran cp to copy the new file over the old one, and it worked without issue, so I don't think it's a permission issue either. I did try googling for an answer, but couldn't find anything that fit the problem I'm experiencing, so I'm starting here to see if it's an Unraid Docker issue before moving to Sonarr's direct help.