xorinzor

Members
  • Content Count: 39
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About xorinzor
  • Rank: Advanced Member

  1. Ah, I didn't realize they already counted towards the limit. Never mind then.
  2. If the drive were part of the array, either as an extension of it or as a cache drive, I would agree with you. However, I'm just using it as a separate drive to store my config files on, because I want to prevent my HDDs from spinning up unnecessarily. I feel the device count should only count towards the array functionality (and should stay that way), as that's what Unraid is mainly all about. If this plugin were integrated with Unraid and these drives started counting towards the device limit, a lot of people in the community would get very unhappy, and reasonably so.
  3. I figured I'd post my results here, since this is the first result that comes up on Google for this issue. If you have used "Community Applications" to install your docker containers (i.e. the Apps tab), it is safe to remove the docker image file. After you have deleted the docker image file, created a new one, and started your docker service back up, head back to the Apps tab and click the list icon in the top-left to go to your previously installed apps, as jonathanm explains. When these are reinstalled, all settings as you had previously configured them for your containers will be restored. The only things you may have to fix are custom-made docker networks and the autostart setting for your containers, as autostart will be turned on for all of them by default. Other than that, you lose absolutely nothing. (A rough sketch of the command-line steps follows below.) EDIT: Worth noting, it looks like the "Post Arguments" are not used when rebuilding the container. For these to be applied you will have to edit the container and just press "apply" again.
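    For reference, the command-line side of the procedure is roughly this (a sketch; the paths assume a stock Unraid setup where docker.img lives under /mnt/user/system/docker, so check Settings -> Docker for your actual location):

        # Stop the Docker service before touching the image file
        /etc/rc.d/rc.docker stop

        # Remove the old docker image file (default location; yours may differ)
        rm /mnt/user/system/docker/docker.img

        # Start Docker again; a fresh docker.img is created automatically
        /etc/rc.d/rc.docker start

    After that, reinstall everything from the Apps tab -> Previous Apps as described above.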
  4. I created a post in the support thread for UD, referencing this post and asking if the same SMART attribute monitoring toggles can be added. As long as it's a separate plugin, it makes sense that this isn't something Unraid itself could do anything about. Thanks for the help!
  5. Regarding the SMART warnings: with the normal disks mounted within Unraid itself, you can enable/disable the monitoring of certain SMART values. Disks mounted via Unassigned Devices don't have this option and seem to follow the default settings as set in Unraid under Settings -> Disk Settings. Would it be possible to implement the same SMART attribute monitoring menu as in Unraid, where these can be toggled on/off?
  6. Looks like that workaround will do for now! Of course a proper way of doing this would be even better, but at least I'm no longer getting spammed with warning notifications. Thanks!
  7. I'll try that out, thanks. As long as it works, even as a workaround, I'll be happy. It's kind of turning into a situation where the server cries "wolf" at the moment.
  8. Turns out this doesn't work in my specific case, because I mounted the SSD using the "Unassigned Devices" plugin. When I click on the device name, it doesn't show the SMART attributes like the other drives do.
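    In the meantime the attributes can still be read from the command line (a sketch; replace /dev/sdX with the SSD's actual device node):

        # Print all SMART attributes, including the sector counts that trigger the warnings
        smartctl -A /dev/sdX

        # Quick overall health verdict
        smartctl -H /dev/sdX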
  9. Amazing! That'll do! Thanks!
  10. I would really like the ability to exclude devices from the notifications that Unraid sends. For example, I have an SSD installed and, on every write operation, get emails about it having bad sectors. I know about them, but don't consider the drive in urgent need of replacement, since it only contains config files and is working fine so far. The only option I have to stop Unraid from sending me those emails right now, however, is to disable warnings entirely, which would also prevent Unraid from sending me legitimate warnings about my array disks. Having a dropdown list next to each notification entity where you could either whitelist or blacklist devices would be a massive help in managing these notifications more specifically, even though this might generally not be needed.
  11. So it turns out that creating this network caused Docker to add the following rules to the iptables on the host, preventing traffic:

        Chain DOCKER-ISOLATION-STAGE-1 (1 references)
        num  target  prot  opt  source          destination
        1    DROP    all   --   !10.10.0.0/16   anywhere
        2    DROP    all   --   anywhere        !10.10.0.0/16

    Removing these 2 rules caused everything to work as intended (see the commands below). I presume they got added because of the "--internal" parameter upon network creation. Interestingly, you'd think that with these removed the "--internal" parameter would be obsolete, but setting the gateway on the container to that of the host doesn't give the container internet access; setting it back to the VPN container does (via the VPN). EDIT: it looks like removing these rules did give containers in the default bridge network the ability to ping containers inside this "internal" vpn network though.. guess more research is needed. EDIT 2: Looks like all "--internal" does is add these iptables rules, if I am to believe a serverfault user.
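    For anyone hitting the same thing, this is roughly how I located and removed them (a sketch; the rule numbers may differ on your host, so list them first):

        # Show the isolation chain with rule numbers
        iptables -L DOCKER-ISOLATION-STAGE-1 -n --line-numbers

        # Delete by rule number, highest first so the numbering doesn't shift
        iptables -D DOCKER-ISOLATION-STAGE-1 2
        iptables -D DOCKER-ISOLATION-STAGE-1 1

    Worth keeping in mind that Docker may re-create these rules when the daemon restarts, so this is not a permanent fix.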
  12. I double-checked just to be sure; it's set to privileged. The gateway is set correctly (and is set automatically using the post-parameter field executing a script). I confirmed this using "ip route", which shows the default gateway is configured as 10.10.0.2.
  13. I created an internal network using "docker network create --internal --subnet=10.10.0.0/16 vpn", with the goal of using my VPN docker container as a gateway and having all internet traffic of the other containers in this network flow through it. Unfortunately, for whatever reason, the container just won't forward the traffic, and I'm not sure why. I've tried many different tutorials, stackoverflow posts, etc., but none of them work, even though it worked great when I used "--link" instead of the internal network bridge. (I should add that I configured the correct gateway IP on the containers, and they can ping each other just fine too.) Are there any limitations to using the "--internal" parameter? I've tried multiple iptables commands and different openvpn "redirect-gateway" options, but I'm really at a loss right now.

    The openvpn container (gateway) has the following setup:

        eth0 - internal network - 10.10.0.2
        eth1 - default bridge - 172.17.0.2
        tun0 - vpn connection - <dynamic ip>

    and the other client only has:

        eth0 - internal network - 10.10.0.3

    I tried using these iptables commands:

        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    as well as:

        echo 1 > /proc/sys/net/ipv4/ip_forward

    which I confirmed by reading the value back, to be sure it got set. Right now the containers can ping each other, and the gateway container has access to the internet (confirmed to go via the VPN connection by using ipify.org). The other container is unable to ping 1.1.1.1 or 8.8.8.8 and can only ping 10.10.0.2. Using "ip route get 1.1.1.1" shows the route is set correctly, as it tries to contact the gateway container.
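    For reference, a variant of these rules that masquerades the clients' traffic out of the tunnel instead of eth1 would look like this (a sketch; it assumes tun0 is the interface all VPN-bound traffic should leave through, and that the container runs privileged or with NET_ADMIN):

        # Run inside the gateway container
        # Allow forwarding between the internal network and the tunnel
        iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # NAT the clients' traffic so replies come back through the tunnel
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE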
  14. Docker container networking issues

    I've been quite busy lately and forgot about this topic. It's still very much an issue, and I would really like some advice.
  15. I have set up my squid-proxy container to connect through my VPN container by using the extra parameter --net="container:vpn", in combination with Network Type: None, to make sure no data leaks to the public connection outside of the VPN. This works absolutely great, but one issue I've run into is that if, for whatever reason, I need to restart my VPN container, the connection between these containers won't be re-established; I have to make sure to restart the squid-proxy container afterwards as well. And if both are stopped, I need to make sure the VPN container is started before the squid-proxy container for the connection to be established. Not being very experienced with docker, I'm wondering if there is a better way of doing this, or perhaps a way of doing the same thing where the connection is automatically re-established when it detects that the container has started again.
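    A minimal workaround sketch (untested, and "vpn" and "squid-proxy" are just my container names): watch the Docker event stream and restart the dependent container whenever the VPN container comes back up, so it re-attaches to the VPN container's network namespace:

        #!/bin/bash
        # Restart squid-proxy every time the vpn container (re)starts
        docker events --filter 'container=vpn' --filter 'event=start' \
          --format '{{.Actor.Attributes.name}}' |
        while read -r _; do
            docker restart squid-proxy
        done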