sebstrgg

Members
  • Posts: 20
Everything posted by sebstrgg

  1. Hi all, I'm not sure this is the right part of the forum to post in, so mods, please move this if you find a more suitable place. I just wanted to share my solution on the topic, which I hope can help someone else! My problem: I want to use a container with a VPN client as a docker network, so that chosen containers are always connected through the VPN connection. I tried every solution I could find here and elsewhere to make this work over time, but I always ended up having to manually restart these containers. I've tried the Rebuild-DNDC container on multiple occasions, but it's primarily made to be used in combination with gluetun, and I unfortunately couldn't get it to catch soft failures of containers properly. Solution: I put my (very) limited coding skills to use with the help of AI and created a bash script to be run as a cron job with the User Scripts plugin. I wanted to share my now confirmed working solution, as it's been running for two weeks straight without any hiccups. You can find the script here: https://github.com/sebstrgg/unraid-restart-vpn-containers I'm open to any suggestions, feedback, pull requests etc. to make the script more user friendly and resilient. There are probably obvious sub-optimizations, as I don't have much coding experience overall and AI wrote the majority of the code through a lot of prompt revisions on my part.
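In rough outline, the approach the script takes can be sketched like this (a minimal, hypothetical sketch: the container names, the probe target and the delay are placeholders I've made up here, not taken from the actual script on GitHub):

```shell
#!/bin/bash
# Sketch of the restart logic. Container names and the probe target are
# placeholders -- adjust to your own setup.

VPN_CONTAINER="vpn-client"                   # container providing the VPN network
ROUTED_CONTAINERS="firefox xteve hydra2"     # containers routed through it
RESTART_DELAY="${RESTART_DELAY:-30}"         # seconds to wait after the VPN restart

# Succeeds only if traffic can actually leave via the VPN container.
vpn_is_up() {
    docker exec "$VPN_CONTAINER" \
        curl --silent --max-time 10 --output /dev/null https://1.1.1.1
}

# Restart the VPN container first, then every container routed through it,
# since their network namespace dies with the VPN container.
restart_stack() {
    docker restart "$VPN_CONTAINER"
    sleep "$RESTART_DELAY"
    for c in $ROUTED_CONTAINERS; do
        docker restart "$c"
    done
}

# Entry point for the cron job: only act when the probe fails.
main() {
    vpn_is_up || restart_stack
}
```

Scheduled via the User Scripts plugin with a cron entry such as `*/5 * * * *`, calling `main`.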
  2. I actually discovered this late last night while going through the iptables of the different dockers. Great work anyhow, your dockers are really impressive! Since you're already manipulating the iptables ruleset, is there any chance you could add the ability to add custom iptables rows? The reason I see a need for this is that running Plex behind a VPN is troublesome unless you can set up NAT to and from a specific port (an open port at your VPN provider -> the Plex port). This works perfectly when I enter the rule manually; the problem is that I don't see a way to apply it automatically on container start/VPN connection, since your startup script clears the .ovpn config of any potentially conflicting lines. The iptables rule I'm using to make this work is: iptables -t nat -I PREROUTING -p tcp --dport [forwarded vpn port] -j REDIRECT --to-ports [plex port, i.e. 32400]
  3. Hey, since I'm routing other dockers' network via this docker, I'd like to add a (nat) rule to the iptables rule list, pointing from an open external port at my VPN provider to a port attached to the docker. However, I'm unable to do so, as the docker automatically clears any changes made to, for example, the .ovpn file (I tried adding an "up" command). Is there any way to add an iptables row that is automatically applied on startup/connection to the VPN?
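To illustrate, this is roughly what I'd want to happen automatically on startup (a hypothetical sketch: the port numbers are placeholders, and tun0 assumes a standard OpenVPN tunnel interface):

```shell
#!/bin/bash
# Hypothetical startup hook: wait for the VPN tunnel, then insert the NAT
# redirect once. Ports are placeholders; tun0 assumes a standard OpenVPN setup.

VPN_PORT=12345      # port forwarded by the VPN provider (placeholder)
PLEX_PORT=32400     # Plex's default port

tunnel_up() {
    ip link show tun0 >/dev/null 2>&1
}

# -C checks whether the rule already exists, so re-runs stay idempotent.
rule_present() {
    iptables -t nat -C PREROUTING -p tcp --dport "$VPN_PORT" \
        -j REDIRECT --to-ports "$PLEX_PORT" 2>/dev/null
}

apply_rule() {
    # Wait up to 60s for the tunnel before inserting the redirect.
    for _ in $(seq 1 60); do
        tunnel_up && break
        sleep 1
    done
    rule_present || iptables -t nat -I PREROUTING -p tcp \
        --dport "$VPN_PORT" -j REDIRECT --to-ports "$PLEX_PORT"
}
```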
  4. I get the exact same problem. I've been trying to route multiple docker containers (firefox, xteve, hydra2) via binhex-delugevpn and binhex-sabnzbdvpn. Although I've done everything exactly as specified, I still end up with the routed dockers not being accessible over my local network, even though I can verify that the dockers are live and functioning towards the internet via the VPN connection. I did notice a strange behaviour when checking open ports on the VPN container (netstat -plnt) to verify that the ports were open: sometimes they are, sometimes they aren't, and sometimes they only seem to map up over IPv6. I really don't know enough about docker- and Unraid-specific docker networking to troubleshoot it any further, so any help would be greatly appreciated. EDIT: I was running Unraid 6.8.3-stable with this behaviour and updated to 6.9.0-beta1 to verify that the problem still occurs, which it does.
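For context, by "routed via" I mean attaching the containers to the VPN container's network namespace, which with the plain docker CLI looks roughly like this (a sketch: the image name is just an example, and on Unraid this is normally configured through the container template rather than docker run):

```shell
# Attaching a container to the VPN container's network stack instead of a
# bridge: the routed container then shares binhex-delugevpn's interfaces,
# so its ports must be published on the VPN container, not on itself.
run_routed() {
    name="$1"; image="$2"
    docker run -d --name "$name" \
        --net=container:binhex-delugevpn \
        "$image"
}
# e.g. run_routed firefox jlesage/firefox
```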
  5. Yes you're right, but "move" is not the problem, I need to "copy" completed files.
  6. I'm essentially only trying to achieve what the plugin does - copy a completed file to another folder. Nothing special tbh
  7. Hey, I'm trying to get the Copy Completed plugin working, but as far as I've gathered, there's an issue with the Python version, and the plugin is largely unmaintained. (The plugin only exists for Python 2.6/2.7, while the container uses Python 3.7.) Does anyone have this plugin working? If so, how did you get it working? If not, do you use any other plugin to achieve the same end goal? Thanks /S
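One possible workaround (untested here, and the destination path is a placeholder) would be Deluge's built-in Execute plugin, which runs a command on the "Torrent Complete" event with the torrent id, name and download path as arguments:

```shell
#!/bin/bash
# Sketch of a copy-on-complete hook for Deluge's Execute plugin.
# Execute invokes this script as: script <torrent id> <name> <download path>
# The destination below is a placeholder -- adjust to your share.

DEST="${DEST:-/mnt/user/completed}"

copy_completed() {
    # $1 = torrent name, $2 = source download path
    mkdir -p "$DEST"
    cp -r "$2/$1" "$DEST/"
}

if [ -n "$3" ]; then
    copy_completed "$2" "$3"
fi
```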
  8. Not a problem at all! I'm happy to help! You're perfectly right (or rather the other user is), and the initial problems with the containers are officially fixed! I got it working right away! Thank you!
  9. Ok, so I just re-did the procedure and ended up with the same error as before, but now with an additional error. Full Run log: https://wl7r.me/KIBh8R (This was tested just now, after your second update if I understood the previous posts correctly)
  10. Hi, I run into this error message in the Unraid log:
Oct 19 23:39:41 unraid kernel: wmiir[25554]: segfault at 0 ip 000014e2189bf676 sp 00007ffc2fe8cfe8 error 4 in libc-2.24.so[14e21893f000+195000]
It seems to be related to a USB device I'm trying to pass through to a docker container (ConBee II to the marthoc/deconz container) when I try to boot the container. Could anyone help me decipher the error message? I assume it's related to my inability to get deconz working correctly. Thanks, Sebastian
  11. Hi again, My setup is identical to yours, except for the IS_CREATION_ENABLED and CONNECT_WITH_FRANZ variables set to true. The reason I tried to add PUID/PGID was only due to the error message I ended up with, and I've already fully removed the container and appdata folder and re-added the container. I could try it again if you believe it would make any difference. Run log (with modified PUID/PGID): https://wl7r.me/TIMnQA Run log (as per standard container setup): https://wl7r.me/iRCmDH Screenshot of container settings: https://wl7r.me/uLT2ou EDIT: So I did it again: I removed the container + image and deleted the appdata folder again. I just changed the DB string to correspond with my DB server. New run log: https://wl7r.me/nkFjoD I don't know if it's relevant, but I do run Unraid v6.8.0-rc1, which has a newer docker version. EDIT2: I retried the recreation of the container, as I noticed the search path for /config was "/mnt/user/appdata" instead of "/mnt/cache/appdata" like on /recipes and /database. No change in the error type though. EDIT3: Same error when changing the search paths for /config, /recipes and /database to begin with "/mnt/user/appdata". EDIT4: I use your container as provided in the Unraid CA repo (https://wl7r.me/EJaMD4)
  12. Hi, I've just tried to set up this docker container, which seems to fail based on the user permissions for NPM:
npm ERR! code EUIDLOOKUP
npm ERR! lifecycle could not get uid/gid
npm ERR! lifecycle [ 'nobody', 0 ]
npm ERR! lifecycle
I've tried setting the container variables for PUID/PGID to 99/100 without any luck. Any suggestions on how to proceed? Thanks, Sebastian
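In case it helps anyone debugging the same thing: the EUIDLOOKUP error comes from npm failing to resolve a user/group, so a first check is whether the names/ids involved resolve inside the image at all (a sketch; the container name is a placeholder):

```shell
# Check from inside the running container whether "nobody" and the
# uid/gid passed as PUID/PGID (99/100 here) resolve to real entries.
# "my-container" is a placeholder for your container name.
check_ids() {
    docker exec "$1" sh -c \
        'getent passwd nobody; getent passwd 99; getent group 100'
}
# e.g. check_ids my-container
```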
  13. Hi, Thanks for your reply! 1) OK, so what you're saying is that my letsencrypt docker, which has the following settings: custom network: proxynet (172.18.0.0/16) -> which automatically gets bridged to host_ip:180/1443, won't be able to communicate with anything on my LAN via the host IP, but only with other dockers connected to the same internal network on the docker host? Is there any way I can make it possible to use my letsencrypt docker for ALL the things I'd like to proxy, then? I'm not 100% sure what you mean by the macvlan, but I suppose you're referring to the internal LAN inside the docker host?
  14. Hi, I've just started using Docker with my transition to Unraid, as up until now I've been running the VMware suite, a "full-VM setup" and a self-configured nginx reverse proxy for my public services. As I haven't used Docker before, there's probably something obvious I'm missing, but here goes. Two main problems/questions: 1. In the case where I'd like to proxy a docker that's NOT on the 'proxynet' but instead connected to one of my actual LAN networks (i.e. VLAN50/subnet: 10.10.50.0/24), I'm not able to do so successfully. I've made sure I've made the corresponding changes to the nginx proxy conf (i.e. for the home-assistant docker) and its 'upstream_address' (<docker-name> -> <local IP>). I'm able to access it via <host_IP>:<docker_port>, and even though this matches the config file, it still doesn't pass the traffic through properly (bad gateway). Any ideas what I might be missing? (I've tried this with several docker containers, unsuccessfully.) This is also highly relevant as I want to be able to proxy 'anything' with the letsencrypt docker. 2. How do others have pfsense set up to port forward specific traffic, via a specific gateway, to a docker container with the 'proxynet' setup? (see my last bullet point below) Some background info: - The letsencrypt docker is confirmed working, and working very well - I configured it with the help of SpaceinvaderOne's YouTube guide and the 'proxynet' setup - I use sub-domains, not sub-folders - All dockers connected to my 'proxynet' are successfully being proxied - I have a fairly advanced local network segmented into different VLANs - I run a pfsense instance with an active OpenVPN client connected to a VPN service, where I've excluded specific VM traffic by changing the gateway that traffic runs over, whenever the need has arisen to use my public IP.
I might've missed some crucial info that would help you to help me; just let me know what you need if that's the case. There's a lot of new stuff I'm poking around with (Unraid and Docker), and although there've been some hiccups such as this, it's still a lovely experience. Great job with the letsencrypt container, it's truly awesome!
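For anyone hitting the same "bad gateway": one quick check is whether the proxy container can reach the upstream at all from inside its own network namespace (a sketch; the container name and upstream address are placeholders for your setup):

```shell
# Probe the upstream from inside the proxy container. A connection error
# here (rather than an HTTP status) means nginx can't reach the upstream,
# which is exactly what produces a 502 bad gateway.
# "letsencrypt" and the upstream address are placeholders.
probe_upstream() {
    docker exec letsencrypt curl -sS -o /dev/null \
        -w '%{http_code}\n' --max-time 5 "http://$1"
}
# e.g. probe_upstream 10.10.50.20:8123
```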
  15. Well, it seems I had a brainfart of some sort regarding transfer speeds and how I tried to do this previously. I was doing the SMB transfers via my desktop, which limited the throughput vastly, since my computer had to send/receive all data as the middle-man between my servers. I attached my current NFS shares directly to the Unraid machine with the help of the Unassigned Devices add-on and re-started my transfers with rsync, and suddenly everything works as expected, with two simultaneous streams averaging (in total) around 100MB/s, which is expected over a single 1GbE link. I was never worried about the functionality of Unraid and whether it would suit my needs, but it's really confusing coming from a full-blown VMware environment to a much more simplified workflow. Transfer jobs of this size aren't normally done on my network either. 🙂 Again, I appreciate all your help - sometimes you just have to explain the problem to someone else, and get a few questions back at you, to finally get your head thinking straight again.
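For completeness, the working setup boils down to something like this (the paths are placeholders; the NFS mount itself is handled by Unassigned Devices in my case):

```shell
# Copy from the old NFS share, mounted directly on the Unraid host
# (e.g. via Unassigned Devices), straight into the array.
# Both paths are placeholders -- adjust to your mounts/shares.
SRC="/mnt/disks/oldserver_media"   # NFS share mounted on the Unraid box
DST="/mnt/user/media"              # target user share

migrate() {
    # -a preserves attributes; -P resumes partial files and shows progress,
    # which matters for a multi-TB transfer that may be interrupted.
    rsync -aP "$SRC/" "$DST/"
}
```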
  16. [  5] local 10.10.5.195 port 5201 connected to 10.10.5.46 port 48672
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   107 MBytes   899 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   939 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   939 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   939 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   939 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   939 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   939 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   939 Mbits/sec
[  5]  10.00-11.00  sec   112 MBytes   939 Mbits/sec
[  5]  11.00-12.00  sec   112 MBytes   939 Mbits/sec
[  5]  12.00-13.00  sec   112 MBytes   939 Mbits/sec
[  5]  13.00-14.00  sec   112 MBytes   939 Mbits/sec
[  5]  14.00-15.00  sec   112 MBytes   939 Mbits/sec
[  5]  15.00-16.00  sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-16.00  sec  0.00 Bytes    0.00 bits/sec   sender
[  5]   0.00-16.00  sec  1.77 GBytes   949 Mbits/sec   receiver
It's not a LAN problem. No worries, I know I was a bit unclear - but I'm also new to Unraid, which makes things extra confusing. I've attached the diagnostics to this post. unraid-diagnostics-20190929-0955.zip
  17. Thanks for the feedback. I understand that a parity-protected array won't be able to reach those speeds, but as mentioned twice now (and edited into the main post) - I have NOT enabled a parity drive.
  18. I appreciate the feedback, but it seems you missed an important part of my post: "Same speed limitations occurred when I tried with the cache drive ON. While migrating I've not assigned a parity drive nor a cache, as I already have my data safe on my old setup. I just tried switching the setup of the array around to see if it could help with speeds, but without any luck." I also realized I forgot to mention that I have downloaded and enabled the TurboWrites CA as well. There is NO difference in transfer speed with this enabled, nor any difference in the perceived read/write speeds of the array. My aforementioned 195MB/s figures referred to the read/write speeds of the individual disks, not actual transfer speeds, as these need to be looked at separately. I have seen very similar numbers even in the array (since I currently don't have a parity or cache drive added). I would expect transfer speeds of ~100MB/s due to bandwidth constraints, as I'm using a 1GbE connection between my Unraid server and switch.
  19. Hi, I've just purchased a new server with the goal of minimizing my homelab footprint and changing up (down? 😉) to Unraid from ZFS. Before I can shut down the old machines I need to migrate all my data libraries. The thing is that I've run into a few problems while migrating my 13TB library to the new machine. -- 1. While transferring my data from my ZFS share to my Unraid share via SMB, I max out at approximately 45-55MB/s on large files. I've scoured the web and this forum for solutions, but can't get better speeds. The SMB protocol might be slow, but that's definitely nowhere close to the limits of a modern version of SMB. I've also tried changing the max protocol setting and transferring over a disk share instead of a user share, all to no avail. 2. I haven't been able to reliably mount my share via NFS. NFS has been activated and added on the share as Public, and after verifying it via exportfs -v, I have been able to mount it a few times, but every time it succeeded, it timed out shortly after. Most of the time it times out when trying to mount the share. I've tried this on Ubuntu 16.04/18.04 and Debian 9 machines that all currently have working NFS shares mounted. -- The disks in my new server are verified for ~195MB/s read/write speeds. There's nothing wrong with the physical layer, no firewalls between my VLANs limiting specific types of traffic, and no port configuration on the switch that could limit throughput. EDIT: I realized I forgot to mention that I have downloaded and enabled the TurboWrites CA as well. There is NO difference in transfer speed with this enabled, nor any difference in the perceived read/write speeds of the array. (Yes, I have verified that it's active.) Same speed limitations occurred when I tried with the cache drive ON. While migrating I've not assigned a parity drive nor a cache, as I already have my data safe on my old setup.
I just tried switching the setup of the array around to see if it could help with speeds, but without any luck.
Unraid version: latest (6.7.2)
Windows 10 version: latest (x64-1903)
Unraid server specs:
1x Xeon Gold 5218
128GB DDR4 2400MHz ECC REG
ASUS Z11PA-D8
Array for migrating (w/o old disks added):
5x WD 8TB White (WD80EZAZ) - 4x being used for storage, 1x for parity
1x Intel 660p 1TB m.2 NVMe - for cache
Network consisting of:
- Cisco 3750G
- Pfsense
I'll be happy to provide any additional info needed to troubleshoot the problem. Thanks /S
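To rule the disks in or out, one simple check is to write a test file directly on the array, bypassing SMB/NFS entirely (a sketch; the target path is a placeholder and it assumes GNU dd on the Unraid console):

```shell
# Measure raw array write throughput without any network protocol involved.
# $1 = target directory (placeholder, e.g. /mnt/disk1), $2 = size in MiB.
# conv=fdatasync forces the data to disk so the figure isn't just page cache.
disk_write_test() {
    dd if=/dev/zero of="$1/ddtest.bin" bs=1M count="${2:-1024}" \
        conv=fdatasync 2>&1 | tail -n 1
    rm -f "$1/ddtest.bin"
}
# e.g. disk_write_test /mnt/disk1 1024
```

If this reports ~195MB/s while SMB sits at 45-55MB/s, the bottleneck is in the network path rather than the disks.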