Posts posted by Mainfrezzer
-
What was the configuration file on the 1&1 server created with?
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey =

# packet forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1

# port forwarding
# Plex
PreUp = iptables -t nat -A PREROUTING -i ens192 -p tcp --dport 32400 -j DNAT --to-destination 10.8.0.2:32400
PostDown = iptables -t nat -D PREROUTING -i ens192 -p tcp --dport 32400 -j DNAT --to-destination 10.8.0.2:32400

# packet masquerading
PreUp = iptables -t nat -A POSTROUTING -o ens192 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o ens192 -j MASQUERADE

[Peer]
PublicKey =
AllowedIPs = 10.8.0.2/32
It should look something like that.
-
Which Plex template is being used? The ones from Binhex and Linuxserver have HOST set as the default.
That means the container then has to run on Custom: wg1 instead of Host. On top of that, a port mapping from 32400 to 32400 has to be added manually (otherwise, as mentioned above, you would have to use the Docker bridge IP).
That's all there is to it.
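For reference, a minimal sketch of the equivalent docker run command, assuming the linuxserver image and a standard appdata path (image and paths are assumptions; the template does the same through the GUI):

# run Plex on the custom wg1 network instead of Host, with a manual port mapping
docker run -d \
  --name plex \
  --network wg1 \
  -p 32400:32400 \
  -v /mnt/user/appdata/plex:/config \
  lscr.io/linuxserver/plex:latest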
The tunnel for Docker containers to the server
Container with port mapping and VPN tunnel network
Functional test, locally (10.0.0.2 is the LAN address of my Unraid system):
Works.
The container can reach the internet
Now for the settings on the VPN server on the internet
Final test:
Tada -
It all depends on the configuration of the 1&1 server. The WireGuard interface there must of course contain all the rules: what should happen with which packet, how, and where.
From the description, this sounds a lot like WG-Easy to me, which is fine for quickly spinning up an outbound server. But if you want to forward requests arriving at the 1&1 VPN server to the Unraid server (or the Plex container), you have to configure that manually.
For example:
PreUp = iptables -t nat -A PREROUTING -i ens192 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.2:32400
PostDown = iptables -t nat -D PREROUTING -i ens192 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.2:32400
Here ens192 is the interface of the 1&1 server and --to-destination is the VPN IP of the Unraid interface.
You also have to pay attention to how the port is forwarded through Unraid, because otherwise you would have to address the wg1 Docker bridge IP directly to reach the port natively, e.g. 172.31.202.2:32400.
You can also solve the whole thing with a reverse proxy, which would probably be the more sensible option in this case.
For example, run an NGINX container on the 1&1 server, forward requests for "Was-auch-immer.com" to, I assume, "10.8.0.2:32400", and that takes care of it. Then you don't have to tinker around with iptables much.
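A minimal sketch of what that NGINX server block could look like, assuming the example domain from above and the 10.8.0.2:32400 target (both placeholders):

server {
    listen 80;
    server_name was-auch-immer.com;

    location / {
        # forward everything through the WireGuard tunnel to the Plex container
        proxy_pass http://10.8.0.2:32400;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
-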
Seems like it. You can of course install the Docker container manually or try the Unraid template from the repo if you want:
https://github.com/tquizzle/Docker-xml/blob/master/docker-pufferpanel.xml -
You can work around it / "fix" it with the instructions at the bottom of the page at https://hub.docker.com/_/postgres.
-
@Kieran E I noticed that the Unraid template isn't updated with the recently added settings for the concurrent workers and IPv6 support.
-
1 hour ago, adriaurora said:
I've tried:
docker network connect <bridge> <container-name>
without success. It creates another network interface (can see it via ifconfig) but creates it in the first place and traefik seems to stop working correctly.
That's normal. Apart from that, since you have been sparse about what "stops working correctly" means, I assume its connection broke, and I'm going to assume that's due to a changed default route. For some reason, Docker has a wonky, alphabetical-like ordering system: if you attach the network "bridge", it can take priority as the default route, while something like the wgX interfaces might attach without that priority. You would have to fix that issue manually inside the container each time it starts up.
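A minimal sketch of that manual fix, assuming the container ships the ip tool and that 10.0.3.1 stands in for the wgX gateway (both placeholders):

# inspect the routing table inside the container
docker exec CONTAINERNAME ip route
# point the default route back at the desired gateway
docker exec CONTAINERNAME ip route replace default via 10.0.3.1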
For anyone else interested in that: via Dockerman you can add
"&& docker network connect NETWORKNAME CONTAINERNAME" to the Post Arguments -
@ich777 I tested with a couple of delays; 20 seconds isn't enough, 30 seconds is. I think it's way easier to do a file check in the startup script and, if the file doesn't exist, touch it into existence 😅
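A minimal sketch of such a check for a startup script, using the ConanSandbox.log path that turns out to be the culprit in the posts further down:

# touch the log file into existence if the server has not created it yet
LOGFILE=/serverdata/serverfiles/ConanSandbox/Saved/Logs/ConanSandbox.log
if [ ! -f "$LOGFILE" ]; then
    mkdir -p "$(dirname "$LOGFILE")"
    touch "$LOGFILE"
fi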
-
5 minutes ago, ich777 said:
Maybe it's because it can't start the server quickly enough.
I've upped the time to 10 seconds, that should hopefully fix the issue.
The file is still not being created; I checked on my usual J4125 system.
-
12 minutes ago, ich777 said:
This is strange but why should the file be missing?
Maybe this old CPU is not quick enough to start the application in 5 seconds...
I've no clue. When you install the container completely fresh, without the appdata directory already existing, the file doesn't get created and it just loops and loops and loops.
-
1 hour ago, hi2hello said:
I would like to uninstall / get rid of DuckDNS completely but fail to. I deleted the container, but since I no longer have access to my account (although I have the email and token, I get a login error stating "wrong email or token", while I am able to use the exact same credentials to connect to DuckDNS in ways other than the personal sign-in), the old address is still pointing to my server.
Any idea, or contact address or any helpful hint on how I could get rid of the duckdns address pointing to my server?
Thank you so much!
Well, there are a couple of ways:
[email protected]
[email protected]
or https://groups.google.com/g/duckdns
Edit: since you can actually still use the API token, you can also push out a forced IP update like this:
curl -sS --max-time 60 "https://www.duckdns.org/update?domains=YOUR-DOMAIN-NAME&token=YOUR-TOKEN&clear=true"
I checked their docs again. If you need to clear IPv6 as well, you can also run it or open it in a browser like this:
https://www.duckdns.org/update?domains={YOURVALUE}&token={YOURVALUE}[&ip=0.0.0.0][&ipv6=::][&verbose=true][&clear=true]
Edit: Just tested it, the curl command I sent first does clear IPv6 as well. The docs were a bit vague in their wording.
-
I think the issue is actually a missing file.
It crashed because it couldn't find the ConanSandbox.log file in /serverdata/serverfiles/ConanSandbox/Saved/Logs/
@Ashilder create that "ConanSandbox.log" file in your appdata share,
I assume in the default: /mnt/cache/appdata/conanexiles/ConanSandbox/Saved/Logs
Edit: yeah, without that file the container keeps restarting on my systems. With that file in place it starts on an i3-4130 without an issue.
-
You're probably using ipvlan as the Docker network; that doesn't work with a Fritzbox. You need to use either the macvlan workaround OR, if you're on 6.12.11, you can just switch to macvlan in Docker, as the bug should be fixed.
-
Totally forgot to check 6.12.11 😅
But as stated, it does indeed run with the 6.12.10 one. Nothing has changed.
-
Mhmm. Alternatively, I could see a solution via virtual hard disks. You could create individual virtual hard disks, or just one giant one, that live on that share. You could mount it on the machine you usually work on, encrypt it, and then use that virtual encrypted disk to store the disk images on. All that would be left on that share is one (or several) virtual encrypted hard disks.
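A minimal sketch of building such an encrypted virtual disk with cryptsetup, where the path, size, and mapper name are placeholder assumptions:

# create a 100 GB sparse image file on the share
truncate -s 100G /mnt/user/share/images.img
# encrypt it with LUKS and open it as a mapped device (prompts for a passphrase)
cryptsetup luksFormat /mnt/user/share/images.img
cryptsetup open /mnt/user/share/images.img encimg
# put a filesystem on it and mount it
mkfs.ext4 /dev/mapper/encimg
mkdir -p /mnt/encimg
mount /dev/mapper/encimg /mnt/encimg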
Otherwise, checking the bash history file for events would be an option: trigger the removal after a certain amount of inactivity. Logging events in syslog would work as well, I reckon.
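A minimal sketch of a simpler, purely time-based variant of that cleanup, with the share path and the 7-day window as placeholder assumptions:

# remove disk images that have not been modified in 7 days (path and window are placeholders)
find /mnt/user/share/images -type f -name '*.img' -mtime +7 -delete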
I just remembered something.
It's absolutely not meant for this, but you could use the Docker container psitransfer. It's basically meant as a file-sharing container, but it has a feature for automatic file deletion after a certain time. Using it locally to upload the images to that share would work as a way to dispose of the files after a given time.
-
I'm surprised the alarm is set to something as low as 60 degrees.
Well, usually you do maintenance at night because the typical human being tends to sleep at that time and thus doesn't interrupt any playback that happens during the day. Just disable the schedule in Plex and you're good, lol -
49 minutes ago, bim85 said:
What's the best way to find out which process etc. is responsible for that?
Wireshark
-
Yes, there is a way, but it's not persistent across reboots, and you would have to integrate it into a script within the go file.
Basically, what you have to change is OpenTerminal.php in /usr/local/emhttp/plugins/dynamix/include/: change the executed command from bash to whatever you want the button to do.
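A minimal sketch of automating that from the go file, assuming you want zsh instead and that the string bash appears literally in the exec call (both assumptions):

# in /boot/config/go, re-applied at every boot since the change is not persistent
sed -i 's/bash/zsh/' /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php
-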
39 minutes ago, ryans100 said:
Thanks. I had the Anonymize diagnostics checked, but I guess that didn't matter.
You have to check the system/ps file for running programs with variables in the parameters. That issue should be fixed in the future, as far as I know.
-
-
80 for HTTP and 443 for HTTPS.
-
@ryans100 delete that attachment from your diagnostics and be absolutely careful with the next upload. There are login credentials visible.
-
If you're using Tailscale, depending on the setup you want to have a look at https://tailscale.com/kb/1081/magicdns
Since I don't use Tailscale at all, I don't know how it's configured and whether it even fetches remote DNS servers.
Also, you're aware that you need to include the port in the domain name? Because NPM is running on a non-standard port, you always have to call "example.home:1880", or "example.home:18443" for HTTPS,
OR, if you, as you said, decided to run NPM on br0 in your LAN, "example.home:8080", or "example.home:4443" for HTTPS.
There's no port forwarding magic without port forwarding.
Edit: Added some pictures, works like a charm.
NPM Setting:
DNS-Rewrite:
Result:
-
On 7/14/2024 at 9:47 AM, ich777 said:
Can you be a bit more specific about what's not working?
Does the container not get an IPv6, or can you not reach it?
The containers just start with IPv4 and the fe80:: address. They do not get a GUA or ULA address.
Edit: I upgraded straight back to 7 and this is how it looks:
Edit: Found a fix for the issue. Change your LXC config from
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0
to
lxc.net.0.type = macvlan
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0
and it's working again under Unraid 7.
Edit-Edit: the drawback, of course, is that veth enabled communication between the host and the container; that doesn't work with macvlan and requires host access to be enabled.
Edit-Edit-Edit:
ip6tables -P FORWARD ACCEPT
will resolve the issue with veth. The default policy for FORWARD changed from ACCEPT to DROP.
-
-
10 minutes ago, ich777 said:
What network type are you using? Bridge and IPVLAN or no bridge and MACVLAN?
I noticed it while running on a macvlan bridge. Then I swapped back to an ipvlan bridge, dummy-saved the change in LXC, and still nothing. Only a downgrade to 6.12.10 brought the IPv6 functionality back. (I didn't bother checking beta1.)
VPN Docker Plex, freigabe ins Internet
in Deutsch
Posted · Edited by Mainfrezzer
Clarified
Lower the MTU of the WireGuard interface on the 1&1 server from 1500 to 1420, and on Unraid you could set 1384.
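A minimal sketch of where those values go, with the surrounding config keys omitted and the file names left open:

# WireGuard config on the 1&1 server
[Interface]
MTU = 1420

# WireGuard tunnel config on Unraid
[Interface]
MTU = 1384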