Posts posted by Mainfrezzer

  1. What was used to create the configuration file on the 1&1 server?

     

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey =
    
    # packet forwarding
    PreUp = sysctl -w net.ipv4.ip_forward=1
    
    # port forwarding
    
    #Plex
    PreUp = iptables -t nat -A PREROUTING -i ens192 -p tcp --dport 32400 -j DNAT --to-destination 10.8.0.2:32400
    PostDown = iptables -t nat -D PREROUTING -i ens192 -p tcp --dport 32400 -j DNAT --to-destination 10.8.0.2:32400
    
    # packet masquerading
    PreUp = iptables -t nat -A POSTROUTING -o ens192 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -o ens192 -j MASQUERADE
    
    [Peer]
    PublicKey = 
    AllowedIPs = 10.8.0.2/32


    It should look something like this.
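
    If the keys still need to be generated: the standard WireGuard tools on the 1&1 server can produce the pair (the file paths here are only an example):

    # generate a private key and derive the matching public key from it
    wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey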
     

  2. Which Plex template is being used? The ones from Binhex and Linuxserver both have HOST as the default.

    That means the container has to run on Custom: wg1 instead of Host. On top of that, a port mapping from 32400 to 32400 has to be created manually (otherwise, as mentioned above, you'd have to use the Docker bridge IP); a command-line sketch follows below.

    That's all there is to it.
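
    As a rough command-line sketch of that container setup (the image name is an assumption, adjust it to whatever template you use; on Unraid you would normally set this through the template GUI instead):

    # run Plex on the WireGuard tunnel network instead of Host,
    # with the manual 32400:32400 port mapping mentioned above
    docker run -d --name plex \
      --network wg1 \
      -p 32400:32400 \
      lscr.io/linuxserver/plex:latest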

    [Screenshot: the tunnel for Docker containers to the server]

    [Screenshot: container with port mapping and the VPN tunnel network]

    [Screenshot: local functional test; 10.0.0.2 is the LAN address of my Unraid system]

    Works.

    [Screenshot: the container can reach the internet]

    [Screenshot: the settings on the VPN server on the internet]


    [Screenshot: final test]

    Tada

  3. It all comes down to the configuration of the 1&1 server. The WireGuard interface there obviously has to contain all the rules: what should happen to which packet, how, and where.
    From the description this sounds a lot like WG-Easy to me, which is fine for "quickly" setting up an outbound server. But if you want to forward requests hitting the 1&1 VPN server on to the Unraid server (or rather the Plex container), you have to configure that manually.


    For example:

    PreUp = iptables -t nat -A PREROUTING -i ens192 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.2:32400
    PostDown = iptables -t nat -D PREROUTING -i ens192 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.2:32400



    Here, ens192 is the interface of the 1&1 server and --to-destination is the VPN IP of the Unraid WireGuard interface.
    You also have to pay attention to how the port is forwarded on the Unraid side, because otherwise you'd have to address the WG-1 Docker bridge IP directly to reach the port natively, e.g. 172.31.202.2:32400.


    You can also solve the whole thing with a reverse proxy instead, which would probably be the more sensible approach in this case.

    For example, run an NGINX container on the 1&1 server and have it forward requests for "whatever-domain.com" to, I assume, 10.8.0.2:32400, and that's it. Then you don't have to fiddle around with iptables much.
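
    A minimal sketch of such an NGINX server block (domain and upstream address assumed as above):

    server {
        listen 80;
        server_name whatever-domain.com;   # assumed domain

        location / {
            # forward everything through the tunnel to the Plex container
            proxy_pass http://10.8.0.2:32400;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }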

  4. 1 hour ago, adriaurora said:


    I've tried:

    docker network connect <bridge> <container-name>

    without success. It creates another network interface (I can see it via ifconfig), but it puts it in first place, and Traefik seems to stop working correctly.

     

    That's normal. Apart from that, since you've been sparse about what "stops working correctly" means, I'm going to assume its connection broke, due to the changed default route. For some reason, Docker has a wonky, alphabetical-like ordering system: if you attach the network "bridge", it can take priority as the default route, while something like the wgX interfaces might attach without that priority. You would have to fix that manually inside the container on each start-up.

    For anyone else interested in this: via dockerman you can add
    "&& docker network connect NETWORKNAME CONTAINERNAME" in the post arguments.

  5. 12 minutes ago, ich777 said:

    This is strange but why should the file be missing?

    Maybe this old CPU is not quick enough to start the application in 5 seconds...

    I've no clue. When you install the container completely fresh, without the appdata directory existing beforehand, the file doesn't get created and it just loops and loops and loops.

  6. 1 hour ago, hi2hello said:

    I would like to uninstall / get rid of DuckDNS completely but am failing. I deleted the container, but since I no longer have access to my account (although I have the email and token, I get a login error stating "wrong email or token", while I am able to use the exact same credentials to connect to DuckDNS in ways other than the personal sign-in), the old address is still pointing to my server.

     

    Any idea, or contact address or any helpful hint on how I could get rid of the duckdns address pointing to my server?

     

    Thank you so much!

    Well, there are a couple of ways:

    [email protected]

    [email protected]

    or https://groups.google.com/g/duckdns


    Edit: since you can actually still use the API token, you can also push a forced IP update like this:

     

    curl -sS --max-time 60 "https://www.duckdns.org/update?domains=YOUR-DOMAIN-NAME&token=YOUR-TOKEN&clear=true"



    I checked their docs again. If you need to clear IPv6 as well, you can also run or open it like this:

     

    https://www.duckdns.org/update?domains={YOURVALUE}&token={YOURVALUE}[&ip=0.0.0.0][&ipv6=::][&verbose=true][&clear=true]


    Edit: Just tested it, the curl command I sent first clears IPv6 as well. The docs were a bit ambiguous in their wording.

  7. I think the issue is actually a missing file.

    It crashed because it couldn't find the ConanSandbox.log file in /serverdata/serverfiles/ConanSandbox/Saved/Logs/.


    @Ashilder, create that "ConanSandbox.log" file in your appdata share,

    I assume in the default location:

    "/mnt/cache/appdata/conanexiles/ConanSandbox/Saved/Logs"


    Edit: yeah, without that file the container keeps restarting on my systems. With that file in place, it starts on an i3-4130 without an issue.
     

  8. Hmm. Alternatively, I could see a solution via virtual hard disks. You could create individual virtual disks, or just one giant one that lives on that share. You could mount it on the machine you usually work on, encrypt it, and then use that encrypted virtual disk to store the disk images. All that would be left on that share would be one or more encrypted virtual disks.

     

     

    Otherwise, checking the bash history file for events would be an option: trigger the removal after a certain amount of inactivity. Logging events in syslog would work as well, I reckon. See the sketch below.
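
    As a sketch of the time-based variant, e.g. as a User Scripts cron job (share path, file pattern, and retention period are all assumptions):

    #!/bin/bash
    # delete disk images on the share that have not been modified for 7 days
    find /mnt/user/images -type f -name '*.img' -mtime +7 -delete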


    I just remembered something.

     

    It's absolutely not meant for this, but you could use the Docker container PsiTransfer. It's basically meant as a file-sharing container, but it has a feature for automatically deleting files after a certain time. Using it locally to upload the images to that share would be a way to dispose of the files after a given time.

  9. I'm surprised the alarm is set on something as low as 60 degrees.

    Well, usually you do maintenance at night because the typical human being tends to sleep at that time and thus doesn't interrupt any playback happening during the day. Just disable the schedule in Plex and you're good, lol.

  10. Yes, there is a way, but it's not persistent across reboots and you would have to integrate it into a script in the go file; see the sketch below.

    Basically, what you have to change is OpenTerminal.php in /usr/local/emhttp/plugins/dynamix/include/, swapping the execution code bash for whatever you want the button to do.
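
    A sketch of how the go file could re-apply that on every boot (the exact string to replace depends on your Unraid version, so verify it in the file first; the script path is an assumption):

    # /boot/config/go - patch the terminal button after the webGui is installed
    sed -i 's|bash|/usr/local/bin/mybutton.sh|' \
      /usr/local/emhttp/plugins/dynamix/include/OpenTerminal.php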

  11. If you're using Tailscale, then depending on the setup you'll want to have a look at https://tailscale.com/kb/1081/magicdns
    Since I don't use Tailscale at all, I don't know how it's configured or whether it even fetches remote DNS servers.

    Also, are you aware that you need to include the port in the domain name? Because NPM is running on a non-standard port, you always have to call "example.home:1880", or "example.home:18443" for HTTPS;
    OR, if you, as you said, decide to run NPM on br0 in your LAN: "example.home:8080", or "example.home:4443" for HTTPS.

    There's no port forwarding magic without port forwarding.
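
    A quick way to verify that from a LAN client (hostname and ports as assumed above):

    # NPM on the custom network: HTTP and HTTPS on the mapped ports
    curl -I http://example.home:1880
    curl -kI https://example.home:18443   # -k because of self-signed certificates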


    Edit: Added some pictures, works like a charm.


    [Screenshot: container settings]
    [Screenshot: NPM setting]
    [Screenshot: DNS rewrite]
    [Screenshot: result]

  12. On 7/14/2024 at 9:47 AM, ich777 said:


    Can you be a bit more specific about what's not working?

    Does the container not get an IPv6 address, or can you not reach it?

     

    The containers just start with IPv4 and the fe80:: address; they do not get a GUA or ULA address.


    Edit: I upgraded straight back to 7 and this is how it looks:
    [Screenshot: container addresses after the upgrade]
    [Screenshot: LXC container addresses after the upgrade]


    Edit: Found a fix for the issue. Change your LXC config from

     

    lxc.net.0.type = veth
    lxc.net.0.flags = up
    lxc.net.0.link = br0
    lxc.net.0.name = eth0


    to

     

    lxc.net.0.type = macvlan
    lxc.net.0.flags = up
    lxc.net.0.link = br0
    lxc.net.0.name = eth0

    and it's working again under Unraid 7.

    Edit-Edit: the drawback, of course, is that veth allowed communication between the host and the container; that doesn't work with macvlan and requires host access to be enabled :/


    Edit-Edit-Edit:

    ip6tables -P FORWARD ACCEPT

    will resolve the issue with veth. The default policy for FORWARD changed from ACCEPT to DROP.
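
    Since that policy resets on reboot, a sketch for making it stick, e.g. at the end of the go file:

    # /boot/config/go - restore the old IPv6 FORWARD policy at boot
    ip6tables -P FORWARD ACCEPT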

  13. 10 minutes ago, ich777 said:


    What network type are you using? Bridge and IPVLAN or no bridge and MACVLAN?

     

    I noticed it while running on a macvlan bridge. Then I swapped back to an ipvlan bridge, dummy-saved the change in LXC, and still nothing. Only a downgrade to 6.12.10 brought the IPv6 functionality back. (I didn't bother checking beta1.)
