Posts posted by dhstsw

  1. 19 hours ago, biggiesize said:

    Odd, it was working before the upload to CA. Thanks for pointing it out. I'll check and see what is wrong.

     

    **UPDATE** So somehow before uploading I accidentally flipped one of the ports from udp to tcp. I have fixed it in the XML and tested it. The new working version should be uploaded soon.

     

    Thanks.


    If it's SHADOWSOCKS_PORT_UDP, I corrected it, but still no joy.

    C.

  2. Hi,

     

    no matter what I do, I get the error shown in the attached screenshot:

    [screenshot of the error attached]

    With everything configured correctly.

    What's weird is that if I run the command via the shell:
     

    docker run -it --rm -e VPNSP=purevpn -e OPENVPN_USER=user -e OPENVPN_PASSWORD=password -e VPN_SERVER_HOSTNAME=servehostname --privileged qmcgaw/gluetun


    It works (though it assigns a random server instead of the one selected).

    Also, it doesn't download settings.json into the /gluetun path (I left the one from the template), even though the log states it does.
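    For reference, here is the same test written as a compose file, in case that's easier to compare against the CA template (a minimal sketch with the same image and variables as the shell command above; privileged mode is kept only to match it):

    version: "3"
    services:
      gluetun:
        image: qmcgaw/gluetun
        privileged: true
        environment:
          - VPNSP=purevpn
          - OPENVPN_USER=user
          - OPENVPN_PASSWORD=password
          - VPN_SERVER_HOSTNAME=servehostname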

    Thanks.

     

  3. 45 minutes ago, Vr2Io said:

    Nope.

     

    On the AMD platform it depends on whether the PCIe lanes come from the CPU or from the chipset; if they come from the CPU they should natively be in a standalone group. So you need one of those motherboards with the feature to split the x16 slot into two x8.

    As @squid said, you need to try the software ACS setting whether or not ACS is enabled in the BIOS.


    Thanks for the answer, but now I'm more confused than ever.
    I found this Reddit post from a guy with the same motherboard I have (and a CPU without an embedded GPU, so I guess it's one of the 24-PCIe-lane ones) which shows
    20 IOMMU groups:


    Anyway, I'll try ACS in Unraid and report back.

    Thanks.

    EDIT: even with ACS enabled on Unraid, no joy.

  4. 9 hours ago, Squid said:

    You can also try one of the ACS override options to try and break apart the IOMMU groups

    As an aside, you *may* have problems with using the Marvel controller on your drives.

     

    Thanks for the suggestion, but ACS is already active (via the BIOS); the effect is the same as activating it in the VM options in Unraid.

    C.

  5. Hi,
    I'll try to explain as simply as I can.

    My server runs on a Gigabyte B450 Aorus M board with a Ryzen 2400G CPU (12 PCIe lanes).
    In the BIOS I disabled all the things I don't need (serial, the embedded GPU and so on), and ACS is enabled (in the BIOS).

    The PCI Express slots are populated by one graphics card and a SATA3 card (4 SATA ports), so I still have one PCIe slot free.

    The problem is that whatever I insert into the free PCIe slot (I tried some 'spare' cards I have, like a FireWire one and another SATA3 controller) always ends up in one of the 'common' IOMMU groups (likely the one handled by the chipset), so I can't pass through whatever I plug in there to any VM (actually, the only thing fully 'isolated' that I can safely pass through is the GPU).

    The total number of IOMMU groups is 10 (from 0 to 9).

    The question is: if I upgrade to a CPU with more PCIe lanes (e.g. the 3700X has 24 PCIe lanes), is there any chance I'll get more IOMMU groups (and maybe get the card I plug in into an isolated one)?

    More in depth:
    I'm running a VM with Windows 10 and an audio plugin server (AudioGridder), which basically streams back to the client (my main computer) the audio generated by the hosted plugins. It works, but packet losses are too frequent for it to be acceptable/usable. Isolating the cores used by the VM makes no difference, nor does stopping everything else (dockers, VMs).
    So I guess the problem lies in some overhead introduced by the Virtio (or virtio-net) bridging (I tried both).
    I even tried updating the virtio drivers, to no avail.
    If I boot the server directly into Windows (the installation is on a dedicated SSD), bypassing Unraid, it works perfectly.
    So my idea was to buy a PCIe gigabit card, fit it in the free PCIe slot and pass it through to the VM, bypassing the (supposed) virtio overhead.
    But, the situation being what it is, I can't.

    For reference, below is my System Devices situation (with the 'target' PCIe slot not populated; if I put anything there it ends up in IOMMU group 8).

    Thanks to anyone who can give me an idea.
    [System Devices screenshot attached]
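    For anyone who wants the raw grouping rather than the screenshot, this is a quick way to dump it from the shell (plain sysfs plus lspci, nothing Unraid-specific):

    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done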


     

  6. Hi,

    What I see with:
    docker images --format "{{.ID}}\t{{.Size}}\t{{.Repository}}" | sort -k 2 -h

    is that many images seem to be "duplicated".

    E.g.:

    9424a2614fcc    108MB   haugene/transmission-openvpn
    c6cd37583653    114MB   haugene/transmission-openvpn
    a682fba409df    120MB   haugene/transmission-openvpn
    f5d02a66d972    121MB   haugene/transmission-openvpn

    b4746e5938dc    2.26GB  onlyoffice/documentserver
    c58d07454e56    2.35GB  onlyoffice/documentserver
    289798f72e62    2.45GB  onlyoffice/documentserver


    And so on.
    Any chance to fix that without Portainer?
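    What I'd try from the shell, unless someone has a better idea, is Docker's own prune commands; as far as I know they only touch images that no container references:

    # remove dangling (untagged) images only
    docker image prune -f

    # or remove every image not used by an existing container
    docker image prune -a -f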

    Thanks.

     

     

  7. Hi all,

    I'm having a similar issue on 6.9.0-rc2.

    Briefly, Fix Common Problems keeps reporting that I'm running out of space on docker.img:

     

    Quote

    Docker image file is getting full (currently 88 % used)

     

    If I run the "CONTAINER SIZE" script from the Docker page I get this:

     

    Quote

    Total size                          15.1 GB       772 MB       304 MB


    docker.img was 20GB, so I increased its size to 30GB.

    But still, Fix Common Problems keeps reporting the same thing (88% usage).

    I already deleted dangling images, without any results.
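    For reference, two stock commands that should show what Docker itself accounts for versus what the loop-mounted image reports (as far as I know, docker.img is loop-mounted at /var/lib/docker on Unraid):

    docker system df
    df -h /var/lib/docker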

    Any idea?

    thanks.

  8. 17 hours ago, testdasi said:

    If there's a use case for a VPN VM over VPN docker, it's yours.

    Docker networking is extremely complex and I haven't seen anything that works in terms of making a VPN docker become a gateway.

     

    Also I fixed the bug so you don't need to add extra space to the ovpn now.

    Thanks for everything :)

    C.

  9. 4 minutes ago, testdasi said:

    Is your VPN running on port 80? That is a bit unusual.

    In your OVPN config file, add 3 spaces before 80. (so it's blabla.com 80 -> blabla.com   80). See if it works.

    Well, that was weird.
    With the 3 spaces it does work. Thanks!
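    For anyone else hitting this, the relevant line in my .ovpn now looks roughly like the sketch below (assuming the standard 'remote' directive; the hostname is just the placeholder from the quote above):

    remote blabla.com   80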

    One question: would it be possible to use the container as a gateway?
    Currently I'm using an Ubuntu Server VM configured as in the video below (with mods).

    That's because when using OpenVPN as a proxy, the service I try to connect to detects the VPN (I guess there are leaks). With a gateway everything works as intended.

    Thanks.

     

  10. OpenVPN AIO Client is not working here; it looks like it can't find the openvpn.ovpn file (which is where it's supposed to be).
    Lots of errors in the log file:

    [info] Set up nftables rules
    [info] Flusing ruleset
    RTNETLINK answers: File exists
    [info] Added route 10.0.1.0/24 via 10.0.1.2 dev eth0
    [info] Editing ruleset
    [info] Apply rules
    /nftables.rules:11:36-39: Error: Could not resolve service: Servname not found in nft services list
    
    add rule ip filter INPUT tcp sport om80 counter accept
    ^^^^
    /nftables.rules:24:37-40: Error: Could not resolve service: Servname not found in nft services list
    [info] Set up nftables rules
    [info] Flusing ruleset
    RTNETLINK answers: File exists
    [info] Added route 10.0.1.0/24 via 10.0.1.2 dev eth0
    [info] Editing ruleset
    [info] Apply rules
    /nftables.rules:11:36-39: Error: Could not resolve service: Servname not found in nft services list
    
    add rule ip filter INPUT tcp sport om80 counter accept
    ^^^^
    /nftables.rules:24:37-40: Error: Could not resolve service: Servname not found in nft services list
    
    add rule ip filter OUTPUT tcp dport om80 counter accept
    ^^^^
    [info] All rules created
    
    [info] Quick block test. Expected result is time out. Actual result is [removed]
    
    [info] Setting up OpenVPN tunnel
    [info] Create tunnel device
    [info] Allow DnS-over-TLS for openvpn to lookup VPN server
    Error: Could not process rule: No such file or directory
    
    add rule ip filter INPUT tcp sport 853 counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Error: Could not process rule: No such file or directory
    
    add rule ip filter OUTPUT tcp dport 853 counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    [info] Connecting to VPN on port om80 with proto tcp...
    [info] Your VPN public IP is [removed]
    [info] Block DnS-over-TLS to force traffic through tunnel
    Error: Could not process rule: No such file or directory
    
    list table filter
    ^^^^^^
    Error: syntax error, unexpected newline, expecting number
    
    delete rule filter INPUT handle
    ^
    Error: Could not process rule: No such file or directory
    
    list table filter
    ^^^^^^
    Error: syntax error, unexpected newline, expecting number
    
    delete rule filter OUTPUT handle
    ^
    [info] Change DNS servers to 10.0.1.3
    [info] Adding 10.0.1.3 to /etc/resolv.conf
    [info] Allowing DNS lookups (tcp, udp port 53) to server '10.0.1.3'
    Error: Could not process rule: No such file or directory
    
    add rule ip filter INPUT ip saddr 10.0.1.3 tcp sport 53 ct state established counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Error: Could not process rule: No such file or directory
    
    add rule ip filter OUTPUT ip daddr 10.0.1.3 tcp dport 53 ct state new,established counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Error: Could not process rule: No such file or directory
    
    add rule ip filter INPUT ip saddr 10.0.1.3 udp sport 53 ct state established counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Error: Could not process rule: No such file or directory
    
    add rule ip filter OUTPUT ip daddr 10.0.1.3 udp dport 53 ct state new,established counter accept
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    [info] Done
    
    [info] Run danted in background on port 9118
    Sep 11 14:41:12 (1599824472.472185) danted[88]: error: /etc/danted.conf: problem on line 2 near token "tun0": could not resolve hostname "tun0": Name or service not known. Please see the Dante manual for more information
    
    Sep 11 14:41:12 (1599824472.472260) danted[88]: alert: mother[1/1]: shutting down
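    The only pattern I can see is that every failing rule has 'om80' where nft expects a numeric port or a known service name, so the port value looks mangled somewhere upstream. For comparison, this is the form nft would normally accept (just a sketch, assuming the filter table and chains exist):

    nft add rule ip filter INPUT tcp sport 80 counter accept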

    Any hint?

  11. 32 minutes ago, saarg said:

    If you look at our blog post about modifying our containers, you can add the crontab file at each boot of the container so that it is persistent over updates. The downside of that is you will lose any updates we do regarding the cronjob.

     

    You can modify the root file in /etc/crontabs/ and set the time to when your container runs.

     

    Found it.
    Thanks!
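    For anyone searching later: the file saarg mentions looks roughly like the sketch below inside the container, and the first two fields (minute and hour) are what need changing to move the job; the actual renewal command is whatever the image ships, shown here only as a placeholder:

    # /etc/crontabs/root (inside the container)
    # min  hour  day  month  weekday  command
    0      2     *    *      *        <renewal command shipped by the image>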

  12. 15 hours ago, saarg said:

    If you turn off your server at night the certs will not renew. The cron job is run at 2 in the night.

     

    Have you checked in the browser that the current cert is expiring?

    So, I left the ports redirected and the container running, and indeed the certificates renewed.

    Thanks.

    I have a script that starts letsencrypt (and the containers using it) at 10:00 in the morning and turns them off at 14:00, basically for when I need them. Also, I usually keep ports 80 and 443 un-redirected; I don't really want to keep them open to the internet when there's no need for them.

    Is there any way to configure the hour the cronjob runs (any variable for the docker? I checked on the GitHub but didn't find anything)?

    Thanks again.

  13. 3 hours ago, aptalca said:

    Don't manually run commands inside the container and don't manually delete key files unless we ask you to. We don't provide support for that.

    I did that after the container failed to update the keys (I received an email from Let's Encrypt stating the certs expire in 20 days).

    Anyway, I keep a backup of the whole appdata folder; keys and certs are the way they used to be.
    Nevertheless, it's not updating.

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 01-envfile: executing...
    [cont-init.d] 01-envfile: exited 0.
    [cont-init.d] 10-adduser: executing...
    usermod: no changes
    
    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=Europe/Athens
    URL=somedomain.com
    SUBDOMAINS=nextcloud,oo
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=true
    DHLEVEL=2048
    VALIDATION=http
    DNSPLUGIN=
    [email protected]
    STAGING=
    
    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    SUBDOMAINS entered, processing
    Only subdomains, no URL in cert
    Sub-domains processed are: -d nextcloud.somedomain.com -d oo.somedomain.com
    E-mail address entered: [email protected]
    http validation is selected
    Certificate exists; parameters unchanged; starting nginx
    [cont-init.d] 50-config: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
    
    Server ready

     

  14. So, I guess it's the usual problem:
     

    root@cc3c920d7a5b:/# certbot renew
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Processing /etc/letsencrypt/renewal/nextcloud.somedomain.com.conf
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Cert is due for renewal, auto-renewing...
    Plugins selected: Authenticator standalone, Installer None
    Renewing an existing certificate
    Performing the following challenges:
    http-01 challenge for nextcloud.somedomain.com
    http-01 challenge for oo.somedomain.com
    Cleaning up challenges
    Attempting to renew cert (nextcloud.somedomain.com) from /etc/letsencrypt/renewal/nextcloud.somedomain.com.conf produced an unexpected error: Problem binding to port 80: Could not bind to IPv4 or IPv6.. Skipping.
    All renewal attempts failed. The following certs could not be renewed:
      /etc/letsencrypt/live/nextcloud.somedomain.com/fullchain.pem (failure)
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    
    All renewal attempts failed. The following certs could not be renewed:
      /etc/letsencrypt/live/nextcloud.somedomain.com/fullchain.pem (failure)
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    1 renew failure(s), 0 parse failure(s)

    Of course, ports 80 and 443 are forwarded correctly to the container.
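    If I understand the 'Could not bind to IPv4 or IPv6' part correctly, something inside the container is already listening on port 80 when the standalone authenticator runs (nginx itself, I guess). It can be checked from inside the container with something like this, assuming ss is available there:

    ss -tlnp | grep ':80 '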

     

    Deleting the certificates and keys and trying to get new ones leads it to generate them, download them, and then report the same error (and of course, the new ones don't work):
     

    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator standalone, Installer None
    Obtaining a new certificate
    IMPORTANT NOTES:
    - Congratulations! Your certificate and chain have been saved at:
    /etc/letsencrypt/live/nextcloud.somedomain.com-0002/fullchain.pem
    Your key file has been saved at:
    /etc/letsencrypt/live/nextcloud.somedomain.com-0002/privkey.pem
    Your cert will expire on 2020-04-30. To obtain a new or tweaked
    version of this certificate in the future, simply run certbot
    
    again. To non-interactively renew *all* of your certificates, run
    "certbot renew"
    - If you like Certbot, please consider supporting our work by:
    
    Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
    Donating to EFF: https://eff.org/donate-le
    
    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

     

    Any hint?

    Thanks.
     

  15. On 11/26/2019 at 6:01 AM, Ashe said:

    cd /boot/config/telegram
    Nano chatid
    Add chatid number to file
    Ctrl-x
    Save

    Hi,

    in my /boot/config there's no telegram folder.
    I made it nonetheless and created the chatid file inside with the group chat ID (of course), but to no avail.

     

    I found this:

    /boot/config/plugins/dynamix/telegram

     

    Inside it there's the chatid file, containing a 9-digit number.

    The chat ID for the group is 9 digits too, but with a "g" at the start (10 characters in total).

    I modified it both with the "g" and without, with no results.


    Any hints?

    Thanks.

    EDIT: solved it myself. It needs the "-" symbol before the number, something like:

    -123456789

     

    Cheers.

  16. So, I tried to set up Telegram notifications: I created the bot and got the "token to access HTTP API".
    I configured it in Unraid's notification settings:
    [screenshot of the Telegram notification settings attached]
     

    Disabled it, re-enabled it.
    Test.

    Nothing.

    Any clue?

    Thanks.

    EDIT: a few hours later it started working.

  17. Hi,

    I'm really interested in this container, but I must admit that after reading here (and the GitHub for it) I still don't understand how to run it (I'm also a really fresh newbie in Linux).
    Could you please make a little step-by-step guide with example files (configs and so on) on how to run it? Especially using an audio card (built-in or not) to output the sound?
    Thanks.
