Posts posted by Adamm

  1. On 2/18/2023 at 5:43 AM, binhex said:

    It did, thanks. Can you try pulling down the latest? This should get it working for you.

     

    Still getting this issue on my QNAP TVS-672XT running QuTS Hero h5.0.1.2277.

     

    [skynet@SkynetNAS ~]$ lsmod
    Module                  Size  Used by    Tainted: P
    xt_ipvs                16384  0
    ip_vs_rr               16384  0
    ip_vs_ftp              16384  0
    ip_vs                 139264 11 xt_ipvs,ip_vs_rr,ip_vs_ftp
    xt_nat                 16384  8
    xt_addrtype            16384  6
    vfio_iommu_type1       36864  0
    vhost_net              24576  1
    vhost                  40960  1 vhost_net
    vhost_iotlb            16384  1 vhost
    macvtap                16384  0
    macvlan                28672  1 macvtap
    tap                    24576  2 vhost_net,macvtap
    tun                    49152  3 vhost_net
    virtio_scsi            20480  0
    virtio_pci             28672  0
    virtio_net             49152  0
    net_failover           20480  1 virtio_net
    failover               16384  1 net_failover
    virtio_mmio            16384  0
    virtio_console         28672  0
    virtio_blk             20480  0
    virtio_balloon         20480  0
    virtio_rng             16384  0
    virtio_ring            28672  8 virtio_scsi,virtio_pci,virtio_net,virtio_mmio,virtio_console,virtio_blk,virtio_balloon,virtio_rng
    virtio                 16384  8 virtio_scsi,virtio_pci,virtio_net,virtio_mmio,virtio_console,virtio_blk,virtio_balloon,virtio_rng
    kvm_intel             225280  6
    kvm                   516096  1 kvm_intel
    thunderbolt_icm        49152  0
    fbdisk                 36864  0
    rfcomm                 69632  0
    ksmbd                 135168  0
    usdm_drv               94208  0
    intel_qat             286720  1 usdm_drv
    uio                    20480  1 intel_qat
    iscsi_tcp              20480  0
    libiscsi_tcp           28672  1 iscsi_tcp
    libiscsi               53248  2 iscsi_tcp,libiscsi_tcp
    scsi_transport_iscsi    90112  4 iscsi_tcp,libiscsi_tcp,libiscsi
    zscst_vdisk           483328  0
    scst                  815104  1 zscst_vdisk
    cfg80211              397312  0
    br_netfilter           24576  0
    bridge                172032  1 br_netfilter
    stp                    16384  1 bridge
    bonding               163840  0
    dummy                  16384  0
    xt_connmark            16384  2
    xt_NFLOG               16384  5
    ip6table_filter        16384  1
    ip6_tables             24576  1 ip6table_filter
    xt_conntrack           16384  7
    xt_TCPMSS              16384  0
    xt_LOG                 16384  0
    xt_set                 16384 15
    ip_set_hash_netiface    45056  1
    ip_set_hash_net        45056 11
    ip_set                 40960  3 xt_set,ip_set_hash_netiface,ip_set_hash_net
    xt_MASQUERADE          16384 14
    xt_REDIRECT            16384  0
    iptable_nat            16384  1
    nf_nat                 36864  5 ip_vs_ftp,xt_nat,xt_MASQUERADE,xt_REDIRECT,iptable_nat
    xt_policy              16384  0
    xt_mark                16384 10
    8021q                  28672  0
    ipv6                  475136 161 br_netfilter,bridge,[permanent]
    uvcvideo              106496  0
    videobuf2_v4l2         24576  1 uvcvideo
    videobuf2_vmalloc      16384  1 uvcvideo
    videobuf2_memops       16384  1 videobuf2_vmalloc
    videobuf2_common       45056  2 uvcvideo,videobuf2_v4l2
    snd_usb_caiaq          49152  0
    snd_usb_audio         262144  0
    snd_usbmidi_lib        28672  1 snd_usb_audio
    snd_seq_midi           16384  0
    snd_rawmidi            32768  3 snd_usb_caiaq,snd_usbmidi_lib,snd_seq_midi
    fnotify                61440  1
    nfsd                 1208320  1 fnotify
    udf                   114688  0
    isofs                  45056  0
    iTCO_wdt               16384  1
    vfio_pci               61440  0
    irqbypass              16384  4 kvm,vfio_pci
    vfio_virqfd            16384  1 vfio_pci
    vfio                   28672  2 vfio_iommu_type1,vfio_pci
    exfat                  77824  0
    ufsd                  794624  1
    jnl                    32768  1 ufsd
    cdc_acm                32768  0
    pl2303                 24576  0
    usbserial              40960  1 pl2303
    qm2_i2c                16384  0
    zfs                  8581120 21 scst
    icp                   393216  1 zfs
    lpl                   159744  4 zscst_vdisk,scst,zfs,icp
    i2c_imc                20480  0
    intel_ips              24576  0
    drbd                  413696  0
    lru_cache              16384  1 drbd
    flashcache            167936  0
    dm_tier_hro_algo       24576  0
    dm_thin_pool          229376  1 dm_tier_hro_algo
    dm_bio_prison          24576  1 dm_thin_pool
    dm_persistent_data     81920  1 dm_thin_pool
    hal_netlink            16384  0
    atlantic              266240  0
    r8152                 221184  0
    usbnet                 36864  0
    mii                    16384  1 usbnet
    igb                   225280  0
    e1000e                245760  0
    mv14xx                651264  0
    mpt3sas               368640  0
    scsi_transport_sas     40960  1 mpt3sas
    raid_class             16384  1 mpt3sas
    qla2xxx_qzst          831488  0
    scsi_transport_fc      57344  1 qla2xxx_qzst
    k10temp                16384  0
    coretemp               16384  0
    uas                    28672  0
    usb_storage            69632  4 uas
    xhci_pci               16384  0
    xhci_hcd              184320  1 xhci_pci
    usblp                  24576  0
    uhci_hcd               45056  0
    ehci_pci               16384  0
    ehci_hcd               81920  1 ehci_pci

     

    [skynet@SkynetNAS ~]$ sudo iptables -S
    -P INPUT ACCEPT
    -P FORWARD ACCEPT
    -P OUTPUT ACCEPT
    -N CSFORWARD
    -N DOCKER
    -N DOCKER-ISOLATION-STAGE-1
    -N DOCKER-ISOLATION-STAGE-2
    -N DOCKER-USER
    -N QUFIREWALL
    -N SYSDOCKER
    -N SYSDOCKER-ISOLATION-STAGE-1
    -N SYSDOCKER-ISOLATION-STAGE-2
    -N SYSDOCKER-USER
    -A INPUT -m state --state NEW -j QUFIREWALL
    -A FORWARD -j DOCKER-USER
    -A FORWARD -j DOCKER-ISOLATION-STAGE-1
    -A FORWARD -o lxcbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o lxcbr0 -j DOCKER
    -A FORWARD -i lxcbr0 ! -o lxcbr0 -j ACCEPT
    -A FORWARD -i lxcbr0 -o lxcbr0 -j ACCEPT
    -A FORWARD -j SYSDOCKER-USER
    -A FORWARD -j SYSDOCKER-ISOLATION-STAGE-1
    -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o docker0 -j SYSDOCKER
    -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
    -A FORWARD -i docker0 -o docker0 -j ACCEPT
    -A FORWARD -j CSFORWARD
    -A OUTPUT -m set --match-set BRNOIPSET src,dst -j DROP
    -A CSFORWARD -i lxdbr0 -o lxdbr0 -j ACCEPT
    -A CSFORWARD -i lxcbr0 -o lxcbr0 -j ACCEPT
    -A CSFORWARD -i docker0 -o docker0 -j ACCEPT
    -A CSFORWARD -o docker0 -m conntrack --ctstate INVALID,NEW -j DROP
    -A CSFORWARD -o lxcbr0 -m conntrack --ctstate INVALID,NEW -j DROP
    -A CSFORWARD -o lxdbr0 -m conntrack --ctstate INVALID,NEW -j DROP
    -A DOCKER -d 10.0.3.3/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 8989 -j ACCEPT
    -A DOCKER -d 10.0.3.5/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 6767 -j ACCEPT
    -A DOCKER -d 10.0.3.2/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 9696 -j ACCEPT
    -A DOCKER -d 10.0.3.7/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 7878 -j ACCEPT
    -A DOCKER -d 10.0.3.8/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 8920 -j ACCEPT
    -A DOCKER -d 10.0.3.8/32 ! -i lxcbr0 -o lxcbr0 -p tcp -m tcp --dport 8096 -j ACCEPT
    -A DOCKER -d 10.0.3.8/32 ! -i lxcbr0 -o lxcbr0 -p udp -m udp --dport 7359 -j ACCEPT
    -A DOCKER -d 10.0.3.8/32 ! -i lxcbr0 -o lxcbr0 -p udp -m udp --dport 1900 -j ACCEPT
    -A DOCKER-ISOLATION-STAGE-1 -i lxcbr0 ! -o lxcbr0 -j DOCKER-ISOLATION-STAGE-2
    -A DOCKER-ISOLATION-STAGE-1 -j RETURN
    -A DOCKER-ISOLATION-STAGE-2 -o lxcbr0 -j DROP
    -A DOCKER-ISOLATION-STAGE-2 -j RETURN
    -A DOCKER-USER -j RETURN
    -A QUFIREWALL -i lxdbr0 -j ACCEPT
    -A QUFIREWALL -i docker0 -j ACCEPT
    -A QUFIREWALL -i lxcbr0 -j ACCEPT
    -A QUFIREWALL ! -i lo -m set --match-set PSIRT.ipv4 src -j NFLOG --nflog-prefix  "RULE=4 ACT=DROP"
    -A QUFIREWALL ! -i lo -m set --match-set PSIRT.ipv4 src -j DROP
    -A QUFIREWALL ! -i lo -m set --match-set TOR.ipv4 src -j NFLOG --nflog-prefix  "RULE=5 ACT=DROP"
    -A QUFIREWALL ! -i lo -m set --match-set TOR.ipv4 src -j DROP
    -A QUFIREWALL -s 192.168.1.0/24 -i eth0 -j ACCEPT
    -A QUFIREWALL -s 10.8.0.0/24 -i eth0 -j ACCEPT
    -A QUFIREWALL -s 192.168.1.0/24 -i qvs0 -j ACCEPT
    -A QUFIREWALL ! -i lo -j NFLOG --nflog-prefix  "RULE=9 ACT=DROP"
    -A QUFIREWALL ! -i lo -j DROP
    -A SYSDOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j SYSDOCKER-ISOLATION-STAGE-2
    -A SYSDOCKER-ISOLATION-STAGE-1 -j RETURN
    -A SYSDOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
    -A SYSDOCKER-ISOLATION-STAGE-2 -j RETURN
    -A SYSDOCKER-USER -j RETURN
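
    For reference, a single chain from the dump above can be listed on its own, which is handy when checking individual rules:

    # list only the QUFIREWALL chain's rules
    sudo iptables -S QUFIREWALL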

     

  2. 5 hours ago, rbrosnan said:

    Hello,

    I'm having a mighty struggle over here getting binhex qBittorrentVPN to work with Sonarr. Files download fine using the VPN, but Sonarr is giving the infamous "Import failed, path does not exist" error.

     

    I'm using the linuxserver docker of Sonarr, but I tried the binhex variant during my hours of troubleshooting today to no avail.

     

    I'm trying to avoid using remote mappings on Sonarr, but I would happily use them if it meant things would work; thus far, however, they have not.

    Here's my current config, which isn't working (though I have SABnzbd working flawlessly):

     

    I've changed the /data container path to /downloads (though I've also, as mentioned, tried leaving it at /data and using remote mappings on Sonarr, and that didn't work either; more on that later).

     

    qbittorrent "downloads":

    container path: /downloads

    host path: /mnt/user/downloads/complete

     

    qbittorrent "incomplete-downloads"

    container path: /incomplete-downloads

    host path: /mnt/user/downloads/incomplete

     

    qbittorrent webui paths:

    Default Save Path: /downloads

    Keep incomplete torrents in: /incomplete-downloads

     

    sonarr "downloads"

    container path: /downloads

    host path: /mnt/user/downloads/complete

     

    It's my understanding that since the qbittorrent "downloads" host path matches the sonarr "downloads" host path, I shouldn't have to do any remote mappings. Is this correct?

     

    If so, I'm at a complete loss as to why the import is failing.
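
    In docker run terms, the mappings above amount to something like the following (an illustrative sketch only; container names are placeholders, and VPN credentials, ports, and the other required settings are omitted):

    # both containers mount the same host folder at /downloads, so a path
    # qBittorrent reports back should resolve identically for Sonarr
    docker run -d --name qbittorrentvpn \
      -v /mnt/user/downloads/complete:/downloads \
      -v /mnt/user/downloads/incomplete:/incomplete-downloads \
      binhex/arch-qbittorrentvpn

    docker run -d --name sonarr \
      -v /mnt/user/downloads/complete:/downloads \
      linuxserver/sonarr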

     

    Prior to this current config, I was messing around with more common directory structures such as:

    qbittorrent "data":

    container path: /data

    host path: /mnt/user/downloads/

     

    qbittorrent webui paths:

    Default Save Path: /data/complete

    Keep incomplete torrents in: /data/incomplete

     

    sonarr "downloads"

    container path: /downloads

    host path: /mnt/user/downloads/complete

     

    sonarr webui remote mapping:

    remote path: /data/complete (also tried /data even though I thought it wouldn't work)

    local path: /downloads (also tried /downloads/complete, and tried again after changing the host path to /mnt/user/downloads)

     

    I've included logs, and my current config as it stands... which doesn't work. Any ideas? I've been through previous forum posts all day but haven't really found my exact setup, so I'm really in need of a helping hand.

     

    Thank you.

     

    [Attachments: qbittorrent-dirs.PNG, qbittorrent-seeding.PNG, qbittorrent-dockermappings.PNG, sonarr-dockermappings.PNG, qbittorrent-logs.PNG, sonarr-logs.PNG]

     

     

    It's a bug with the latest qBittorrent; revert to "binhex/arch-qbittorrentvpn:4.2.5-1-12".
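
    On Unraid, the revert is typically done by appending that tag to the container template's Repository field; from a shell, the equivalent would be to pull the tagged image and recreate the container from it:

    # pull the known-good tag, then recreate the container
    # from it instead of :latest
    docker pull binhex/arch-qbittorrentvpn:4.2.5-1-12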

  3. On 2/29/2020 at 3:14 AM, binhex said:

    I'm not too sure what's going on either, as for me it's completely working. Try connecting to endpoint 'sweden', as that is what I'm connected to, and see if port forwarding is operational there.

     

    So I finally found the issue after spending too many hours investigating and pulling my hair out. For reference, the reason the port was shown as closed is that PIA only reports a port as open when data is flowing through it; since the port was never being updated in qBittorrent, that never happened.

     

     

    Anyway, the issue turns out to be how you interact with qBittorrent's API in qbittorrent.sh: the curl commands you use support neither HTTPS nor self-signed certs.

     

    curl -i -X POST -d "json={\"listen_port\": ${VPN_INCOMING_PORT}}" "http://localhost:${WEBUI_PORT}/api/v2/app/setPreferences" &> /dev/null

     

    The above command will fail if the user has HTTPS enabled, as you specifically query http://localhost:(etc); this needs to be changed to dynamically detect whether HTTPS is enabled.

     

    Secondly, the request will fail if the user has a self-signed cert:

     

    curl: (60) SSL certificate problem: self signed certificate
    More details here: https://curl.haxx.se/docs/sslcerts.html
    
    curl failed to verify the legitimacy of the server and therefore could not
    establish a secure connection to it. To learn more about this situation and
    how to fix it, please visit the web page mentioned above.

     

    To fix this, you will also need to add the -k flag (-k, --insecure: allow insecure server connections when using SSL) so that curl accepts self-signed certs.
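
    A minimal sketch of the combined change (the config path and HTTPS key used for detection are assumptions; VPN_INCOMING_PORT and WEBUI_PORT are the variables from the original command):

    # detect whether the qBittorrent WebUI has HTTPS enabled; the config
    # location inside the container and the key name are assumptions
    if grep -qF 'WebUI\HTTPS\Enabled=true' /config/qBittorrent/config/qBittorrent.conf; then
        web_proto="https"
    else
        web_proto="http"
    fi

    # -k lets curl accept a self-signed certificate when HTTPS is in use
    curl -k -i -X POST -d "json={\"listen_port\": ${VPN_INCOMING_PORT}}" "${web_proto}://localhost:${WEBUI_PORT}/api/v2/app/setPreferences" &> /dev/null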

     

     

    With that being said, I tested the changes locally and everything works smoothly. Hope this helps. 

     

     

    Edit: Made a pull request for convenience.

  4. 49 minutes ago, binhex said:

    I'm not too sure what's going on either, as for me it's completely working. Try connecting to endpoint 'sweden', as that is what I'm connected to, and see if port forwarding is operational there.

     

    Tried 5 different endpoints, still no dice. I put in a ticket with PIA to see if they have any idea why the ports aren't being opened. How frustrating.

  5. 30 minutes ago, binhex said:

    There is a time limit for hitting the PIA API: you will not be able to communicate with the API more than 2 minutes after establishing the VPN tunnel. I would suspect that by the time you are running your code, it's after that 2-minute period.

     

    Right, I wasn't aware of that limitation. In any case, while the API now returns a port, it never seems to actually get opened. I also confirmed this behavior directly on my router using a separate OpenVPN instance.

     

    skynet@RT-AX88U-DC28:/tmp/home/root# curl --interface tun11 icanhazip.com
    45.12.220.212
    skynet@RT-AX88U-DC28:/tmp/home/root# curl -4 --interface tun11 http://209.222.18.222:2000/?client_id=365732bef95553e634f41e19dba0e3cdfa0d65ef7979f89dba5654d43b0a275j
    {"port":36870}

     

    [Screenshot: 5aY2Tpw.png]

     

     

    2020-02-29 02:56:09,464 DEBG 'start-script' stdout output:
    [info] Curl successful for http://209.222.18.222:2000/?client_id=6ff18107ffc4a1244594fea6d80e344ed8ccf0c8ef75c864c151a4e2f7f91812, response code 200
    
    2020-02-29 02:56:09,495 DEBG 'start-script' stdout output:
    [info] Successfully assigned incoming port 47431
    
    2020-02-29 02:56:10,456 DEBG 'start-script' stdout output:
    [info] Successfully retrieved external IP address 172.98.67.91

     

    [Screenshot: TbUZhxC.png]

     

     

     

    Not quite sure what to make of the whole situation, as it seems port forwarding on their end is broken?

  6. Seems like the API for assigning ports is failing for me; here's what happens when I SSH into the container and issue the commands manually:

     

    [root@6da3ac31a1e6 root]# client_id=`head -n 100 /dev/urandom | sha256sum | tr -d " -"`
    [root@6da3ac31a1e6 root]# echo $client_id
    fcff19a59ef4d909cb7b21de116ffea6258988b53d090c823b4efc36c45076d6
    [root@6da3ac31a1e6 root]# curl --interface tun0 http://209.222.18.222:2000/?client_id=fcff19a59ef4d909cb7b21de116ffea6258988b53d090c823b4efc36c45076d6
    curl: (56) Recv failure: Connection reset by peer
    [root@6da3ac31a1e6 root]# curl --interface tun0 icanhazip.com
    45.12.220.184
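
    One way to see more detail on where the connection is being reset is to repeat the request with curl's verbose flag (assuming $client_id is still set as above):

    # -v prints the full request/response exchange, showing where the reset occurs
    curl -v --interface tun0 "http://209.222.18.222:2000/?client_id=${client_id}"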

     

    Any idea why this might be happening?

  7. 5 minutes ago, binhex said:

    Although this LOOKS like a fix, it isn't. 4.2.1.1-03 had a bug in that the port-checking code wasn't working correctly, and thus it always reported the port was open even when it wasn't (corrected in 4.2.1.1-04).

     

    So I assume you still have the same issue (port closed), you just don't know it. The way to prove this is to do the following:

     

    1. Open the /config/supervisord.log file with your favourite editor.

    2. Find the string 'Successfully retrieved external IP address' and note the IP address (ensure it's the last match in the log).

    3. Find the string 'Successfully assigned incoming port' and note the port number (ensure it's the last match in the log).

    4. Open a web browser, go to https://www.yougetsignal.com/tools/open-ports/, enter the IP and port from steps 2 and 3, and click 'Check'.

     

    I would assume you will find the port is closed.
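
    For reference, steps 2 and 3 can also be done in one go from the container console (a sketch using the log strings quoted above):

    # last reported external IP address and last assigned incoming port
    grep 'Successfully retrieved external IP address' /config/supervisord.log | tail -n 1
    grep 'Successfully assigned incoming port' /config/supervisord.log | tail -n 1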

     

    Ah, you're right, the port is closed. So it looks like the port is never being opened in the first place, as the result is the same after restarting the Docker image.

  8. 2 hours ago, binhex said:

    OK, I will spin up a container from this image later today and see if I can replicate the issue.

     

    So I went through my Sonarr bot logs and noticed this issue began exactly a week ago, the same day 4.2.1-1-04 was tagged. I reverted to 4.2.1-1-03 to experiment and the issue is resolved. It seems to be a problem with that particular build.

  9. 1 hour ago, binhex said:

    Well, it looks like it's working as expected and for whatever reason the port is being closed. It's possible there are issues with that endpoint; try another port-forward-enabled endpoint, preferably a non-Canadian one for a start, and see how that goes.

     

    Speeds will drop to 0; this is to prevent IP leakage whilst the tunnel is torn down and re-created, but it should not require a restart of the container. As can be seen in your log snippet, it's coded in such a way that it will re-create the tunnel and get a fresh incoming port with no manual intervention (i.e. restarting the container) required.

     

    I tried a different endpoint, same result. After 30 minutes the port was marked as closed, and once OpenVPN restarted, torrents were unable to establish connections:

     

    [Screenshot: Fu8lFQF.png]

     

     

    Here is the full log:

     

    https://pastebin.com/SecBFFBL

     

  10. So I've been having an issue for the last week or so, after successfully using this image (latest branch) on my QNAP TVS-672XT NAS for the past year. Every 30 minutes the watchdog marks the incoming port as closed and restarts OpenVPN, etc. To make things worse, once this happens all torrent connections fail (0 seeders, 0 peers) until the Docker container is manually restarted.

     

    I'm currently using PIA with the Toronto endpoint, so everything should be fine there. Below is a snippet of the end of my log:

     

    https://pastebin.com/taChzBLm

     

    Let me know if you require any other info.

     
