Everything posted by binhex

  1. ip leakage could only occur if the application (privoxy and microsocks) started straight away at container start, which is not the case: all checks have to pass before the application can start, so no ip leakage can occur. see here for the pre-run check that happens BEFORE the application starts:- https://github.com/binhex/arch-privoxyvpn/blob/4a7f7ae8f4eac00762fa881a05b52f262d2e75e5/run/nobody/watchdog.sh#L14 link to the script sourced in the line above:- https://github.com/binhex/arch-int-vpn/blob/master/run/nobody/preruncheck.sh ping is useful for validating and debugging connectivity.
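the pre-run gate described above can be sketched roughly as follows; this is a hypothetical illustration, not the actual script, and the function name, the overridable sysfs path and the interface name `tun0` are assumptions for the example:

```shell
# hedged sketch: refuse to start the application until the VPN tunnel
# interface exists, mirroring the idea of the linked preruncheck.sh
wait_for_tunnel() {
  local iface="${1:-tun0}" tries="${2:-30}" sysdir="${3:-/sys/class/net}"
  local i=0
  # poll for the tunnel interface, giving up after 'tries' attempts
  while [ ! -d "${sysdir}/${iface}" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1   # gave up: do NOT start the application
    fi
    sleep 1
  done
  return 0       # tunnel is up: safe to start privoxy/microsocks
}
```

because the application is only launched after this returns success, traffic can never flow outside the tunnel at startup.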
  2. left click the icon, select 'edit', and set DEBUG to true
  3. you really need to have DEBUG turned on (set to true) when the issue happens, and then post the /config/supervisord.log file for me to analyse exactly what the cause is.
  4. you really need to have DEBUG turned on (set to true) when the issue happens, and then post the supervisord.log file for me to analyse exactly what the cause is.
  5. ok, let's start with the most common issue, the LAN network. from your log:- LAN_NETWORK defined as '192.168.1.0/24' can you check this against Q4:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
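a common mistake is setting LAN_NETWORK to a host address (e.g. 192.168.1.5/24) rather than the network address. the helper below is a hypothetical sketch (function name and all values are illustrative) showing how to derive the correct network address from a host IP and prefix:

```shell
# hedged sketch: compute the network address for a host IP and CIDR prefix,
# so you can sanity-check a LAN_NETWORK value such as '192.168.1.0/24'
network_for() {
  local ip="$1" prefix="$2"
  local IFS=.
  set -- $ip   # split the dotted quad into $1..$4
  local addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  local net=$(( addr & mask ))
  printf '%d.%d.%d.%d/%d\n' \
    $(( (net >> 24) & 255 )) $(( (net >> 16) & 255 )) \
    $(( (net >> 8) & 255 ))  $(( net & 255 )) "$prefix"
}
```

for example, if your unraid server is 192.168.1.5 with a /24 netmask, the value to use is 192.168.1.0/24.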
  6. correct. the reason is that the processes running inside the container are not aware of any host port assignments, so you can change the host port to anything and the process (in this case privoxy) isn't aware of that change; this is all managed by docker and is transparent to container processes.
  7. simply left click the container and select 'edit', go down to NAME_SERVERS and set the value to the following:- 84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1 then apply the change.
  8. nope, it's never been 8119 for the container port. to be clear, you can choose whatever port you want on the host side (as long as it's not in use) and that will work just fine, so you can set it to 8119 on the HOST side, no problem.
  9. yes, you should not change the container port, you should only ever change the host port; the container port is hard set to 8118 and cannot be changed.
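putting points 8 and 9 together, the host side of the mapping is free while the container side stays 8118. a hypothetical docker run fragment (the container name is illustrative; the image name is the privoxyvpn image referenced earlier in this thread):

```shell
# publish privoxy's fixed container port 8118 on host port 8119:
# the left-hand (host) number is your choice, the right-hand
# (container) number must stay 8118
docker run -d \
  --name=privoxyvpn \
  -p 8119:8118 \
  binhex/arch-privoxyvpn
```

clients on your LAN would then point at <host-ip>:8119, and docker transparently forwards that to 8118 inside the container.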
  10. this is expected when RCLONE_DIRECTION=both, as a local to remote copy will happen first, overwriting changes made to the remote file, and then a remote to local sync will happen. it's all about what you want to achieve here: if you only want to sync changes from remote to local then set RCLONE_DIRECTION accordingly; the same goes for local to remote.
  11. precisely, a copy will never delete files. if you set RCLONE_DIRECTION=both and RCLONE_OPERATION=sync then a sync will happen (deleting everything on the remote that is not present on the local); it syncs from local to remote first, and will then sync the other direction with nothing to do (all files will already be in a synchronised state).
  12. i'm sorry, i have read this 3 times and i don't know what you are trying to achieve here: where is the 'old' file, local or remote? and where is the 'new' file, local or remote? what is RCLONE_DIRECTION now set to? EDIT - my advice is to start small:
can you copy a new file from local to remote? yes or no?
can you copy a new file from remote to local? yes or no?
can you change a local file, copy it to remote and see the change? yes or no?
can you change a remote file, copy it to local and see the change? yes or no?
  13. a sync operation has to work in one direction first; it cannot sync local and remote simultaneously, as one of them has to be the source of truth. for example, a scenario:-
local has 3 new files
remote cloud has 5 new files
if you set RCLONE_OPERATION=sync and RCLONE_DIRECTION=both then what happens? well, it has to sync in one direction before it can sync in the other, so files will be matched from local to remote: all 5 files on the remote will be deleted and replaced with the 3 files that are local. it will then sync from remote to local, so what will happen? the 3 files that now exist on the remote match the local, so no action is taken, and the sync is now complete in both directions. in short, a sync operation cannot sync 3 files from the local to the remote and also keep the 5 existing files; that is NOT a sync operation, that is a copy operation. it is also not normal to set RCLONE_OPERATION to sync and RCLONE_DIRECTION to both, as there is little point, as described above. PLEASE be careful with the sync operation: it WILL delete everything on the destination, and if you are syncing from remote cloud to local server then that means potentially deleting everything on your unraid server that is not present on the remote cloud.
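the copy-vs-sync semantics above can be illustrated with plain directories standing in for 'local' and 'remote' (a simulation for clarity, not real rclone; the function names are made up for this example):

```shell
# hedged illustration: copy adds/overwrites but never deletes,
# sync makes the destination identical to the source (deleting extras)
copy_dir() {
  cp -r "$1"/. "$2"/
}
sync_dir() {
  rm -rf "$2"
  mkdir -p "$2"
  cp -r "$1"/. "$2"/
}
```

running a local-to-remote sync_dir against a 'remote' holding 5 files and a 'local' holding 3 leaves the remote with exactly the 3 local files, matching the scenario described above; a subsequent remote-to-local pass then has nothing to do.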
  14. nope, that is highly unlikely, i would suspect PEBKAC 🙂
  15. i do know what it is and yes, it's included in my image; it's the keditbookmarks application, a plugin/addon for krusader. i have a butt ton of them included with my image to try and reduce the 'can you just add....' requests :-). fyi, this is the list of packages i include in my image:-
krusader p7zip unarj xz zip lhasa arj unace ntfs-3g kde-cli-tools kio-extras kdiff3 keditbookmarks kompare konsole krename ktexteditor breeze-icons
  16. just to be clear, are you talking about registering magnet links for your web browser, or simply copying and pasting magnet links via 'file/add torrent link/<paste magnet into 'Download Torrents from their URLs or Magnet links' field>'? (this works every time for me). edit - magnet links sent from prowlarr to qbittorrent also work.
  17. i'm using http and magnet links with qbittorrent with no issue; what makes you think you need https for magnets?
  18. unlikely, most people are not doing what you are trying to achieve; it's difficult, and tbh i've never done this. a good start! not so good: you should NOT be adding port 56000 to VPN_INPUT_PORTS or VPN_OUTPUT_PORTS, it's a port used to communicate externally only over the vpn tunnel. this is also a bad idea: the incoming port 56000 should not be forwarded on your router, please remove it. that is good! you will also need to add ALL of the following ports as VPN_INPUT_PORTS:-
-p 1900:1900/udp \
-p 3005:3005 \
-p 5353:5353/udp \
-p 8324:8324 \
-p 32410:32410/udp \
-p 32412:32412/udp \
-p 32413:32413/udp \
-p 32414:32414/udp \
-p 32469:32469
with all of the above done it MAY work, good luck.
  19. 😞 i can only assume this is the case then, nothing to do but wait until its working again.
  20. just restarted my container and i don't see the same, i've got the normal listing:-
2021-11-08 16:15:03,371 DEBG 'start-script' stdout output: [info] PIA endpoint 'nl-amsterdam.privacy.network' is in the list of endpoints that support port forwarding
2021-11-08 16:15:03,372 DEBG 'start-script' stdout output: [info] List of PIA endpoints that support port forwarding:-
2021-11-08 16:15:03,372 DEBG 'start-script' stdout output: [info] al.privacy.network
2021-11-08 16:15:03,372 DEBG 'start-script' stdout output: [info] ad.privacy.network [info] austria.privacy.network [info] brussels.privacy.network [info] ba.privacy.network [info] sofia.privacy.network [info] zagreb.privacy.network [info] czech.privacy.network [info] denmark.privacy.network [info] denmark-2.privacy.network [info] ee.privacy.network [info] fi.privacy.network
2021-11-08 16:15:03,372 DEBG 'start-script' stdout output: [info] fi-2.privacy.network [info] france.privacy.network [info] de-berlin.privacy.network [info] de-frankfurt.privacy.network [info] gr.privacy.network [info] hungary.privacy.network [info] is.privacy.network [info] ireland.privacy.network [info] man.privacy.network [info] italy.privacy.network [info] italy-2.privacy.network [info] lv.privacy.network [info] liechtenstein.privacy.network [info] lt.privacy.network [info] lu.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] mk.privacy.network [info] malta.privacy.network [info] md.privacy.network [info] monaco.privacy.network [info] montenegro.privacy.network [info] nl-amsterdam.privacy.network [info] no.privacy.network [info] poland.privacy.network [info] pt.privacy.network [info] ro.privacy.network [info] rs.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] sk.privacy.network [info] spain.privacy.network [info] sweden.privacy.network [info] sweden-2.privacy.network [info] swiss.privacy.network [info] ua.privacy.network [info] uk-london.privacy.network [info] uk-southampton.privacy.network [info] uk-manchester.privacy.network [info] uk-2.privacy.network [info] bahamas.privacy.network [info] ca-toronto.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] ca-montreal.privacy.network [info] ca-vancouver.privacy.network [info] ca-ontario.privacy.network [info] greenland.privacy.network [info] mexico.privacy.network [info] panama.privacy.network [info] ar.privacy.network [info] br.privacy.network [info] venezuela.privacy.network [info] yerevan.privacy.network [info] bangladesh.privacy.network [info] cambodia.privacy.network [info] china.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] cyprus.privacy.network [info] georgia.privacy.network [info] hk.privacy.network [info] in.privacy.network [info] israel.privacy.network [info] japan.privacy.network [info] japan-2.privacy.network [info] kazakhstan.privacy.network [info] macau.privacy.network [info] mongolia.privacy.network [info] philippines.privacy.network [info] qatar.privacy.network [info] saudiarabia.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] sg.privacy.network [info] srilanka.privacy.network [info] taiwan.privacy.network [info] tr.privacy.network [info] ae.privacy.network [info] vietnam.privacy.network [info] au-sydney.privacy.network [info] aus-melbourne.privacy.network [info] aus-perth.privacy.network [info] nz.privacy.network
2021-11-08 16:15:03,373 DEBG 'start-script' stdout output: [info] dz.privacy.network [info] egypt.privacy.network [info] morocco.privacy.network [info] nigeria.privacy.network [info] za.privacy.network
  21. i would guess it's minidlna crashing. i have done a quick scan of my log, as i use this container myself, and i see no matches for 'segfault', so i'm not sure what's causing it, but it doesn't look to be normal behaviour for minidlna to crash regularly.
  22. i would, based on how many entries there are in there. simply open the file you placed in /config/openvpn/<file with ovpn extension> with something like notepad++/atom/vscode, remove all but 3 of the 'remote' lines in the file, then save and restart the container.
  23. you've got a LOT of remote entries in there! are you sure you need all of them? keep in mind each hostname will have to be resolved, and each one could contain multiple ip addresses, so expect a long startup time:-
remote se-sto-017.mullvad.net 1194
remote se-sto-009.mullvad.net 1194
remote se-sto-010.mullvad.net 1194
remote se-sto-018.mullvad.net 1194
remote se-sto-011.mullvad.net 1194
remote se-sto-006.mullvad.net 1194
remote se-sto-023.mullvad.net 1194
remote se-sto-014.mullvad.net 1194
remote se-sto-020.mullvad.net 1194
remote se-sto-016.mullvad.net 1194
remote se-sto-008.mullvad.net 1194
remote se-sto-007.mullvad.net 1194
remote se-sto-019.mullvad.net 1194
remote se-sto-022.mullvad.net 1194
remote se-sto-012.mullvad.net 1194
remote se-sto-021.mullvad.net 1194
remote se-sto-015.mullvad.net 1194
remote se-sto-013.mullvad.net 1194
my advice is to leave this at 2 or 3 entries only, then repost your log.
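if you'd rather not trim the 'remote' lines by hand in an editor, the same edit can be sketched as a small shell helper; this is a hypothetical example (the function name is made up), printing the trimmed config to stdout so you can redirect it to a new file:

```shell
# hedged sketch: keep only the first 3 'remote' lines of an .ovpn file,
# passing every other line through untouched
trim_remotes() {
  local keep="${2:-3}"
  awk -v keep="$keep" '/^remote /{ if (++n > keep) next } { print }' "$1"
}
```

usage would look something like: trim_remotes myprovider.ovpn > trimmed.ovpn, then replace the original file and restart the container.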
  24. can you paste screenshots of your container configuration? ensure you have 'advanced view' toggled on and expand 'show more settings'.