


Posts posted by Jeffarese

  1. 10 minutes ago, binhex said:

    correct, you dont need to forward ports but you do still need a port to connect to rutorrent web ui, and this will have to be unique for each container, change only the host side NOT the container port.

    Ah yes! I already had this setup in the past, I have that sorted out :) Thanks!

  2. lol sorry, my mind was in another place. Yeah, I meant of your image :P
    Each of the containers uses a different PIA server (endpoint), so each of them receives a different IP and requests a different port, even though all the containers are running on the same server.


    How would I have host port conflicts? I don't need to forward host ports since the ports are open by the VPN, right?
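For anyone following along, this is the pattern binhex describes, sketched as two invocations (container names, image tag, and port numbers here are illustrative assumptions, not my actual config). The container side of the web UI mapping stays fixed; only the host side changes per instance:

```shell
# Sketch only -- names and ports are examples. The ruTorrent web UI
# listens on 9080 inside the container; each instance gets a unique
# HOST port mapped onto that same container port.
docker run -d --name=rtorrentvpn-1 --cap-add=NET_ADMIN \
  -p 9080:9080 binhex/arch-rtorrentvpn   # host 9080 -> container 9080

docker run -d --name=rtorrentvpn-2 --cap-add=NET_ADMIN \
  -p 9081:9080 binhex/arch-rtorrentvpn   # host 9081 -> container 9080
```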

  3. 13 minutes ago, binhex said:

    yes, leave the container running and it will sort it all out for you, i just had my port close (vpn provider issue) and my scripts kicked in to reconfigure for a new incoming port automagically.

    Correct! It sorted itself out after a while and now it shows green!


    Unrelated question about PIA:

    Is there any problem if I have multiple instances of your script using different server configs (one container has sweden.ovpn, another has france.ovpn) so each container requests a different port?
    I run multiple containers of your image to better organize my files.
    It seems to be working OK: all of them show the green icon now and seem to be seeding fine.
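For reference, this is roughly how I keep the endpoints separate (paths, names, and the omitted env vars are assumptions to adapt, not a complete recipe): each container gets its own /config volume, and each /config/openvpn folder holds exactly one PIA .ovpn file, so each instance dials a different endpoint and asks PIA for its own forwarded port.

```shell
# Sketch only: paths/names are assumptions, and the required env vars
# (VPN credentials, VPN_PROV=pia, port mappings, etc.) are omitted.
docker run -d --name=rtorrent-sweden --cap-add=NET_ADMIN \
  -v /mnt/user/appdata/rtorrent-sweden:/config \
  binhex/arch-rtorrentvpn    # /config/openvpn/ contains sweden.ovpn

docker run -d --name=rtorrent-france --cap-add=NET_ADMIN \
  -v /mnt/user/appdata/rtorrent-france:/config \
  binhex/arch-rtorrentvpn    # /config/openvpn/ contains france.ovpn
```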

  4. Is the PIA auto port assignment working correctly?


    I see the messages in the logs about retrieving and assigning the port from PIA:


    [info] Successfully assigned incoming port 55730
    [info] Checking we can resolve name 'www.google.com' to address...
    [info] DNS operational, we can resolve name 'www.google.com' to address [REDACTED]
    [info] Attempting to get external IP using Name Server 'ns1.google.com'...
    [info] Successfully retrieved external IP address [REDACTED]
    [info] rTorrent listening interface IP and VPN provider IP [REDACTED] different, marking for reconfigure
    [info] rTorrent not running
    [info] rTorrent incoming port 49160 and VPN incoming port 55730 different, marking for reconfigure

    However, the port reported in the interface bottom bar is a different one (the one I had previously configured) and it shows the red icon saying the port is not open.
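A quick way to see whether the two ports actually disagree is to pull them out of the log lines above (the log path is an assumption; binhex containers log to /config/supervisord.log, so here a sample of the lines from above stands in for the real file):

```shell
# Extract the PIA-assigned port and the port rTorrent is using from the
# log text, then flag a mismatch (which the script should reconcile).
log='[info] Successfully assigned incoming port 55730
[info] rTorrent incoming port 49160 and VPN incoming port 55730 different, marking for reconfigure'

vpn_port=$(printf '%s\n' "$log" | sed -n 's/.*Successfully assigned incoming port \([0-9]*\).*/\1/p')
rt_port=$(printf '%s\n' "$log" | sed -n 's/.*rTorrent incoming port \([0-9]*\) and.*/\1/p')

if [ "$vpn_port" != "$rt_port" ]; then
  echo "mismatch: rTorrent=$rt_port vs VPN=$vpn_port (reconfigure pending)"
fi
```

Against the real file you would replace the here-string with something like `docker exec <container> cat /config/supervisord.log`.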

  5. 8 hours ago, ljm42 said:

    It is possible in theory but we haven't figured it out yet. This is the thread you are looking for:


    I had already seen that post, but there's still no info about how to do what I need :(

  6. On 12/6/2019 at 9:52 PM, Dataone said:

    At least by default I assume so, yes. My containers using bridged all go through the vpn and all containers using br0 use my home network.


    I'm sure you can set some iptable/routing rules to modify this if you liked though

    In my setup, containers using custom networks still go through the vpn 😕

  7. On 10/16/2019 at 6:57 AM, ljm42 said:

    In the future it may be possible to restrict it so that only specific Dockers use the VPN tunnel.  Until then, you may need to disable the tunnel in order to check for plugin updates or perform other Unraid administrative tasks.

    Any rough estimate of when this is going to be possible? This would be the killer feature, since routing ALL the traffic seems like a bit too much.



  8. Hey, one question.


    Given that I'm running on a fairly powerful machine (R9 3900X, 64 GB RAM) with plenty of resources, are there any settings/tweaks that can be done to increase the performance further?


    I'm currently sitting at 1700 torrents and sometimes I get timeouts; it seems that nginx can't handle it that well (even though rTorrent itself is a beast).

  9. 1 hour ago, binhex said:

    probably cos you exec'd in as user root not user 'nobody', im assuming you have enabled irssi right?.

    Ah yeah, that seems to be the problem. 

    In order to access the container I do


    docker exec -it binhex-rtorrentvpn /bin/bash


    That logs me in as root.


    What is the normal procedure to access it as nobody?




    docker exec -it --user nobody binhex-rtorrentvpn /bin/bash?


  10. Jackett was auto-updated and a segfault appeared in the logs:


    Oct 24 02:07:54 Orthanc Docker Auto Update: Stopping jackett
    Oct 24 02:07:54 Orthanc kernel: jackett[15933]: segfault at 10 ip 000015497d9b8fa0 sp 00007ffcf94158d8 error 4 in libpthread-2.27.so[15497d9af000+1a000]
    Oct 24 02:07:54 Orthanc kernel: Code: 07 00 00 00 48 89 df b8 ca 00 00 00 0f 05 64 48 c7 04 25 f0 02 00 00 00 00 00 00 b8 83 00 00 00 e9 a8 fb ff ff 0f 1f 44 00 00 <8b> 47 10 89 c2 81 e2 7f 01 00 00 90 83 e0 7c 0f 85 9b 00 00 00 48
    Oct 24 02:07:57 Orthanc kernel: veth13798ed: renamed from eth0
    Oct 24 02:07:57 Orthanc kernel: br-d45ab5905980: port 10(veth98599ed) entered disabled state
    Oct 24 02:07:57 Orthanc kernel: br-d45ab5905980: port 10(veth98599ed) entered disabled state
    Oct 24 02:07:57 Orthanc kernel: device veth98599ed left promiscuous mode
    Oct 24 02:07:57 Orthanc kernel: br-d45ab5905980: port 10(veth98599ed) entered disabled state
    Oct 24 02:07:57 Orthanc Docker Auto Update: Stopping lidarr
    Oct 24 02:08:01 Orthanc kernel: veth3683ba0: renamed from eth0
    Oct 24 02:08:01 Orthanc kernel: br-d45ab5905980: port 5(vethd40c869) entered disabled state


    Everything works fine (as far as I can tell), but I'm worried about seeing a segfault in the logs. Does anybody know what could be happening?



    Attached diagnostics


  11. What MariaDB container?


    The problem is that there are two shares.


    One where you store the config, which should be on cache.


    The other where you store the data, which obviously is in the array.


    This was set up in a way that owncloud.db ends up on the share that you need to store in the array.

  12. Hello,

    I'm upgrading a server I have at home and I'm looking at Epyc & Threadripper CPUs.

    My needs are:

    1 VM as main development server which also acts as CI/CD running tests

    Multiple VMs for testing

    VPN & firewall

    Media center (automation with Plex, Sonarr, Radarr...etc)

    I currently have approx. 80 TB and I'm planning to keep increasing this storage a lot, so I need plenty of room for storage expansion.

    I also need to have some NVMe drives in RAID 0 / 10 for some very intensive IO tasks.

    I do NOT need GPU power.

    Looking at the new Epyc 7002 series, they actually seem cheaper than current Threadrippers; the only thing holding me back in that regard is that, at least here in Europe, Epyc motherboards are pretty hard to get.

    Also, is there any reason to choose Threadripper over Epyc in my use case? High frequency is not needed for my use case at all, so I think it would be a waste of power.

    In any case, which boards would you recommend? Does anybody have a similar setup?


  13. Hi.

    I'm trying to replicate a ratio-building setup on my Unraid box, but I have some questions.


    Right now I have two separate Docker containers, one with Deluge and the other with rTorrent + ruTorrent.


    Deluge would be the racing client and ruTorrent the long term one.


    Right now I have ruTorrent with autodl-irssi, with some rules auto-downloading .torrent files to a Deluge watch folder.

    I would like to do 2 things:


    • I would like to push the torrents directly to Deluge to avoid delays and then check the announce status with a script. The problem here: the rTorrent container doesn't have deluge-console, so I don't know how to handle that.
    • I would like to move finished torrents from Deluge to rTorrent for long-term seeding once they complete. The problem is similar to the one above: neither container has the other client's CLI commands available.

    I was thinking of creating an Ubuntu VM to install both, but running inside a VM might have a performance impact.

    Does anybody here have a similar setup and can offer some help?
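One idea I'm considering, sketched under the assumption that both clients run as containers on the same Unraid host (container names, paths, and watch-folder layout are all hypothetical): drive each client's own CLI from the host via docker exec, instead of from inside the other container, so neither image needs the other's tools.

```shell
# All names and paths below are hypothetical -- adjust to your containers.

# 1) Push a freshly announced .torrent straight into Deluge (racing client)
#    by calling deluge-console inside the Deluge container from the host:
docker exec deluge deluge-console "add /downloads/incoming/release.torrent"

# 2) Once Deluge reports the torrent complete, hand it to rTorrent for
#    long-term seeding: copy the payload into rTorrent's download path and
#    drop the .torrent into its watch folder (assumes a watch dir is set up):
cp -r /mnt/user/downloads/deluge/complete/release /mnt/user/downloads/rtorrent/
cp /mnt/user/downloads/incoming/release.torrent /mnt/user/appdata/rtorrent/rutorrent/watch/
```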

  14. Hey.


    So my system went totally unresponsive (no GUI, no SSH, no shares, etc.) twice this week and I can't understand why.


    My system is using a brand new Gigabyte Aorus B450 Pro with an R5 2600, if that helps.


    I had a previous uptime of 15 days. I'm using an UPS also.


    This sucks because I had to hard restart my system, so my data could be damaged.


    Any help?


  15. Hi.


    I have my config in /appdata/nextcloud, which is on the cache, and my data on a different share (/nextclouddata), which is on the drive array.


    I'm having a problem where Nextcloud periodically writes to `/nextclouddata/nextcloud.db`, which spins up my drives and parity.


    Shouldn't that file be in /appdata/nextcloud, with all the other config-related files?
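If I understand the SQLite setup right (worth double-checking), the database file follows the `datadirectory` setting in config.php rather than living next to the config, which would explain why it ends up on the array share. A fragment like this is what ties the two together (paths here match my shares, values are examples):

```php
// config/config.php (fragment)
'dbtype' => 'sqlite3',
'datadirectory' => '/nextclouddata', // nextcloud.db is created inside this dir
```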