Cull2ArcaHeresy

Members
  • Content Count: 55
  • Joined
  • Last visited

Community Reputation: 3 (Neutral)

About Cull2ArcaHeresy

  • Rank: Advanced Member
  1. When I move a completed torrent from my seedbox to local (move the files, then add the .torrent), rtorrent hash-checks it before seeding. That's great for making sure nothing got corrupted during transfer and all...but it takes forever. Earlier it was checking a 3GB one when I looked (it was at like 20 or 40%); now, 6 hours later, it is at 75% on that same torrent. After it finishes the check I move it to the correct done folder, and it gets added back to the queue to be checked again, but this second check is faster and seems on par with a force recheck (maybe slower, but nowhere near hours per GB). Is this a container thing, like I need to give it more resources (top output below), or some rtorrent setting? It seems most of the hash-checking options have been deprecated. From what I have found, there has never been an option for it to check multiple torrents at once, and the biggest (now deprecated) flag was hash_check_interval (others too, but with smaller impacts).

     top - 02:40:28 up 33 days,  4:33,  0 users,  load average: 15.73, 16.31, 16.44
     Tasks:  76 total,   1 running,  75 sleeping,   0 stopped,   0 zombie
     %Cpu(s): 26.3 us,  1.8 sy,  0.0 ni, 71.4 id,  0.5 wa,  0.0 hi,  0.0 si,  0.0 st
     MiB Mem : 128952.8 total,   4412.8 free,  46173.8 used,  78366.2 buff/cache
     MiB Swap:      0.0 total,      0.0 free,      0.0 used.  79801.2 avail Mem

       PID USER     PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     37385 nobody   20   0 3472912   1.1g 140908 S   0.3   0.9 113:22.49 rtorrent main
         1 root     20   0    2340    284    204 S   0.0   0.0   0:59.15 tini
         6 root     20   0   30816  17412   1612 S   0.0   0.0   9:41.05 supervisord
       368 nobody   20   0    7436   2372   1740 S   0.0   0.0   0:55.70 logrotate.sh
       370 nobody   20   0    7436    672      4 S   0.0   0.0   0:00.35 rutorrent.sh
       371 root     20   0    7728   2720   1780 S   0.0   0.0   0:00.47 start.sh
       372 nobody   20   0    7664   2772   1968 S   0.0   0.0   6:35.67 watchdog.sh
      1039 nobody   20   0   80496  16500   5948 S   0.0   0.0   0:31.58 php-fpm
      1040 nobody   20   0   79060  17432   8852 S   0.0   0.0   0:31.74 php-fpm
      1041 nobody   20   0   77012  15364   8780 S   0.0   0.0   0:32.12 php-fpm
      4214 root     20   0    7728   2120   1280 S   0.0   0.0   0:37.05 start.sh
      5212 nobody   20   0   76068   7852   1556 S   0.0   0.0   1:37.87 php-fpm
      5216 nobody   20   0   10348   1592    260 S   0.0   0.0   0:00.05 nginx
      5217 nobody   20   0   11156   3400   1292 S   0.0   0.0   5:33.05 nginx
     ...bunch more nginx

     Update: 3 hours later that torrent finished checking ("posted 3 hours ago", so it might be closer to 4). Going from "Checking 100%" to seeding took about 40 minutes.
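     One workaround I've seen mentioned, if the goal is to skip the initial check entirely for data you already trust, is injecting fast-resume data into the .torrent before adding it, e.g. with the rtorrent_fast_resume.pl script from rtorrent's contrib files. A minimal sketch; the paths are hypothetical, and it's only safe if the transfer definitely wasn't corrupted:

         # Reads a .torrent on stdin, takes the data directory as an argument,
         # and writes a torrent with fast-resume data on stdout (paths hypothetical).
         rtorrent_fast_resume.pl /downloads/done \
             < mytorrent.torrent \
             > mytorrent-resume.torrent
         # Drop the *-resume.torrent into the watch folder; rtorrent should start
         # seeding without re-hashing.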
  2. What next-gen servers are y'all having luck with?
  3. Yeah, that was just the best wrong-premises/logical-conclusion chain I could figure out (the edited post should be clearer).
  4. No luck finding where I saw it; my best guess is that I misread/misremembered [the following wrong premises/logical conclusion]: 1) next-gen uses (only) WireGuard, 2) WireGuard is prone to hacks (instead of requiring hacks to get it working), thus 3) next-gen is prone to hacks.
  5. Wasn't there a major vulnerability in something that the next-gen servers were implementing?
  6. But sadly "Connect to OpenVPN: We are still working on this script.", which makes me assume the back end isn't ready yet either.
  7. Guessing it has to do with max-speed tweaks, which I will look into after getting constant, consistent connections and activity. EDIT: now that I'm finally creating a combined file, the default ones I have start with the following, so maybe it's always been UDP by default (I thought the defaults were TCP):

     client
     dev tun
     proto udp
     remote LOCATION.privateinternetaccess.com 1198
  8. Was that built off the default PIA files? Looking at mine, the GCM and auth settings are different (I only compared quickly, so there are probably more differences).
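     For a quick comparison, something like this shows only the lines that differ (filenames are hypothetical):

         # side-by-side diff, hiding everything the two files share
         diff --side-by-side --suppress-common-lines default_us_east.ovpn custom.ovpn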
  9. While changing PIA VPN files to reach different endpoints during whatever is going on, what is the best way to measure "quality" for each one? There are different tools to measure container stats, but connecting directly to rtorrent could also give data. I have enough seeding torrents that upload should constantly be at the limit (when the connection is good); I have fewer downloads, but that would be easy to change.

     I'm trying to figure out the best way to keep track of upload and download for each VPN file. Early on this would be raw totals for each day, later probably more of a line graph, with the raw numbers scaled by how many hours that VPN file was used that day (other options too, like an hourly average, but that is a later thing).

     So say your connection is bad right now: you stop the container and run this script, which gives you a list of VPN files and their daily quality history (from your runs) and asks you to select which one you want to copy in for the container to use. Then you start the container again and it connects using the chosen file. While the container is running, the script logs the network usage (rough sketch below).
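     A minimal sketch of the logging half, using docker's built-in stats; the container name, paths, and CSV layout are hypothetical, and the NET I/O counters reset whenever the container restarts, so each run would be its own series:

         #!/bin/bash
         # Sample the container's cumulative network totals every 5 minutes,
         # tagged with whichever .ovpn file is currently copied in.
         CONTAINER=rtorrentvpn                                # hypothetical name
         OVPN_DIR=/mnt/user/appdata/rtorrentvpn/openvpn       # hypothetical path
         LOG=/mnt/user/appdata/rtorrentvpn/vpn-quality.csv

         while true; do
             # docker stats reports NET I/O as "rx / tx", e.g. "1.2GB / 3.4GB"
             net=$(docker stats --no-stream --format '{{.NetIO}}' "$CONTAINER")
             vpn=$(basename "$(ls "$OVPN_DIR"/*.ovpn 2>/dev/null | head -n 1)")
             echo "$(date -Iseconds),${vpn},${net}" >> "$LOG"
             sleep 300
         done

     The selection half would then just aggregate that CSV per file per day and prompt for which one to copy in before starting the container again.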
  10. Compared to a "graceful stop", would this cause any issues like rtorrent corrupting a file, or leakage (or something else)? Also, rather than a hard threshold, a count that resets each hour might be much better, so a series of reconnects spread over time wouldn't cause a restart.
  11. I had assumed that it more or less did something like: 1) establish the VPN connection, 2) lock down connections (including DNS), 3) start rtorrent/ruTorrent, 4) monitor the connection and, if need be, kill *torrent and go back to step 1. Knowing the DNS issue (and yeah, Docker env variables are a pain to change), it is a much more closed, locked-down system.

     Can a container restart itself? The rough idea is that it counts the number of failures (connection or PIA port forward) and at some arbitrary threshold it restarts (or kills itself, relying on the restart flag in the docker command). It could always create a file when that threshold is met, and a script running on unRAID sees the file, deletes it, and restarts the container (sketch below)...but I don't know if it is a big enough problem to do that.

     Just to be clear: all PIA with the same creds, or all other with the same creds. I'm not against mixing them, but keeping it the same for simplicity.
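     A minimal sketch of the flag-file idea, split across the two sides; the container name, paths, and threshold are all hypothetical:

         # --- inside the container, e.g. called from the watchdog loop ---
         THRESHOLD=5          # arbitrary failure count
         count=0
         hour=$(date +%H)
         record_failure() {
             # reset the count each hour so reconnects spread over time
             # don't eventually trigger a restart
             if [ "$(date +%H)" != "$hour" ]; then
                 count=0
                 hour=$(date +%H)
             fi
             count=$((count + 1))
             if [ "$count" -ge "$THRESHOLD" ]; then
                 touch /config/restart.flag       # hypothetical shared path
             fi
         }

         # --- on the unRAID host, watching the same file via the volume mapping ---
         FLAG=/mnt/user/appdata/rtorrentvpn/restart.flag
         while true; do
             if [ -f "$FLAG" ]; then
                 rm -f "$FLAG"
                 docker restart rtorrentvpn       # hypothetical container name
             fi
             sleep 60
         done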
  12. This method requires restarting the container; is there any reason not to have the line where an OpenVPN connection is established pick a random *.ovpn file (sketch below)? I've been meaning to try it, just haven't yet. That way, whenever it resets the connection, it will try a new one (or randomly pick the same one again).
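     A minimal sketch of what I mean, assuming the configs live in /config/openvpn (the path is hypothetical):

         # pick a random .ovpn each time the connection is (re)established
         VPN_CONFIG=$(find /config/openvpn -maxdepth 1 -name '*.ovpn' | shuf -n 1)
         echo "using ${VPN_CONFIG}"
         openvpn --config "${VPN_CONFIG}"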
  13. Is it the only officially supported one left? I deleted my post since you had a factually correct answer; it just didn't pop up until after I had made my comment.