Everything posted by plantsandbinary

  1. Thanks for replying at such a time. So firstly, happy new year! I read that, so my local router setting is pointless and I'll disable it. My question then: am I to assume that the rtorrent.rc port setting is also completely pointless (at least for me, running PIA)? And should I ignore the "open port" plugin in ruTorrent that shows the port as closed? If that's the case, please let me know and I'll leave the rtorrent.rc port set to whatever, since it makes no difference. I've checked on canyouseeme.org and yougetsignal.com and they both show the port as open. Download speed is still pretty finicky, but that could just be a coincidence. However, both sites report the port as a TCP port, even though my .ovpn config is using UDP only. Why is that?
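(For reference, a rough way to check this from the host is to list what is actually listening inside the container. The container name binhex-rtorrentvpn is an assumption here, and ss may need swapping for netstat depending on what the image ships. The .ovpn's proto udp only describes the tunnel to the VPN server; the forwarded incoming port is a separate thing, and canyouseeme.org / yougetsignal.com only ever test it over TCP, which is why they report TCP.)

    # assumed container name; adjust to whatever the rTorrent container is called
    docker exec -it binhex-rtorrentvpn ss -tuln
    # the assigned incoming port should appear in this listing of listening sockets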
  2. I just swapped to PIA. What's necessary to enable port forwarding on my ruTorrent container? Currently I'm just using a generated .ovpn file from their site, which I can create from my account page. I really need port forwarding working, but the way it is set up at the moment it seems you're only able to get a random port from within their Windows client... So how do we do this using the container? @binhex

EDIT: OK, so I've read your FAQ on GitHub and the last 10 pages of posts. So far I get this in my log:

2020-12-31 21:52:38,738 DEBG 'start-script' stdout output: [info] Script started to assign incoming port
2020-12-31 21:52:38,738 DEBG 'start-script' stdout output: [info] Port forwarding is enabled

I'm using a brand new PIA account. The Windows client is set to WireGuard and Request Port Forwarding is set to YES (on the Windows client). The PIA username and password are set in the container options, STRICT_PORT_FORWARDING=yes, and I am also using an endpoint that allows port forwarding. I've got an .ovpn file in the openvpn folder that I generated from my PIA account page using the GCM cipher etc., as you said it may provide better performance. Here's a copy of it with the cert and country/IP etc. removed:

client
dev tun
proto udp
remote xxxxxx.privacy.network 1198
resolv-retry infinite
nobind
persist-key
cipher aes-128-gcm
ncp-disable
tls-client
remote-cert-tls server
auth-user-pass credentials.conf
compress
verb 1
<crl-verify>
-----BEGIN X509 CRL-----
xxxxxxxxxxxxx
-----END X509 CRL-----
</crl-verify>
<ca>
-----BEGIN CERTIFICATE-----
xxxxxxxxxxxxx
-----END CERTIFICATE-----
</ca>
disable-occ

My main question is: how do I tell what port is being forwarded by the container? Also, will this port always be the same? Finally, and maybe most importantly, what do I do with the rtorrent.rc file, which states what port I am trying to use? If the forwarded port is chosen randomly by your script, Binhex, how do I know what port to set the container to use in rtorrent.rc / ruTorrent Settings -> Connections tab? When I was using Mullvad, the port I chose to open via WireGuard was permanent; PIA seems to do this totally differently. Anyway, what port should I set in the rtorrent.rc file and then forward in my router? I would also rather use WireGuard for better performance, even though my server isn't exactly weak or anything; if that's possible, I'd love some info on doing that, though it seems PIA doesn't have a WireGuard config generator or anything. Thanks.

EDIT 2: I just saw this in the log:

2020-12-31 22:03:48,536 DEBG 'start-script' stdout output: [info] Successfully assigned and bound incoming port '51083'
2020-12-31 22:03:48,850 DEBG 'watchdog-script' stdout output: [info] rTorrent listening interface IP 0.0.0.0 and VPN provider IP 10.x.xxx.95 different, marking for reconfigure
2020-12-31 22:03:48,857 DEBG 'watchdog-script' stdout output: [info] irssi not running
2020-12-31 22:03:48,864 DEBG 'watchdog-script' stdout output: [info] rTorrent not running
2020-12-31 22:03:48,865 DEBG 'watchdog-script' stdout output: [info] rTorrent incoming port 49160 and VPN incoming port 51083 different, marking for reconfigure

So this time it bound 51083 as the incoming port, and last time it seems to have been 48225. In the ruTorrent settings I have Settings -> Connection -> Port set to 55180-55180.

I don't understand what the script is trying to do: how does it handle the rtorrent.rc port setting, and how can I anticipate what port is going to be assigned on a container restart, so I know what to forward in my router and am not always having to change that in my router settings? Is there at least some logic or port range I can anticipate so I don't always have to change things in my router (I have an ASUS AX88U)? Could you please make this a bit clearer in your GitHub FAQ.
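(A rough sketch for pulling the assigned port out of the container log after each (re)start, based on the "Successfully assigned and bound incoming port" line quoted above; the container name binhex-rtorrentvpn is an assumption:)

    docker logs binhex-rtorrentvpn 2>&1 \
      | grep 'Successfully assigned and bound incoming port' \
      | tail -n 1   # most recent assignment, e.g. port '51083' above

Going by the watchdog line quoted above, which marks rTorrent for reconfigure when its incoming port differs from the VPN incoming port, the script appears to rewrite the rTorrent port to whatever PIA assigned; and since incoming connections arrive through the VPN tunnel rather than the home connection, nothing should need forwarding on the router itself.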
  3. Update: I was able to delete them after stopping Docker.
  4. I'm trying to use rm -r or rm -rf to delete some bugged folders made by ruTorrent in /incomplete. When I try, I get this:

rm: cannot remove 'incomplete/Sokpop.S08.Popos.Tower-DARKZER0': Directory not empty
rm: cannot remove 'incomplete/Default.Folder.X.5.1.5.Dmg': Directory not empty
rm: cannot remove 'incomplete/Micromat_ATOMIC_v1.0.2[MacOS]': Directory not empty
rm: cannot remove 'incomplete/Aretha Franklin - Runnin'\'' Out Of Fools (1964) (2011) Columbia, HDTracks 24-96 [FLAC24]': Directory not empty
rm: cannot remove 'incomplete/Mr.Kitty - D E Δ T H (2011)': Directory not empty
rm: cannot remove 'incomplete/Ishuzoku Reviewers/NC': Directory not empty
rm: cannot remove 'incomplete/Akira.1988.UHD.BluRay.2160p.TrueHD.5.1.HEVC.REMUX-FraMeSToR': Directory not empty

Which doesn't make sense, as I am using a command that should force the deletion of any directories and sub-directories, including their contents. I've never had a problem using either of these commands before. How do I remove these folders and the files in them? It seems to be an issue with files like .fuse_hidden003c7f56000064a7, where each file is called .fuse_hidden<some_hash>.
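(.fuse_hidden files are what a FUSE filesystem such as /mnt/user leaves behind when a file is deleted while a process still has it open; they disappear once that process lets go, which matches the fix in the update above of stopping Docker. A rough sketch for finding the culprit, assuming lsof is available on the host:)

    # list the leftover FUSE placeholder files under the problem directory
    find /incomplete -name '.fuse_hidden*'
    # show which process still has them open; stopping that process (here, the
    # ruTorrent container) releases the files, after which rm -rf succeeds
    lsof +D /incomplete 2>/dev/null | grep fuse_hidden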
  5. Alright thanks mate, I really appreciate the support. This can be closed then. Looks to be all working now.
  6. I was just about to say: turning off just ruTorrent makes the mover run immediately. Files are copying over at 350MB/s to parity and one of my disks... so it looks like that's working now. Are the other settings regarding the cache okay?
  7. That's probably it. So the mover won't move files if they are seeding? In that case I'll shut down ruTorrent before invoking the mover and see if that works. Does everything else in the screenshots above look okay, though? appdata set to cache: only and /tank set to cache: yes? What about the rest? I can handle shutting down ruTorrent if the cache fills up and I need to run the mover manually after downloading several large files, but if something else breaks like last time I'm going to spit the dummy.
  8. I ran it 5 more times and it stopped immediately every time, so this diagnostics file absolutely has to have some explanation in it. I don't know where to look myself. tower-diagnostics-20201121-1258.zip
  9. I hope this has the right information; it should be at the bottom of the log. I rebooted and ran the mover. It ran for a while, moved about 125GB and then stopped halfway. I'm assuming that's because I didn't turn off Docker and one of my docker containers spun up and made it stop. I tried running it again (this time with logging enabled) and it stopped immediately again. I hope this diagnostics file shows why. tower-diagnostics-20201121-1255.zip
  10. Mover still hasn't moved anything manually or automatically. It's strangling my server because my cache is totally full. Can someone please tell me why it's not working? EDIT: Seems to be something to do with Docker I think. Restarting and immediately running the mover looks to be okay.
  11. Well, I had another stab at this and at fixing my configuration. I'm genuinely starting to question how my device is meant to be set up to avoid this problem happening in the future. I rebuilt my torrents and set up ruTorrent again after importing everything. It seems to be okay now, but all of this has me wondering if it's set up the right way. The mover is still not moving files to the array when I try to run it. Here are the setup pics and diagnostics (attachments).

SPECS:
Model: HP MicroServer Gen8
M/B: Version - s/n: ??????????
BIOS: HP Version J06. Dated: 04/04/2019 (latest)
CPU: Intel® Xeon® CPU E31265L @ 2.40GHz
HVM: Enabled
IOMMU: Enabled
Cache: 128 KiB, 1024 KiB, 8192 KiB
Memory: 16 GiB DDR3 Single-bit ECC (max. installable capacity 16 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500
eth0: 1000 Mbps, full duplex, mtu 1500
eth1: 1000 Mbps, full duplex, mtu 1500
Kernel: Linux 4.19.107-Unraid x86_64
OpenSSL: 1.1.1d
Uptime: 2 days, 18:07:08

In shorthand, as I understand it, for best performance I have it set up like this: appdata for docker containers stays on the cache drive always, so appdata is set to cache -> only; tank (which is where all my files go, spread across the array) is set to cache -> yes so new files are put there first, which is really just for quick downloading via ruTorrent; every other share is set to cache -> no. If that's wrong, or I'm not understanding this right, or I've done something wrong, or there's a better way to do this, someone please tell me so I don't blow up my device in the future. A short explanation of how/what to do would be greatly appreciated.

With this setup, the mover refuses to do anything when I try to invoke it manually. It just moves when it feels like it from what I can see; there's no other way for me to describe when/how it actually decides to move files from the cache to the array. Thanks for the help, and sorry for losing my head before, but it was one hell of a crippling loss... tower-diagnostics-20201120-1754.zip
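(One way to see what the mover is actually doing, and what blocks it, is to watch the syslog and check what is still held open on the cache side of the share. A rough sketch, assuming mover logging is enabled in the scheduler settings and that the tank share's cache copy lives under /mnt/cache/tank:)

    # recent mover activity in the syslog (needs mover logging enabled)
    grep -i mover /var/log/syslog | tail -n 50
    # what the tank share still has sitting on the cache pool
    ls -lah /mnt/cache/tank
    # open files under the cache copy of the share; the mover skips files that a
    # process (e.g. a torrent that is still seeding) has open
    lsof +D /mnt/cache/tank 2>/dev/null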
  12. Well, I'm not sure what just happened, but I did what I wrote above and the mover finished, with only 30GB left on the cache. I turned Docker back on, started ruTorrent, and every single one of my torrents immediately started redownloading and overwrote the files I had already fully downloaded... I had downloaded all of those files over a 9-12 month period, but it's now downloading over 3.9TB of data all over again... this is a total fucking catastrophe... and now I have to go through each one and figure out what I want to re-download... I cannot even stress what a catastrophe this is. I lost over 1TB of music... half of those torrents don't even exist any more and I was the only seeder. Now that I don't have the full files, they aren't downloading. I have no idea where I will get some of those albums again; they took months to find. Maybe the biggest irony: my fucking cache drive is 95% full again in an instant thanks to ruTorrent allocating space for all the files. Why the hell couldn't the mover just move the damn files off the cache before trying to move appdata back on? Honestly, how tf is that not a simple thing to do. I would never have had to change anything if it had just skipped appdata for a moment and moved the stupid movies onto the fucking array first! I am so pissed. I have to turn this PoS off and think about when I feel like I have the patience to sort this mess out. But I bet I'm going to wind up with a dozen warnings on my private trackers complaining that I've done partial downloads and am not seeding back.
  13. I've set appdata to "only". I want the appdata on the cache drive always; I don't want it on my RAID array. So I guess that's the correct setting? Thanks, I did this, though I didn't have domains on the cache drive. I just checked: I only had some files destined for the array (the new ones downloaded via ruTorrent, waiting to be moved) and appdata. I turned off Docker via the Settings tab and ran the mover with appdata set to "only" and /tank set to "yes". I don't really understand the difference between "yes" and "prefer" though.

I have the mover run every hour (or at least I try to get it to) because I have a gigabit line with 100MB/s download and 50MB/s upload, and I cycle through a lot of Blu-ray rips up to 80GB in size. So if I am downloading multiple files at once I need the mover to run ASAP to move files from the cache to the array, so my docker containers don't crash before the mover clears space on my cache drive. Setting it to every day wasn't enough, because the cache will fill up instantly and then nothing moves for the rest of the day. As I understood the scheduler, the hourly run only means it will "check" whether files need to be moved, but it can still decide not to if the cache isn't very full; whereas if I set it to run every 12 hours, it'll check and decide then, but if the cache isn't full at that time it won't run, meaning nothing will happen for the next 12/24 hours even if the drive fills up quickly. As I understood it, there isn't any kind of downside to running the mover that often.

It looks like it's set to 2000000. It would be nice to know if that's bytes, kilobytes, etc. I'm guessing kilobytes: seeing as my cache drive won't fill past 1.6GB and 2000000 kilobytes is about 2GB in base 10, I guess that's accurate. I'll add an extra 0 just in case and hopefully that will stop my docker containers crashing when the cache gets really full.
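(A quick sanity check on the kilobytes guess above; the units are the post's guess, not verified against a specific Unraid version:)

    # 2000000 read as decimal kilobytes
    echo $((2000000 * 1000))   # 2000000000 bytes = 2 GB
    # 2000000 read as kibibytes (KiB)
    echo $((2000000 * 1024))   # 2048000000 bytes ≈ 1.9 GiB
    # adding an extra 0, as suggested, gives roughly 20 GB of headroom either way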
  14. It seems to be working now, but I had to restart the server and then press "move" for it to actually do anything. No idea why. I had even turned off Docker before restarting so there really shouldn't have been any activity whatsoever. Either way, it seems to be okay now but I do wish it would just move things off the cache first regardless of whether or not there are tasks to put things on the cache too.
  15. Thanks for the continued assistance. Can I leave the appdata share on cache=only, or do I need to move it back later? I thought I had set it to cache-only just to make containers perform better anyway. Setting it to "no" or "only" still doesn't seem to make the mover do anything; it just stops immediately. /tank is now the only share using the cache, btw, and it's set to "yes". It would be nice if, in an update, the mover skipped moving items to the cache when it's full and instead focused on moving items off, instead of outright stopping like it does here.
  16. I'm trying to move the files from the cache drive to /tank, which is made up of the other 3 drives you see above. It's a share called /tank, and the mover should be moving files downloaded to the cache off to the main drives every hour. For some reason, though, it only does this when it feels like it (at least that's how it seems), despite my hourly schedule. I've attached some files. As you can see from the last screenshot, the cache drive has filled up. All of my docker containers have now crashed, and when I try to invoke the mover manually it says it's running, but a refresh shows it stopped immediately and is doing nothing. tower-diagnostics-20201117-1755.zip
  17. Just what it says: the cache is pretty full. The mover seems to run fine after a few hours (I can't say it runs on the 1-hour schedule I set up, because it seems to take longer than an hour to actually do anything). Maybe I have set it up wrong? Trying to run it manually now: the button press says the mover is running, but after refreshing the page it shows it's not running.
  18. Can we please get an option to auto-update docker apps after a few days (say, 7), like we have for plugins? I was surprised not to find that option.
  19. My entire rTorrent has stopped working for some reason whilst downloading a large 1.6TB torrent. My first drive filled up, and most of my docker containers crash when that happens, so I turned off all containers and moved files to the next drive. I figured there would be no problems, so I restarted the container. Now I get this in the log:

2020-10-30 06:23:05,894 DEBG 'rutorrent-script' stderr output: [NOTICE] [pool www] 'user' directive is ignored when FPM is not running as root
[NOTICE] [pool www] 'group' directive is ignored when FPM is not running as root
2020-10-30 06:23:05,903 DEBG 'rutorrent-script' stdout output: [info] starting nginx...
2020-10-30 06:23:16,542 DEBG 'watchdog-script' stdout output: [warn] Wait for rTorrent process to start aborted, too many retries
2020-10-30 06:23:16,543 DEBG 'watchdog-script' stdout output: [warn] Failed to start rTorrent, skipping initialisation of ruTorrent Plugins...
2020-10-30 06:33:23,916 DEBG 'watchdog-script' stdout output: [info] rTorrent listening interface IP 0.0.0.0 and VPN provider IP 10.14.0.3 different, marking for reconfigure
2020-10-30 06:33:24,772 DEBG 'watchdog-script' stdout output: 0
2020-10-30 06:33:25,099 DEBG 'watchdog-script' stderr output: INFO: Bad data packets written to '/tmp/xmlrpc2scgi-99.xml'
2020-10-30 06:33:25,100 DEBG 'watchdog-script' stdout output: ERROR While calling network.local_address.set('', '216.239.32.10\n216.239.34.10\n216.239.36.10\n216.239.38.10'): <Fault -503: 'Could not set local address: Name or service not known.'>

I've even rebooted my server. It had been running for 415 days without issues, and I must have updated this container maybe 3 or 4 times during that time, because I like to update only after a good period of time.
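(That last fault reads like a name-resolution problem inside the container: the value handed to network.local_address.set is a newline-separated list of addresses rather than a single one, and the fault text is 'Name or service not known'. A rough sketch for poking at it from the host; the container name binhex-rtorrentvpn and the example.remote.host placeholder are assumptions:)

    # can the container resolve hostnames at all?
    docker exec -it binhex-rtorrentvpn cat /etc/resolv.conf
    # substitute the remote host from the .ovpn for the placeholder below
    docker exec -it binhex-rtorrentvpn getent hosts example.remote.host
    # recent warnings/errors from the startup sequence
    docker logs binhex-rtorrentvpn 2>&1 | grep -iE 'warn|error' | tail -n 20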
  20. Just ran this. It doesn't work. 4 hours wasted and it didn't move a single file. God knows what it did except create a lot of read/writes. My first drive is still completely full though...
  21. Yes thanks it seems to work fine and persist across reboots. Thanks a ton for the super fast reply btw. If something breaks in the future I'll go back to the default resolution but for now it seems very stable.
  22. I think I managed to figure it out. Just rebooting the container once after installing it fixed it. I also briefly changed from 1920x1080 back to 1280x720 and then back to 1920x1080 because on my huge monitor 720p looks horrid.
  23. Nevermind, rebooting the container once after installing fixed my issue for some reason.
  24. Why am I now getting "Not Available" under Version: when looking at the docker container among the other containers I use? Has this project been deleted or something?