RallyAK

Everything posted by RallyAK

  1. Another vote for... ratiocolor, tracklabels. Would really like to see filemanager implemented at some point as well. I tried installing it myself but gave up after a day or two of messing with it; too many moving pieces... https://github.com/nelu/rutorrent-thirdparty-plugins/tree/stable/filemanager
  2. Thanks for the feedback; to save power I'd rather not seed from the array. I'd like to go the UD route, but I'm having trouble mapping a new path to one of the 2TB drives. In the docker settings I've created a second host container path called "/data2" mapped to a 2TB UD drive, but after a restart ruTorrent doesn't recognize the new path. Is it even possible to map this docker to multiple /data paths or drives (e.g. cache + UD)? Another option is running DelugeVPN and mapping it to a UD drive, but I'd prefer to stick with rTorrent since that's what I'm familiar with, and migrating hundreds of active torrents from rTorrent to Deluge appears to be a tedious process.
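For reference, what I've set up in the template is the equivalent of adding a second -v flag, roughly like this (a sketch; "UD_2TB" is a placeholder for whatever mount point Unassigned Devices actually assigns the drive):

```shell
# existing /data mapping plus a second path pointed at the UD drive;
# host paths here are examples -- substitute your actual appdata and UD mounts
docker run -d \
  --name=binhex-rtorrentvpn \
  -v /mnt/cache/appdata/data:/data \
  -v /mnt/disks/UD_2TB:/data2:rw,slave \
  binhex/arch-rtorrentvpn    # remaining ports/env vars omitted for brevity
```

From what I've read, paths under /mnt/disks need the "RW/Slave" access mode in the unRAID template (the `:rw,slave` above), otherwise the container can lose the mount; not sure yet whether that's my issue here.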
  3. I'm almost out of space on my cache pool (2x4TB in RAID1), 95% of which is being used for long-term seeding. I have 2x2TB drives at my disposal and I'd like to use them to expand capacity for rTorrent, but I'm not sure how to go about that. From what I've read in the FAQs, it doesn't sound like it's possible to expand the cache pool with smaller drives, so I'm wondering if it's possible to add the 2TBs to my current array, create an additional /data path to each, and use them for rTorrent exclusively?
  4. Local WebUI access broke again after updating to 6.3.5 and rebooting. Ran the iptable_mangle command and everything's back up again. Not a big deal, but I wanted to report back here.
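For anyone else hitting this after a reboot: since the module doesn't survive a restart, the modprobe can be added to the go file so it loads at boot (this assumes the stock unRAID flash path; run both on the unRAID host, not inside the container):

```shell
# load the module now (restores WebUI access until the next reboot)
/sbin/modprobe iptable_mangle

# persist it across reboots: unRAID runs /boot/config/go at every boot
echo "/sbin/modprobe iptable_mangle" >> /boot/config/go
```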
  5. I was having trouble accessing the WebUI inside my LAN (10.x.x.x) and outside my LAN, on all ports. The only way I could access the WebUI was by disabling the VPN. Since running the '/sbin/modprobe iptable_mangle' command I haven't had any access issues, inside or outside my network.
  6. Thanks binhex! I ran '/sbin/modprobe iptable_mangle', restarted the docker, and I'm able to access the WebUI again. Hallelujah!!! \o/ I saw this command reading through your FAQ but didn't think it applied, because I was having issues locally, not outside my LAN as specified in the Q&A. Apologies for the comment about holding off on updating unRAID; that was merely a warning to others that the update may have been the culprit. Having ruTorrent running for a week with hundreds of torrents seeding and no visibility is a little stressful. Thanks again for your help, I appreciate everything you do and hope the beer $$$ I sent a few weeks back went to good use.
  7. That's what I thought, thanks for taking a look! Do you have any thoughts on why I can't access the WebUI when connected to the VPN? I've confirmed that nothing on my network is using ports 9080, 9443, 3000, 5000 or 8118, so there shouldn't be any conflicts. Should I try changing the default http and https ports to something else, like 9081 and 9444? Of the 5-6 dockers I'm running, this is the only one that broke after updating to 6.3.4; I'm able to access the WebUIs on all the others via my host IP and default ports. If anyone else is considering upgrading to 6.3.4, you might want to hold off until this is resolved. (See below...)
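If I do try 9081/9444, my understanding is that only the host side of the docker port mapping needs to change, something like this (a sketch of just the relevant flags; everything else left out):

```shell
# remap host ports onto the container's unchanged internal ports (host:container);
# the services inside the container still listen on 9080/9443, so nothing
# needs to change in the rTorrent/nginx config
docker run -d \
  -p 9081:9080 \
  -p 9444:9443 \
  binhex/arch-rtorrentvpn    # remaining flags/env vars omitted for brevity
```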
  8. Hi @binhex I switched to an AirVPN account to see if that made a difference with my access issue, but unfortunately it didn't. Here's a new log with DEBUG turned on; take a look when you have a second and let me know what you think. On a side note, have you considered supporting Mullvad's VPN service in future builds? I set up a trial account with them today and tried logging in, but the docker froze at the following line in their ovpn. I tried removing it, but that didn't make a difference.

# Daemonize service mullvadopenvpn

Attached: supervisord-2.txt, mullvad_windows.conf.ovpn
  9. I'm having a similar issue; after one of the updates a couple weeks ago my listening port went down and I started having peering issues with a number of users. I've tried changing ports in the docker, the config, and my router, but can't get the port to stay open. Not sure what to try next; have you had any luck resolving the issue on your end?
  10. After upgrading to the latest unRAID OS (6.3.4) I'm no longer able to log in to the ruTorrent web GUI. According to the log, the docker appears to be starting fine, but I'm getting timeout errors when logging in to...

http://<host ip>:9080/
https://<host ip>:9443/

Flood and Privoxy access is also broken.

http://<host ip>:3000/
http://<host ip>:8118

I've restarted the docker and the server several times, and have tried using different VPN servers from my primary provider and AirVPN, but I'm still having the same issue. Here's a portion of my log; IPs and credentials have been edited. Let me know if you need anything else to help troubleshoot. EDIT: I'm able to access the GUI if I shut off the VPN, but I prefer not to run that way for very long, for obvious reasons. :| Anyone else having the same issue? supervisord.txt
  11. Hi binhex, great work on this docker! It took a couple days of fiddling, and some minor hair pulling, but it's working great! Couple of questions: would it be possible for you to include the filemanager and autodl-irssi plugins in your next commit? ...How much beer are we talking? I've tried installing them myself but haven't had any luck. The install process is a bit more involved than copying plugins to the folder and expecting them to work, which has surprisingly worked for a couple of others. The most important for me is filemanager. I've tried running ArtyumX's install script (https://github.com/ArtyumX/Filemanager-install-script-for-ruTorrent), but when I get to the last step and run ./filemanager.sh, I get the following...

Please type your ruTorrent path folder: /mnt/user/appdata/binhex-rtorrentvpn/rutorrent
./filemanager.sh.1: line 26: apt-get: command not found
--2017-04-10 21:15:08--  http://www.rarlab.com/rar/rarlinux-x64-5.4.0.tar.gz
Resolving www.rarlab.com (www.rarlab.com)... 5.135.104.98
Connecting to www.rarlab.com (www.rarlab.com)|5.135.104.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1156900 (1.1M) [application/x-gzip]
Saving to: 'rarlinux-x64.tar.gz'
rarlinux-x64.tar.gz 100%[=========================>] 1.10M 408KB/s in 2.8s
2017-04-10 21:15:11 (408 KB/s) - 'rarlinux-x64.tar.gz' saved [1156900/1156900]
rar/ rar/order.htm rar/acknow.txt rar/readme.txt rar/rar_static rar/default.sfx rar/license.txt rar/rarfiles.lst rar/whatsnew.txt rar/makefile rar/rar rar/unrar rar/rar.txt
'rar/rar_static' -> '/usr/local/bin/rar'
svn: error while loading shared libraries: libaprutil-1.so.0: cannot open shared object file: No such file or directory
chown: invalid user: 'www-data:www-data'

I know this isn't your script, but wondering if you might know what the issue is and why it's hanging on install? Here's the full filemanager.sh for reference...
#!/bin/bash
# Link: https://github.com/ArtyumX/Filemanager-install-script-for-ruTorrent
# --------------------------------------------------------------------------------
# "THE BEER-WARE LICENSE" (Revision 42):
# * <[email protected]> wrote this file. As long as you retain this notice you
# * can do whatever you want with this stuff. If we meet some day, and you think
# * this stuff is worth it, you can buy me a beer in return Poul-Henning Kamp
# --------------------------------------------------------------------------------

clear

# Checking if user is root
if [ "$(id -u)" != "0" ]; then
  echo
  echo "Sorry, this script must be run as root." 1>&2
  echo
  exit 1
fi

# Asking for the ruTorrent path folder
read -p "Please type your ruTorrent path folder: " -e -i /var/www/rutorrent rutorrent_path

# Installing dependencies
apt-get install subversion zip
cd /tmp
if [ `getconf LONG_BIT` = "64" ]
then
  wget -O rarlinux-x64.tar.gz http://www.rarlab.com/rar/rarlinux-x64-5.4.0.tar.gz
  tar -xzvf rarlinux-x64.tar.gz
  rm rarlinux-x64.tar.gz
else
  wget -O rarlinux.tar.gz http://www.rarlab.com/rar/rarlinux-5.4.0.tar.gz
  tar -xzvf rarlinux.tar.gz
  rm rarlinux.tar.gz
fi
mv -v rar/rar_static /usr/local/bin/rar
chmod 755 /usr/local/bin/rar

# Installing and configuring filemanager plugin
cd $rutorrent_path/plugins/
svn co https://github.com/nelu/rutorrent-thirdparty-plugins/trunk/filemanager
cat > $rutorrent_path/plugins/filemanager/conf.php << EOF
<?php
\$fm['tempdir'] = '/tmp';  // path were to store temporary data ; must be writable
\$fm['mkdperm'] = 755;     // default permission to set to new created directories

// set with fullpath to binary or leave empty
\$pathToExternals['rar'] = '$(which rar)';
\$pathToExternals['zip'] = '$(which zip)';
\$pathToExternals['unzip'] = '$(which unzip)';
\$pathToExternals['tar'] = '$(which tar)';

// archive mangling, see archiver man page before editing
\$fm['archive']['types'] = array('rar', 'zip', 'tar', 'gzip', 'bzip2');
\$fm['archive']['compress'][0] = range(0, 5);
\$fm['archive']['compress'][1] = array('-0', '-1', '-9');
\$fm['archive']['compress'][2] = \$fm['archive']['compress'][3] = \$fm['archive']['compress'][4] = array(0);
?>
EOF

# Permissions for filemanager
chown -R www-data:www-data $rutorrent_path/plugins/filemanager
chmod -R 775 $rutorrent_path/plugins/filemanager/scripts

# End of the script
clear
echo
echo
echo -e "\033[0;32;148mInstallation done.\033[39m"
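Rereading those errors, the failures look like Debian assumptions baked into the script: apt-get doesn't exist in binhex's Arch-based container, and neither does a www-data user. If I were to adapt just those lines for Arch, I'd guess at something like the following (untested sketch; nobody:users is my assumption for the user/group the container serves ruTorrent under):

```shell
# Arch equivalents of the script's Debian-only steps (not verified)
pacman -Sy --noconfirm subversion zip    # replaces: apt-get install subversion zip

# ...rest of the script unchanged...

# replaces: chown -R www-data:www-data $rutorrent_path/plugins/filemanager
chown -R nobody:users "$rutorrent_path/plugins/filemanager"
```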
  12. Deleted, meant to post in the rTorrent thread.
  13. I'm doing a Norco 4224 build in a few days with (3) 9210-8i's and (12) 4TB drives to start. Does it matter how the drives are balanced across the controllers, since I'm not using all 24 ports to begin with? This will be on an Asus Z9PA-D8 motherboard using PCI-E Gen3 slots 3, 4 and 5 (x16, x8, x16). To aid with cooling, I might install drives 1-4 in the bottom row of the case, skip a row, then drives 5-8, skip another row, then drives 9-12. This would put 4 drives on each controller.
  14. That hadn't occurred to me; I didn't know it was possible, tbh. I'm running an i7, so it's good to know it's possible to run unRAID if I decide to keep it. Thanks! Yep, I get that... So, now that I have a plan to transfer everything over, on to the next questions... Does it matter how the initial 10-12 drives are balanced across the 9210-8i controllers? 3-4 drives on each, or 5-6 on two, leaving the third controller out as a spare for now? Also, does it matter how the drives are dispersed across the front of the server from a heat/ventilation standpoint? Bottom to top, left to right, skip a row? How about using one of the 4TB WD Blacks as a cache drive? Are there disadvantages to using such a large drive for caching?
  15. Nope, no backup solution for the Qnap. I know how bad that sounds, but nothing critical is on there, just media that can all be replaced if something bad happened. All my critical data is on the WD Blacks, which are mirrored. Thanks for the suggestion on swapping the 4TBs for 6TBs in the Qnap; that's a great idea, however I plan to sell the Qnap after everything is transferred over to offset some of the cost of the new server, and 6TB HGSTs are still too expensive IMO. After I wrote the OP, I thought of one more option, which I think I'll go with: purchase (4) 4TB HGSTs and (1) 8TB Seagate external. Build the server using the new 4TBs, without parity to start, transfer as much as possible over to unRAID, and put the rest on the 8TB. That should cover everything I need to get off the Qnap so I can move the (6) 4TBs over. Once all (10) HGSTs are in unRAID, I'll transfer my critical data from the WD Blacks over and either use the Blacks as parity drives, or keep one as a spare and the other as a VM/Docker drive. I might hold on to the 8TB and use it as cold storage for critical data, or return it if I'm not happy with it.
  16. Hiya folks, I've recently hit the wall on my current 6-bay NAS and have decided an unRAID server is the best path forward. My components are still about a week out, but I'm looking for a couple of suggestions on where to start and how best to transfer things over.

Current Config:
Qnap TVS-671
(6) HGST 4TB NAS drives (0S03664) in RAID5
Using 17.6 of 18TB (usable)... only 400GB left, yikes!

I also have (2) 4TB WD Blacks, which are mirrored in a separate docking station and full, and (1 or 2) 2TB WD Greens that I may or may not use in the new server.

New Build:
Norco 4224
Asus Z9PA-D8
Seasonic 1050XP3
(1) E5-2670
(4) 8GB ECC RAM
(3) LSI 9210-8i

My plan is to order (3) shuckable 8TB drives, transfer everything on my NAS over to the new drives, and build the new server using the (6) HGSTs plus (1) of the WD Blacks. Once the server is up and running, transfer everything on the 8TBs and the other 4TB Black over, and either shuck the 8TBs and add them to the new array, or return them in exchange for 3 to 4 more HGST 4TBs.

I've thought about building the server around the 8TB drives, but I'm a little leery about using Seagates. Seagates are the only drives to have failed on me in the past, so I'm not too comfortable having them in my server permanently. I'm also not sure I like the idea of having 3 slower 8TBs in my server, and how that would affect the performance and parity checks of the 4TB array.

I'm looking for recommendations on which route to take... Build the server using the 8TBs and add the 4TBs to the new array after all my data is transferred over? ...or use the 8TBs as temporary storage so I can build the server using my current 4TBs, and continue using 4TB HGSTs for expansion? Another thought would be to buy (4) new HGSTs instead of the 8TBs and build the server around those; however, I'm not sure I'd have enough drive space available to migrate everything off my NAS so I can move my current HGSTs over.

Thanks for any suggestions; I'm really looking forward to this build and getting out of the corner I've built myself into.