Arcaeus

Everything posted by Arcaeus

  1. Hi there, I'm currently running the rclone plugin (not the Docker container) to sync to my Google Drive. I set it up last week (via SpaceInvaderOne's video) and it worked great for 2TB worth of data. I could see and access the cloud drives in Krusader (under disks) with no issues. I restarted my server to install an Nvidia driver update, and when the server came back up, the cloud drives would not mount. I tried to manually run the script to mount the cloud drives based on the help output it specified (rclone mount remote:path /path/to/mountpoint [flags]). "encrypt" is just the encryption for Google Drive; both remotes go to the same account, just different folders.

Here is the script from User Scripts that I ran:

#!/bin/bash
#----------------------------------------------------------------------------
# first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
# there are 4 entries below as in the video I had 4 remotes amazon, dropbox, google and secure
# you only need as many as what you need to mount for dockers or a network share
mkdir -p /mnt/disks/Google
mkdir -p /mnt/disks/encrypt

# This section mounts the various cloud storage into the folders that were created above.
rclone mount Google: /mnt/disks/Google --allow-non-empty --max-read-ahead 1024k --allow-other
rclone mount encrypt: /mnt/disks/encrypt --allow-non-empty --max-read-ahead 1024k --allow-other

When trying to run the script right now (9/23 @ 12:16pm), the screen just sits there and nothing happens. Checking the logs, it just says:

Script Starting Sep 20, 2021 17:39.18
Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Mount Script/log.txt
Script Finished Sep 20, 2021 17:39.18
Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Mount Script/log.txt

I say all of that just to give background on where I'm at. I would like to set up binhex's docker container running rclone so that I don't have to run the commands directly off the server. I will be doing some large file transfers that may take a few days with my internet upload speed, so I would like to be able to shut off or restart the remote computer that I access the server's webGUI from and still be able to see the progress of the upload. I've read that I could use a "screen" command, but if there is a way to do it through this docker container, that seems like it would be an easier and more straightforward option.

I would like it to run as a sync, so that it adds missing files to the cloud storage and deletes cloud files if they are not on the local server (not just blindly copy everything each time it syncs). I also would like Plex to be able to pull from the cloud repository should I need to.

So, now that I have the remotes set up (it did work at one point, and I already have the Google Drive API keys and such set up), where do I start on getting the drives to mount and setting up the docker container instead of just the command-line plugin? If I need to use the plugin for this, that's fine; I'm just trying to figure out the best way to set everything up.
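For the sync part, here is a rough sketch of what I have in mind, run inside screen so it keeps going when my remote PC disconnects (the local path, remote folder, and log location are just placeholders for my setup):

# start the sync detached in a named screen session
screen -dmS rclone_sync rclone sync /mnt/user/Media encrypt:Media --progress --log-file=/mnt/user/appdata/rclone/sync.log --log-level INFO

# reattach later from any terminal to watch the progress
screen -r rclone_sync

My understanding is that rclone sync makes the destination match the source (including deletes), which is the behavior I described above, but I'd appreciate confirmation before I point it at 2TB of data.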
  2. Hello all, posting here in case anyone else runs into this issue. I saw there was an update to Unraid 6.9.1 this morning. The update installed with no issue and I restarted my server. On reboot, my Plex container was not started (it should auto-start). When I went to start it, it just gave me the "execution error: bad parameter" error. I tried editing the docker settings, and when I went back to the Docker tab, Plex was gone. This was odd as it was working perfectly last night. I tried to add the container again from my previous image but kept getting this error:

I searched for "nvidia driver" and installed this one, as it seemed like the previous "Unraid Nvidia" plugin that I had downloaded before wasn't there anymore (see screenshot). Once that finished installing, I re-added the container from my previous image and it fired up no problem. Hope this helps.
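For anyone hitting the same thing, a quick sanity check before re-adding the container (this is just a generic check, not specific to any plugin) is to confirm the driver actually loaded, from the Unraid terminal:

# if the Nvidia driver installed correctly, this lists the GPU and driver version
nvidia-smi

If that command errors out, my guess is the GPU passthrough settings in the container template are what trip the "bad parameter" error.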
  3. Hello All, I set up the Pi-hole docker per the video, but I'm just receiving a "Site cannot connect" message when attempting to open the webGUI. It opened the webGUI very briefly, but when I clicked login, it just gave me the "cannot connect" message again and has sat there since. I attempted to restart the container a few times but to no avail. I was trying to set my time zone (Eastern Time) but did not see the exact wording to use when setting up the container. I put in "Eastern Standard Time" and when it was updating the container I saw "America/New York" for the time zone, so I'm assuming it grabbed the right thing. Here are the logs:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
Converting DNS1 to PIHOLE_DNS_
Converting DNS2 to PIHOLE_DNS_
Setting DNS servers based on PIHOLE_DNS_ variable
::: Pre existing WEBPASSWORD found
DNSMasq binding to custom interface: br0
Added ENV to php: "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log", "ServerIP" => "192.168.2.88", "VIRTUAL_HOST" => "192.168.2.88",
Using IPv4
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Enabling Query Logging
[i] Enabling logging...
::: Docker start setup complete
[i] Neutrino emissions detected...
[i] Using libz compression
[i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[i] Received 60887 domains
[i] Number of gravity domains: 60887 (60887 unique domains)
[i] Number of exact blacklisted domains: 0
[i] Number of regex blacklist filters: 0
[i] Number of exact whitelisted domains: 0
[i] Number of regex whitelist filters: 0
[✓] DNS service is listening
[✓] UDP (IPv4)
[✓] TCP (IPv4)
[✓] UDP (IPv6)
[✓] TCP (IPv6)
[✓] Pi-hole blocking is enabled
Pi-hole version is v5.2.4 (Latest: v5.2.4)
AdminLTE version is v5.3.2 (Latest: v5.3.2)
FTL version is v5.6 (Latest: v5.6)
[cont-init.d] 20-start.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting crond
Starting lighttpd
Starting pihole-FTL (no-daemon) as root
[services.d] done.
Stopping cron
Stopping lighttpd
Stopping pihole-FTL
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
Converting DNS1 to PIHOLE_DNS_
Converting DNS2 to PIHOLE_DNS_
Setting DNS servers based on PIHOLE_DNS_ variable
::: Pre existing WEBPASSWORD found
DNSMasq binding to custom interface: br0
Added ENV to php: "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log", "ServerIP" => "192.168.2.88", "VIRTUAL_HOST" => "192.168.2.88",
Using IPv4
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Enabling Query Logging
[i] Enabling logging...
::: Docker start setup complete
[i] Neutrino emissions detected...
[i] Using libz compression
[i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[i] Received 60887 domains
[i] Number of gravity domains: 60887 (60887 unique domains)
[i] Number of exact blacklisted domains: 0
[i] Number of regex blacklist filters: 0
[i] Number of exact whitelisted domains: 0
[i] Number of regex whitelist filters: 0
[✓] DNS service is listening
[✓] UDP (IPv4)
[✓] TCP (IPv4)
[✓] UDP (IPv6)
[✓] TCP (IPv6)
[✓] Pi-hole blocking is enabled
Pi-hole version is v5.2.4 (Latest: v5.2.4)
AdminLTE version is v5.3.2 (Latest: v5.3.2)
FTL version is v5.6 (Latest: v5.6)
[cont-init.d] 20-start.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting crond
Starting lighttpd
Starting pihole-FTL (no-daemon) as root
[services.d] done.

Edit: Figured it out. This was set to the same IP as another static IP on my network. Changed it, worked perfectly.
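Since the root cause turned out to be an IP conflict, a quick check along these lines would have saved me the trouble (a generic sketch; 192.168.2.88 is the address I had given the Pi-hole container):

# with the Pi-hole container stopped, ping the address from another machine on the LAN;
# any reply means something else on the network already owns that IP
ping -c 3 192.168.2.88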
  4. Hello there, I ended up finding a killer deal on an APC NetShelter 24U cabinet, which came with a rack-mount CyberPower 1500W UPS (PR1500LCDRTXL2U; https://www.cyberpowersystems.com/product/ups/smart-app-sinewave/pr1500lcdrtxl2u/). I picked up the network adapter (RMCARD205) for network connectivity, since CyberPower already provides a webGUI/software to manage everything (https://www.cyberpowersystems.com/product/ups/hardware/rmcard205/). Their website says it can auto-shutdown "workstations and multiple servers" and "up to 50 clients", so it seems like this is something already built into the NIC.

I want to set up the UPS / Unraid to gracefully shut down when the battery gets to a certain percentage (and my Windows 10 gaming desktop too, if possible). I'm currently running Unraid 6.8.3. I've looked around on the forum, but so far have only found threads on connecting the UPS to Unraid via a USB cable. I want to connect it via Ethernet and shut the server down that way (especially as I'm planning to have multiple computers that I want it to shut down). Would anyone know how to do this, or could point me in the right direction? What would I need to configure in Unraid (SNMP, etc.) to do this?
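As a first step, I'm assuming the server just needs to be able to talk SNMP to the RMCARD, so something like this from the Unraid terminal should confirm the card is answering before I worry about the shutdown side (snmpwalk isn't part of stock Unraid as far as I know, and the address and community string here are placeholders for my setup):

# query the UPS management card over SNMP v1 with the default "public" community
snmpwalk -v1 -c public 192.168.2.90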
  5. Hi all, the network card on my server went out about 2 weeks ago and I just got it back up and running. It seems like everything is working well so far, except that the DelugeVPN container won't connect to the webGUI. When I click on the docker container icon and click on webGUI, it opens a new tab and I get "this site can't be reached" with the message [local IP] refused to connect. mediavault-diagnostics-20200928-1602.zip

I have changed none of my container settings since the old network card, and made sure to give the server the same static IP it had before. What would you recommend? Diags are posted in case you need them.

Edit: Disabled the VPN and it connected. The VPN login credentials haven't changed. I re-uploaded the current VPN files with no change. I deleted and reinstalled the DelugeVPN docker container with no change. Here are the deluge logs too. deluge log1.txt

I saw this, not sure if it's helpful:

2020-09-28 17:51:24,746 DEBG 'start-script' stdout output: [warn] Unable to load iptable_mangle module, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN [info] unRAID/Ubuntu users: Please attempt to load the module by executing the following on your host: '/sbin/modprobe iptable_mangle'
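Following the hint in that warning, this is the sketch I'm planning to try from the Unraid terminal (adding the line to /boot/config/go is just so the module loads again after a reboot, assuming the stock go file):

# load the module now so the container can set up its firewall rules
/sbin/modprobe iptable_mangle

# persist it across reboots by appending the same command to the go file
echo "/sbin/modprobe iptable_mangle" >> /boot/config/go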
  6. OH, so that's what that's called! Thank you for explaining that.

Thanks! Good to know that's not an issue. I do use Deluge to download torrents, but as of late haven't really known any good places to find torrents, so I generally just stick with NZBs, as the quality and download speed generally seem to be better. I'm open to torrent site recommendations if you know of some. I also only have one indexer (NZBgeek), so if you or anyone knows of a secondary, that would also be much appreciated, possibly one that does a good amount of anime too. I have an option via Hexchat which works, but it's kinda clunky.

Sure, I'm happy to use labels if that would make things work better, just never knew how to. All of the torrents that I've downloaded so far have been put directly into Deluge, and not via Sonarr/Radarr (hence why they don't have any labels). I believe I have the Sonarr/Radarr label settings correct, have created directories that match the labels, and set the "Move completed to" settings for the tv-sonarr/radarr labels in Deluge. Is that going to be an issue with Sonarr/Radarr picking up the file, renaming it, and placing it in the correct Media directory so Plex can see it?

Looks like Sonarr finally got all of the video files over to the Media directory so Plex can see them, but it took like 12 hours to finally move them over, where they were just sitting in the queue seemingly not doing anything. It also keeps pulling up an error that it can't connect to the indexer, but then I open the settings and hit test and it says "test succeeded". Attached are the logs in a txt file. I'm having the same issue with some movies being stuck in Radarr and not processing through to the Media library, but I can move over to the binhex-radarr thread for that if that's better? Sonarr Logs 6.6 9.51am
  7. Ok good to know. What is the "Docker run command"? Is that where you edit the mappings, ports, and such? There are a couple labels in Deluge, but they have nothing in them as I'm just using Deluge for the Privoxy/VPN component. Seems like the network traffic is the only thing Deluge should worry about as the rest of the "file management" is handled by SAB or Radarr/Sonarr?
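For anyone reading later: the "Docker run command" is the full command Unraid prints when you hit Apply on a container template, and that is where all the port and path mappings end up. As an illustration only (the name, ports, and paths below are made up, not my actual template), it looks roughly like:

docker run -d --name='example-app' \
  -p '8080:8080/tcp' \
  -v '/mnt/user/appdata/example-app':'/config':'rw' \
  -v '/mnt/user/Media':'/media':'rw' \
  'repo/example-app'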
  8. Haha, that could be the case! So now that that's working, it seems like Sonarr is downloading the files, but not renaming them and moving them to the Media directory so Plex can see them. I downloaded a TV series in Sonarr (and a movie in Radarr) and then it just sat there, not renaming it or moving it anywhere. Isn't this something that Sonarr (or SABnzbd) is supposed to do, and then once it's moved to the main Media library, delete it from the usenet_completed folder? Or am I thinking of something else? In the Sonarr settings I have "Rename Episodes" turned on.
  9. Looking at Deluge in the Interface Tab, SSL is unchecked. Seems odd that one would work and one wouldn't.
  10. Like a charm! Thank you so much. Why would SSL prevent that from communicating? Especially when Radarr has SSL enabled?
  11. Ok so I figured out some of my issue. tl;dr: I didn't have the current IP address set in my DelugeVPN Privoxy, so the other programs routing traffic through there were just hitting a dead end. I went through and rewatched all of SpaceInvaderOne's setup videos one by one for setting up Privoxy and setting up Radarr, and made sure all of my subscriptions were up to date and had the right information in them.

Now, Sonarr keeps giving me an "Unknown exception: The operation has timed out.: 'https://192.168.2.87:8112/json'" error when trying to connect to Deluge, and an immediate "Test was aborted due to an error: Unable to connect to SABnzbd, please check your settings" error when trying to connect to SABnzbd. Both Sonarr and Radarr have the exact same Host, Port, and API settings; Radarr works but Sonarr doesn't. I've tried stopping and starting the container to see if that changes anything and it has no effect. Why would this be happening?
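One thing I plan to check from the Unraid terminal, since that timeout is against an https URL (the IP and port are my Deluge container's; this is just a guess on my part):

# if Deluge's web UI has SSL turned off, the https request should hang or fail
# while the plain http one gets a response back
curl -v http://192.168.2.87:8112/json
curl -v https://192.168.2.87:8112/json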
  12. Hi all, I set up my server a while ago and it was working great. I let my NZBgeek, PIA, and Usenet.Farm subscriptions lapse and hadn't used Sonarr, Radarr, or SABnzbd in 6 months or so. I reactivated my subscriptions to all of those without otherwise changing anything. When trying to test the connection to the NZBgeek indexer, I am getting the "Unable to connect to indexer, check the log for more details" error. Looking at the log, I see:

2020-06-04 17:49:11,444 DEBG 'sonarr' stdout output: [Warn] NzbDroneErrorPipeline: Invalid request Validation failed:

This continues on the Download Clients tab, where testing the connection to Deluge gives me "Unknown exception: The operation has timed out.: 'https://192.168.2.87:8112/json'". Testing the connection to SABnzbd gives this message in the log:

2020-06-04 18:00:35,989 DEBG 'sonarr' stdout output: [Warn] NzbDroneErrorPipeline: Invalid request Validation failed: -- Unable to connect to SABnzbd

I went back to confirm that the current IP address settings were correct in each program, and that they matched the mappings in the Docker tab of Unraid. Everything that I can see matches, including all the API keys. I'm not sure what's wrong here or where to look to figure it out. Any help would be appreciated, and please let me know what else I can provide to make this easier. Thank you!
  13. For the last few weeks, Radarr has not been able to connect to my Newznab indexer. Sonarr, however, has no issues connecting to it with the same URL and API key. @trurl @Squid @binhex Any thoughts on why this may be?
  14. @trurl Bumping this to see if you may know why it's doing this. I just downloaded Season 1 of a show, and that downloaded, renamed, and moved the files no problem. Clicked to download the second season, which it downloaded but the files are just sitting in the completed folder.
  15. After looking in the folders, it almost seems like it's erroring out before it can complete everything as there are little files left over:
  16. So I checked that and it seems to be OK because I see files coming through:

The problem is, there are only some files that it is renaming and moving. If I look at my usenet_complete folder, I see this:

What would cause only some files to be renamed and moved? And is there a way for Sonarr to scan for completed files to rename and move?
  17. So I'm having an issue where my episodes are downloading, but Sonarr is not renaming and organizing them properly. SABnzbd seems to be downloading them fine, but then they just sit in my completed folder instead of being moved to the TV Shows directory so Plex can see them. I have the Media Management settings turned on here:

But when I look in my completed folder, I see this:

And Sonarr should be able to move them to the correct directory (rather than copy) so I don't end up with duplicate files, correct?
  18. ...never mind, apparently it fixed itself? Idk, it's working now.
  19. So I was trying to run my SABnzbd traffic through Privoxy. I put everything into Sonarr and Radarr fine. I put the same information into SABnzbd under the "Listening Port" section, and now I get this error:

How do I revert to the previous settings and set it up properly? @trurl or @Squid, you guys may know the answer to this too. Thanks in advance.
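If the webGUI won't load at all because of the port change, my fallback plan is to put the old value back directly in SABnzbd's config file (the appdata path below is the usual binhex one and is an assumption about my own mapping):

# stop the SABnzbd container first, then edit its config
nano /mnt/user/appdata/binhex-sabnzbd/sabnzbd.ini

# in the [misc] section, set "port" back to the port the container template maps
# (8080 by default, I believe), save, and start the container again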
  20. Nevermind, I solved it. I had the container path as /Media when it was mapped in Plex as /media. Thank you for your help!
  21. @trurl So then, rather than retrying all sorts of different paths, does it make sense to just delete the libraries in Plex and re-add them? Because all the files are the same and they've already been named properly and all that; the paths just need to be rebuilt so Plex can see them properly.
  22. I just put the /Media mapping back as I thought that was all I needed. I didn't have the backup of my config file which was a mistake on my part, so I'm trying to remember exactly how I had the whole thing set up. Here is what I have for the template mappings:
  23. I just have all of my media data on the array, and the Downloads share goes to the array directly. My thinking is that the only things I would want on the cache drive are the appdata share and any other system-related shares. It has seemed to work well for me in the past to keep the cache drive as open as possible, so if I'm transferring files from my main computer it has the space to do so. And shouldn't Mover also move files between the disks to balance the load out?