Arcaeus

Members · 172 posts
Everything posted by Arcaeus

  1. So this is interesting, as I believe it was connected and working ever since I pulled out the other drive. Only after stopping the array and assigning the new disk to it, then starting it up again, did it start throwing a bunch of errors. I can shut everything down and confirm that all the cables are connected properly if you think that may be an issue. So far no CRC errors yet, but I don't think anything is really hitting the disks, so I'm not sure.
  2. Got it. Array is started; new diags attached that were run while the array was started. mediavault-diagnostics-20220504-1116.zip
  3. New diags attached. The array is currently stopped; does it need to be started? The disk was previously in the array, but Fix Common Problems was showing "unable to write to disks 4 & 5 - drive mounted read only or completely full". When I stop the array and attempt to select that disk, I can't choose it from the drop-down. Main - array devices (screenshot attached). mediavault-diagnostics-20220504-1106.zip
  4. Ok, did this. Thank you so much for your detailed walkthrough, it was super helpful. I liked and thanked the post, not sure if that does anything for you. I started the array, and disk 5 is showing that it can't be added to the array (see attached). Do I need to let Unraid do a parity sync / data rebuild before bringing that drive back in? I will start the array and see if there are more CRC errors.
  5. Hello everyone, I'm working on chasing down some CRC errors that may or may not be contributing to some weird disk behavior recently.

A couple of years ago I picked up an LSI 9211-8i to increase the number of disks I could connect. From the beginning I was getting CRC errors, but looking through the forums it didn't seem like a huge issue, and money was tight, so I couldn't justify replacing the card and left it alone. The CRC errors kept coming, but everything seemed to be working, so I didn't change anything.

Last week, disk 4 in my array got disabled after a bunch of read errors. I followed the steps from Unraid to stop the array, remove the disk, start/stop the array, then re-add the disk and let the system rebuild. All seemed ok. Shortly after that, disk 7 went down. I tried the same process, but eventually it wouldn't allow me to re-add it to the array.

Starting to worry now, I ordered new 8087-to-SATA cables and a new 4TB disk. I've been planning a new NAS build, so I also got 2x WD Gold 16TB disks while they were on a good sale. The plan so far is to copy all 32TB spread out across the array to these two disks as a local backup.

Once everything arrived, I pulled out the non-working drive, replaced the cables (making sure they weren't kinked or jammed up next to each other), and installed the 16TB disks (connected via SATA cables directly to the mobo). I ran a preclear on the 3 new disks, with the 4TB passing no problem (the 16s are still going). I stopped the array, added the new disk, and spun it back up again. When I did, disk 4 started to throw a bunch of read errors again and got disabled. So I stopped the array, did the remove/re-add dance like above, and spun it back up. Disk 7 (the new disk) went through a very fast parity sync / data rebuild at 3.2GB/s (very odd for a 7200rpm HGST drive) and then showed green in the array. Meanwhile, disk 4 is moving through the data rebuild process seemingly fine, yet Fix Common Problems is showing "unable to write to disks 4 & 5 - drive mounted read only or completely full", with disk 5 showing 341+ million read errors and climbing. Disks 4 & 5 are at 92% and 90% capacity respectively. It's also showing /var/log is 100% full, probably due to all the errors.

To me this points to a faulty HBA card that would just need to be replaced, but is there something else it could be before I buy one? Diags posted below.

Build:
i7-960 @ 3.2 GHz
12GB DDR3 RAM
Gigabyte G1 Guerrilla mobo
LSI 9211-8i in a PCIe x16 (maybe Gen 1?) slot
Single parity drive
Antec 1000W PSU with 9 drives connected via SATA power cables (that came with the PSU) and 5 drives connected via molex-to-2x-SATA adapters (6 connections total with 5 disks connected). All power cables are original except for the molex-to-SATA adapters.

mediavault-diagnostics-20220501-1236.zip
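Edit: for anyone following along, the counter in question is SMART attribute 199 (UDMA_CRC_Error_Count), and it can be read per disk from the Unraid console with smartctl. A rough sketch of how to check it (sdX and the /dev/sd? glob are placeholders for whatever devices are actually present):

# Check the CRC counter on a single disk (replace sdX with the real device)
smartctl -A /dev/sdX | grep -i crc

# Or sweep every SATA device and print the raw counter next to its name
for d in /dev/sd?; do
  printf '%s: ' "$d"
  smartctl -A "$d" | awk '/UDMA_CRC_Error_Count/ {print $NF}'
done

The counter never resets, so what matters is whether it keeps climbing after the cable/HBA swap, not its absolute value.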
  6. I would like to request adding drivers for the Killer E2100 onboard network chip. It's pretty old, so I understand if you don't want to support it, but it would be helpful for me and possibly others. Otherwise, is there a way I could add the driver manually?
  7. Just checked this morning and apparently the issue has been resolved. My guess is it was something to do with their site? Weird.
  8. Just started having an odd issue. Out of nowhere, Sonarr and Radarr can't connect to the indexer. I have not changed any settings and it was working perfectly earlier today. I can log onto the indexer fine through a web browser. Checking the logs, I get:

2021-11-19 16:14:51,760 DEBG 'sonarr' stdout output:
[Warn] SonarrErrorPipeline: Invalid request Validation failed:
 -- : Unable to connect to indexer, check the log for more details

2021-11-19 16:19:31,446 DEBG 'sonarr' stdout output:
[Warn] Newznab: Unable to connect to indexer
[v3.0.6.1342] System.Net.WebException: Error: ProtocolError: 'https://api.nzbgeek.info/api?t=caps&apikey=*********************' ---> System.Net.WebException: Error: ProtocolError
  at System.Net.WebConnection.InitConnection (System.Net.WebOperation operation, System.Threading.CancellationToken cancellationToken) [0x0015e] in /build/mono/src/mono/mcs/class/System/System.Net/WebConnection.cs:282
  at System.Net.WebOperation.Run () [0x00052] in /build/mono/src/mono/mcs/class/System/System.Net/WebOperation.cs:268
  at System.Net.WebCompletionSource`1[T].WaitForCompletion () [0x0008e] in /build/mono/src/mono/mcs/class/System/System.Net/WebCompletionSource.cs:111

2021-11-19 16:19:31,446 DEBG 'sonarr' stdout output:
  at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000e8] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:956
  at System.Net.HttpWebRequest.GetResponse () [0x0000f] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:1218
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x00123] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:81
  --- End of inner exception stack trace ---
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001c0] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:107
  at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookieContainer) [0x00086] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:126
  at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00008] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:59
  at NzbDrone.Common.Http.HttpClient.Get (NzbDrone.Common.Http.HttpRequest request) [0x00007] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:281
  at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider.FetchCapabilities (NzbDrone.Core.Indexers.Newznab.NewznabSettings indexerSettings) [0x000a1] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:64
  at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider+<>c__DisplayClass4_0.<GetCapabilities>b__0 () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:36
  at NzbDrone.Common.Cache.Cached`1[T].Get (System.String key, System.Func`1[TResult] function, System.Nullable`1[T] lifeTime) [0x000b1] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Cache\Cached.cs:104
  at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider.GetCapabilities (NzbDrone.Core.Indexers.Newznab.NewznabSettings indexerSettings) [0x00020] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:36
  at NzbDrone.Core.Indexers.Newznab.Newznab.get_PageSize () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\Newznab.cs:24
  at NzbDrone.Core.Indexers.Newznab.Newznab.GetRequestGenerator () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\Newznab.cs:28
  at NzbDrone.Core.Indexers.HttpIndexerBase`1[TSettings].TestConnection () [0x00007] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\HttpIndexerBase.cs:335

2021-11-19 16:19:31,457 DEBG 'sonarr' stdout output:
[Warn] SonarrErrorPipeline: Invalid request Validation failed:
 -- : Unable to connect to indexer, check the log for more details

Any ideas why this suddenly just died? This popped up after the last log I posted:

2021-11-19 16:30:01,756 DEBG 'sonarr' stdout output:
[Warn] SonarrErrorPipeline: Invalid request Validation failed:
 -- : Unable to connect to indexer, check the log for more details

2021-11-19 16:30:28,102 DEBG 'sonarr' stdout output:
[Error] ProxyCheck: Proxy Health Check failed
[v3.0.6.1342] System.Net.WebException: The operation has timed out.: 'https://services.sonarr.tv/v1/ping' ---> System.Net.WebException: The operation has timed out.
  at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000e8] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:956
  at System.Net.HttpWebRequest.GetResponse () [0x0000f] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:1218
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x00123] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:81
  --- End of inner exception stack trace ---
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001c0] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:107
  at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookieContainer) [0x00086] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:126
  at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00008] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:59
  at NzbDrone.Core.HealthCheck.Checks.ProxyCheck.Check () [0x00067] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\HealthCheck\Checks\ProxyCheck.cs:46

2021-11-19 16:30:56,369 DEBG 'sonarr' stdout output:
[Error] TaskExtensions: Task Error
[v3.0.6.1342] System.Net.WebException: Error: ProtocolError: 'https://services.sonarr.tv/v1/time' ---> System.Net.WebException: Error: ProtocolError

Looks like some sort of proxy health check failure or protocol error? Thanks in advance.
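Edit: one way to separate a "Sonarr problem" from a "network/proxy problem" is to test the same request from inside the container and from the Unraid host. This is only a sketch; the container name (binhex-sonarr) and the assumption that curl is available inside it are from my setup, so adjust as needed:

# From inside the Sonarr container: can it resolve and reach the indexer at all?
docker exec -it binhex-sonarr curl -sv "https://api.nzbgeek.info/api?t=caps" -o /dev/null

# Same request from the Unraid host itself, for comparison
curl -sv "https://api.nzbgeek.info/api?t=caps" -o /dev/null

If the host works but the container doesn't, the problem is in the container's network path (VPN/proxy/DNS) rather than in Sonarr's indexer settings.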
  9. Understood. Turns out it was working and I just needed to restart my docker container for it to show up. Thanks again for your help!
  10. Hi there, I'm currently running the rclone plugin (not the docker container) to sync to my Google Drive. I set it up last week (via SpaceInvaderOne's video) and it worked great for 2TB worth of data. I could see and access the cloud drives in Krusader (under disks) with no issues. I restarted my server to install an Nvidia driver update, and when the server came back up, the cloud drives would not mount. I tried to manually run the script to mount the cloud drives based on the help output that it specified (rclone mount remote:path /path/to/mountpoint [flags]). "encrypt" is just the encryption for Google Drive; both remotes point to the same account, just different folders.

Here is the script from User Scripts that I ran:

#!/bin/bash
#----------------------------------------------------------------------------
# first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
# there are 4 entries below as in the video I had 4 remotes amazon, dropbox, google and secure
# you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/Google
mkdir -p /mnt/disks/encrypt

# This section mounts the various cloud storage into the folders that were created above.

rclone mount Google: /mnt/disks/Google --allow-non-empty --max-read-ahead 1024k --allow-other
rclone mount encrypt: /mnt/disks/encrypt --allow-non-empty --max-read-ahead 1024k --allow-other

When trying to run the script right now (9/23 @ 12:16pm), the screen just sits there and nothing happens. Checking the logs, it just says:

Script Starting Sep 20, 2021 17:39.18
Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Mount Script/log.txt
Script Finished Sep 20, 2021 17:39.18
Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Mount Script/log.txt

I say all of that just to give background on where I'm at. I would like to set up binhex's docker container running rclone so that I don't have to run the commands directly off the server. I will be doing some large file transfers that may take a few days with my internet upload speed, so I would like to be able to shut off / restart the remote computer that I use to access the server's webGUI and still be able to see the progress of the upload. I've read that I could use a "screen" command, but if there is a way to do it through this docker container, that seems like an easier and more straightforward option.

I would like it to run in a sync situation, so that it adds missing files to the cloud storage and deletes cloud files if they are not on the local server (not just blindly copy everything each time it syncs). I would also like Plex to be able to pull from the cloud repository should I need to.

So: now that I have the remotes set up (it did work at one point, and I already have the Google Drive API keys and such set up), where do I start on getting the drives to mount and setting up the docker container instead of just the command-line plugin? If I need to use the plugin for this, that's fine, I'm just trying to figure out the best way to set everything up.
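Edit: for the "keep it running after I close the webGUI" part, the fallback I'm considering (if the docker container route doesn't pan out) is simply running the sync inside a detached screen session from the Unraid console. Sketch only; the remote name, paths, and flags below are from my own setup and would need adjusting:

# Start a detached screen session so the transfer survives closing the browser/SSH window
screen -dmS rclone-backup rclone sync /mnt/user/Media encrypt:Media \
  --progress --log-file=/mnt/user/appdata/rclone/sync.log --log-level INFO

# Re-attach later to check on it, then detach again with Ctrl-A D
screen -r rclone-backup

Note that rclone sync deletes files on the destination that are no longer on the source, which matches the behavior I described above (as opposed to rclone copy, which never deletes).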
  11. Hello all, posting here if anyone else runs into this issue. Saw there was an update to Unraid 6.9.1 this morning. Update installed with no issue and restarted my server. On reboot, my Plex container was not started (should be auto-start). When I went to start it, it just gave me the "execution error: bad parameter" error. I tried editing the docker settings, and when I went back to the Docker tab, Plex was gone. This was odd as it was working perfectly last night. I tried to add the container again from my previous image but kept getting this error: I searched for "nvidia driver" and installed this one as it seemed like the previous "Unraid Nvidia" plugin that I had downloaded before wasn't there anymore (see screenshot). Once that finished installing, I re-added the container from my previous image and it fired up no problem. Hope this helps.
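Edit: one extra check that might save someone time here: after the Nvidia driver plugin finishes installing, running nvidia-smi from the Unraid console should list the GPU. If that command errors out, the driver isn't actually loaded yet and the container will likely keep failing the same way.

# Confirm the Nvidia kernel driver is loaded and the card is visible before re-adding the container
nvidia-smi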
  12. Hello All, I set up the Pi-hole docker per the video, but I'm just receiving a "Site cannot connect" message when attempting to open the webGUI. It opened the webGUI very briefly, but when I clicked login, it just gave me the "cannot connect" message again, and has sat there since. I attempted to restart the container a few times, but to no avail.

I was trying to set my time zone (Eastern Time) but did not see the exact wording to use when setting up the container. I put in "Eastern Standard Time", and when it was updating the container I saw "America/New York" for the time zone, so I'm assuming it grabbed the right thing.

Here are the logs:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
Converting DNS1 to PIHOLE_DNS_
Converting DNS2 to PIHOLE_DNS_
Setting DNS servers based on PIHOLE_DNS_ variable
::: Pre existing WEBPASSWORD found
DNSMasq binding to custom interface: br0
Added ENV to php: "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log", "ServerIP" => "192.168.2.88", "VIRTUAL_HOST" => "192.168.2.88",
Using IPv4
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Enabling Query Logging
[i] Enabling logging...
::: Docker start setup complete
[i] Neutrino emissions detected...
[i] Using libz compression
[i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[i] Received 60887 domains
[i] Number of gravity domains: 60887 (60887 unique domains)
[i] Number of exact blacklisted domains: 0
[i] Number of regex blacklist filters: 0
[i] Number of exact whitelisted domains: 0
[i] Number of regex whitelist filters: 0
[✓] DNS service is listening
[✓] UDP (IPv4)
[✓] TCP (IPv4)
[✓] UDP (IPv6)
[✓] TCP (IPv6)
[✓] Pi-hole blocking is enabled
Pi-hole version is v5.2.4 (Latest: v5.2.4)
AdminLTE version is v5.3.2 (Latest: v5.3.2)
FTL version is v5.6 (Latest: v5.6)
[cont-init.d] 20-start.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting crond
Starting lighttpd
Starting pihole-FTL (no-daemon) as root
[services.d] done.
Stopping cron
Stopping lighttpd
Stopping pihole-FTL
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

(The container then restarts and the same startup sequence repeats verbatim, ending at "[services.d] done.")

Edit: Figured it out. This was set to the same IP as another static IP on my network. Changed it, worked perfectly.
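Edit 2: for anyone landing here with the same symptom, a quick way to confirm a duplicate-IP situation like this before digging through container logs (the interface name, container name, and address below are just my values, and arping may need to be installed separately):

# Duplicate Address Detection: a reply here means another host already answers for this IP
arping -D -I br0 -c 3 192.168.2.88

# Alternatively: stop the container and see whether the address still answers
docker stop pihole && ping -c 3 192.168.2.88

If the address still responds with the container stopped, something else on the network owns it.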
  13. Hello there, I ended up finding a killer deal on an APC NetShelter 24U cabinet, which came with a rack-mount CyberPower 1500W UPS (PR1500LCDRTXL2U; https://www.cyberpowersystems.com/product/ups/smart-app-sinewave/pr1500lcdrtxl2u/). I picked up the network adapter (RMCARD205) for network connectivity, as the UPS already has their webGUI/software to manage everything (https://www.cyberpowersystems.com/product/ups/hardware/rmcard205/). On their website it says it can auto-shutdown "workstations and multiple servers" and "up to 50 clients", so it seems like this is something already built in to the NIC. I want to set up the UPS / Unraid to shut down gracefully when the battery gets to a certain percentage (and my Windows 10 gaming desktop too, if possible). Currently running Unraid 6.8.3. I've looked around on the forum, but have so far only found threads on connecting the UPS to Unraid via a USB cable. I would like to connect it via Ethernet and shut the server down that way (especially as I'm planning to have multiple computers that it should shut down). Would anyone know how to do this, or could point me in the right direction? What would I need to configure in Unraid (SNMP, etc.) to do this?
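Edit: for the record, the direction I'm leaning is something like the NUT (Network UPS Tools) plugin with its SNMP driver, rather than the built-in APC UPS (apcupsd) USB setup. This is only a sketch of the idea, not tested; the IP address and community string are placeholders, and the exact config location depends on the plugin:

# ups.conf entry for a network-attached UPS reached over SNMP via the RMCARD205
[cyberpower]
    driver = snmp-ups
    port = 192.168.2.50          # IP address of the RMCARD205
    community = public           # SNMP community configured on the card
    desc = "CyberPower PR1500LCDRTXL2U"

The Windows desktop would then run a NUT client (or CyberPower's own PowerPanel software pointed at the card) so both machines shut down from the same UPS.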
  14. Hi all, I had the network card on my server go out about 2 weeks ago and just got it back up and running. Everything seems to be working well so far, except that the DelugeVPN container won't connect to the webGUI. When I click on the docker container icon and click on webGUI, it opens a new tab and I get "this site can't be reached" with the message [local IP] refused to connect. mediavault-diagnostics-20200928-1602.zip

I have changed none of my container settings since the old network card, and made sure to give the server the same static IP it had before. What would you recommend? Diags posted in case you need them.

Edit: Disabled the VPN and it connected. VPN login credentials haven't changed. Reuploaded the current VPN files with no change. Deleted and reinstalled the DelugeVPN docker container with no change. Here are the deluge logs too. deluge log1.txt

I saw this, not sure if it's helpful:

2020-09-28 17:51:24,746 DEBG 'start-script' stdout output:
[warn] Unable to load iptable_mangle module, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN
[info] unRAID/Ubuntu users: Please attempt to load the module by executing the following on your host: '/sbin/modprobe iptable_mangle'
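Edit 2: following the hint in that warning, my understanding is that the module can be loaded once by hand and then made persistent across reboots via the go file on the flash drive (sketch below; paths are as on a stock Unraid install):

# Load the module now, as the container log suggests
/sbin/modprobe iptable_mangle

# Make it persistent across reboots by appending the same line to the go script
echo "/sbin/modprobe iptable_mangle" >> /boot/config/go

Note the warning only affects reaching the WebUI/Privoxy from outside the LAN, so it may be unrelated to the "refused to connect" issue on the local network.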
  15. OH so that's what that's called! Thank you for explaining that. Thanks! Good to know that's not an issue. I do use Deluge to download torrents, but as of late haven't really known any good places to find torrents. Therefore I generally just stick with NZBs, as the quality and download speed generally seems to be better. I'm open to torrent site recommendations if you know of some. I also only have one indexer (NZBgeek), so if you or anyone knows of a secondary, that would also be much appreciated. Possibly one that does a good amount of anime too. I have an option via Hexchat which works, but it's kinda clunky. Sure, I'm happy to use labels if that would make things work better, just never knew how to. All of the torrents that I've downloaded so far have been directly put into Deluge, and not via Sonarr/Radarr (hence why they don't have any labels). I believe I have the Sonarr/Radarr label settings correct, have created directories that match the labels, and set the "Move completed to" settings for the tv-sonarr/radarr labels in Deluge. Now is that going to be an issue with Sonarr/Radarr picking up the file, renaming it, and placing it in the correct Media directory so Plex can see it? Looks like Sonarr finally got all of the video files over to the Media directory so Plex can see them. But it took like 12 hours to finally move over, where it was just sitting in the queue seemingly not doing anything. It also keeps pulling up an error that it can't connect to the indexer, but then I open the settings and hit test and it says "test succeeded". Attached are the logs in a txt file. I'm having the same issue with some movies being stuck in Radarr and not processing through to the Media library, but I can move over to the binhex-radarr thread for that if that's better? Sonarr Logs 6.6 9.51am
  16. Ok good to know. What is the "Docker run command"? Is that where you edit the mappings, ports, and such? There are a couple labels in Deluge, but they have nothing in them as I'm just using Deluge for the Privoxy/VPN component. Seems like the network traffic is the only thing Deluge should worry about as the rest of the "file management" is handled by SAB or Radarr/Sonarr?
  17. Haha, that could be the case! So now that that's working, it seems like Sonarr is downloading the files, but not renaming them and moving them to the Media directory so Plex can see them. I downloaded a TV series in Sonarr (and a movie in Radarr) and then it just sat there, not renaming it or moving it anywhere. Wasn't this something Sonarr (or SABnzbd) would do, and then once it's moved to the main Media library, delete it from the usenet_completed folder? Or am I thinking of something else? In the Sonarr settings I have "Rename Episodes" turned on.
  18. Looking at Deluge in the Interface Tab, SSL is unchecked. Seems odd that one would work and one wouldn't.
  19. Like a charm! Thank you so much. Why would SSL prevent that from communicating? Especially when Radarr has SSL enabled?
  20. Ok so I figured out some of my issue. tl;dr I didn't have the current IP address set in my delugeVPN Privoxy, so the other programs routing traffic through there were just hitting a dead end. I went through and rewatched all of SpaceInvaderOne's setup videos one by one for Setting up a privoxy and setting up Radarr. I made sure all of my subscriptions were up to date and had the right information in them. Now, Sonarr keeps giving me an "Unknown exception: The operation has timed out.: 'https://192.168.2.87:8112/json'" error when trying to connect to Deluge, and an immediate "Test was aborted due to an error: Unable to connect to SABnzbd, please check your settings" error when trying to connect to SABnzbd. Both Sonarr and Radarr have the exact same Host, Port, and API settings. Radarr works but Sonarr doesn't. I've tried stopping and starting the container to see if that changes anything and it has no effect. Why would this be happening?
  21. Hi all, I had set up my server a while ago and it was working great. I let my NZBgeek, PIA, and Usenet.Farm subscriptions lapse and hadn't used Sonarr, Radarr, or SABnzbd in 6 months or so. I reactivated my subscriptions to all of those without otherwise changing anything. When trying to test the connection to the NZBgeek indexer, I get the "Unable to connect to indexer, check the log for more details" error. Looking at the log, I see:

2020-06-04 17:49:11,444 DEBG 'sonarr' stdout output:
[Warn] NzbDroneErrorPipeline: Invalid request Validation failed:

This continues on the Download Clients tab, where testing the connection to Deluge gives me "Unknown exception: The operation has timed out.: 'https://192.168.2.87:8112/json'". Testing the connection to SABnzbd gives this message in the log:

2020-06-04 18:00:35,989 DEBG 'sonarr' stdout output:
[Warn] NzbDroneErrorPipeline: Invalid request Validation failed:
 -- Unable to connect to SABnzbd

I went back to confirm that the current IP address settings were correct in each program, and that they matched the mappings in the Docker tab of Unraid. Everything I can see matches, including all the API keys. I'm not sure what's wrong here or where to look to figure it out. Any help would be appreciated, and please let me know what else I can provide to make this easier. Thank you!
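Edit: one more data point worth gathering is whether those two services answer at all from the container's point of view, and whether Deluge is answering on http or https. Sketch only; the container name (binhex-sonarr), the SABnzbd port, and the assumption that curl exists inside the container are all from my setup:

# Does the Deluge WebUI answer over plain HTTP vs HTTPS? (Sonarr is trying https above)
docker exec -it binhex-sonarr curl -skI http://192.168.2.87:8112/json
docker exec -it binhex-sonarr curl -skI https://192.168.2.87:8112/json

# Same idea for SABnzbd (replace 8080 with whatever port the container maps)
docker exec -it binhex-sonarr curl -skI http://192.168.2.87:8080/

If only the plain-HTTP request answers, the 'https://...:8112/json' timeout above would just mean the download client entry in Sonarr has SSL enabled while Deluge itself doesn't.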
  22. I seem to be having an issue in the last few weeks of Radarr not being able to connect to my Newznab indexer. Sonarr however has no issues connecting to it with the same URL and API key. @trurl @Squid @binhex Any thoughts on why this may be?