jamesp469

Members
  • Content Count: 22
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About jamesp469
  • Rank: Member

  1. I'm having an issue with RSS feeds over VPN. The provider I pull RSS feeds from requires that the same IP address used to set up a feed also be the IP address used to load and download from it. I run this container on my server, and I set everything up from my desktop while connected to the VPN. The problem: when my desktop is not on the VPN and I try to reload the feed through the container's GUI (the container is on the correct VPN IP), I get errors that the IP does not match the VPN IP. I've run multiple checks with checkmyip.torrentprivacy.com, and the container's external IP still shows as the correct VPN IP (an egress-IP check is sketched after this list). However, if I log into my VPN service on the desktop and try to reload the feed, it works perfectly. This is only an issue with RSS; when I download torrents on the VPN-connected desktop, disconnect from the VPN, and load them into the rTorrent container, they download as expected. I'll leave the feed alone for a few weeks to see whether it can pull from the provider automatically, but I'm a bit befuddled as to why this isn't working as I expected.
  2. Thank you! This fixed it. I'm not sure why the default gateway was assigned to eth1 like that; I don't recall setting it (a routing-table check is sketched after this list).
  3. After upgrading to 6.8, my Docker tab hangs and never resolves. The actual Docker URLs are still accessible; diagnostics attached. I'm also seeing the following error in the log:

     Dec 11 09:09:09 unRAID nginx: 2019/12/11 09:09:09 [error] 7631#7631: *21 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.190, server: , request: "GET /plugins/dynamix.docker.manager/include/DockerContainers.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "MYADDRESS.unraid.net", referrer: "https://MYADDRESS.unraid.net/Docker"

     Also, all of my plugins show as status "unknown" and seem to hang, with the following error (a GitHub reachability check is sketched after this list):

     Dec 11 09:16:03 unRAID root: Fix Common Problems: Error: Unable to communicate with GitHub.com

     Thanks! unraid-diagnostics-20191211-0917.zip
  4. I'm seeing this issue as well, and all of my installed plugins are listed with status "unknown". Diagnostics attached, thanks! unraid-diagnostics-20191211-0917.zip
  5. I'm running into an issue where Mylar regularly disables my search providers. After digging into the logs, it looks like this happens after the daily API limit is hit (WARNING DAILY API limit reached. Disabling provider usage until 12:01am), but provider usage is never automatically re-enabled; I always have to re-enable the provider manually. Is this something I should take to the Mylar team, or something that can be managed at the Docker level, i.e. force re-enabling search providers (one possible workaround is sketched after this list)?
  6. The repair service recommended cloning/moving the data off of the drive whenever possible. It was only a PCB replacement, so I'm not sure whether that was just a liability-relief statement.
  7. UPDATE: I was able to get one data drive repaired, and I expect it to be delivered back here tomorrow. Can someone point me to the correct process for bringing my system back online? I have brand-new drives to replace the blown second parity drive and the other data drive that was not repaired, as well as a new drive to replace the repaired drive once my parity is rebuilt.
  8. Thanks, I've sent one of the data drives in to get tested and see if a PCB replacement would fix it. Hopefully it does; then I can reinstall it plus two new drives to replace parity2 and the other data drive, and hopefully be back in business.
  9. So, in this situation, I have 5 of 8 drives in my array still spinning up, presumably with no data issues: the dual cache, parity1, and 2 of 4 data disks. Is there any scenario where I could rebuild partial data to a new drive using the parity drive (see the parity sketch after this list), or should I just clear the drive and start fresh? Also, will I have any issues keeping the existing data on my cache pool and two working drives when I start a new array config? Thanks in advance.
  10. Thanks for responding. The mistake was using a modular PSU cable from my old PSU/build in my new PSU/build, leaving me with five drives that won't spin up (two are net new to the system). So that won't be happening again. But if that's the case regarding repairing the parity drive, then yes it's probably safer to just bite the bullet on repairing the two data drives.
  11. I won't bore anyone with the details, but after some compounding bonehead mistakes I'm currently stuck with three drives that will not spin up (neither in my server nor in a USB enclosure): two data drives and one of my two parity drives. Not sure if this makes sense, but if I used a data recovery service on that parity drive, would I then be able to replace the two dead data drives without any data loss? (The parity sketch after this list is relevant here too.)
  12. Glad I misunderstood this, thanks. This should be a fairly painless migration then.
  13. I'm running out of both drive space and drive slots on my current server, and am looking at migrating to a smaller server chassis connected to a DAS unit (Lenovo SA120). After significant research, this appears to be the optimal way to expand my server within my existing wallmount cabinet (AR112, short depth). I'd be moving 2x parity drives, a 2x SSD cache pool, 3x drives connected directly to the motherboard, and 3x drives connected through an IcyDock 2x 5.25" to 3x 3.5" backplane cage (MB153SP-B) over to a server that would ideally hold just the parity and cache drives; everything else would be housed in the DAS. Any advice on how best to make this transfer without breaking my environment or losing data? I'm reusing the motherboard (X11SSM-F), and my understanding is that each drive should be plugged back into the same SATA port after migration, which obviously isn't possible if I'm moving most of the drives to the DAS (a serial-mapping sketch follows this list). Also, how much extra headache would it be to upgrade the parity drives during this migration? Thanks in advance for any guidance.
  14. I'm having issues with my main indexer regularly being disabled in this container. It isn't disabled for just a short period; it stays disabled until I manually re-enable it. Anyone else seeing this?

      2019-02-28 02:00:15 WARNING Unavailable indexer detected. Disabling for a short duration and will try again.
      2019-02-28 02:00:15 WARNING Unable to retrieve search results from dognzb [Status Code returned: 503]
      2019-02-27 19:22:19 WARNING [ERROR] string index out of range
      2019-02-27 18:08:40 WARNING [ERROR] string index out of range
      2019-02-27 17:37:07 WARNING [ERROR] string index out of range
      2019-02-27 17:37:07 WARNING [ERROR] string index out of range
      2019-02-27 17:31:52 WARNING [ERROR] string index out of range
      2019-02-27 17:16:38 WARNING [ERROR] string index out of range
      2019-02-27 07:09:26 ERROR Could not locate exceptions.csv file. Make sure it is in datadir: /config/mylar
  15. Thank you! Changing /mnt/user/appdata/radarr/ to /mnt/cache/appdata/radarr/ appears to have solved the issue. Edit: I did not delete/remove any data.
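
For post 1: one way to confirm which egress IP the feed provider actually sees is to run the same check from inside the container and from the desktop, then compare. A minimal sketch, assuming Python 3 is available in the container (e.g. via docker exec <container> python3 ...) and using the public api.ipify.org echo service; any equivalent IP-echo endpoint works.

    # egress_ip.py - print the public IP this host or container egresses from.
    # Run once inside the container and once on the desktop, then compare.
    import urllib.request

    def egress_ip(timeout: float = 10.0) -> str:
        # api.ipify.org returns the caller's public IP as plain text
        with urllib.request.urlopen("https://api.ipify.org", timeout=timeout) as resp:
            return resp.read().decode("utf-8").strip()

    if __name__ == "__main__":
        print(egress_ip())

If both sides report the expected VPN address, the provider is likely keying on more than the source IP (e.g. a session cookie set at feed-creation time).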
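
For post 2: a quick way to see which interface currently carries the default gateway, without relying on the GUI. A sketch assuming a Linux host with a readable /proc/net/route; running ip route show default in a shell gives the same answer.

    # default_route.py - print default-route entries from /proc/net/route.
    import socket
    import struct

    def default_routes():
        with open("/proc/net/route") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                iface, dest, gateway = fields[0], fields[1], fields[2]
                if dest == "00000000":  # destination 0.0.0.0/0 == default route
                    # addresses are little-endian hex in this file
                    gw = socket.inet_ntoa(struct.pack("<L", int(gateway, 16)))
                    yield iface, gw

    if __name__ == "__main__":
        for iface, gw in default_routes():
            print(f"default via {gw} dev {iface}")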
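
For posts 3 and 4: the Fix Common Problems error ("Unable to communicate with GitHub.com") can be narrowed down by testing DNS resolution and HTTPS reachability separately from the server. A rough sketch, assuming Python 3 on the host:

    # github_check.py - distinguish DNS failures from connect/TLS failures.
    import socket
    import urllib.request

    HOST = "github.com"

    try:
        addr = socket.gethostbyname(HOST)  # DNS resolution step
        print(f"resolved {HOST} -> {addr}")
        with urllib.request.urlopen(f"https://{HOST}", timeout=10) as resp:
            print(f"HTTPS reachable, status {resp.status}")
    except socket.gaierror as exc:
        print(f"DNS failure: {exc}")
    except OSError as exc:  # URLError subclasses OSError: connect/TLS/timeout
        print(f"connect failure: {exc}")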
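
For post 5: until the provider re-enable behavior is fixed upstream, one blunt Docker-level workaround is restarting the container shortly after the daily reset (12:01am in the log). This is a sketch under two assumptions worth verifying first: that the container is named mylar, and that Mylar actually re-enables providers on startup. A cron entry running docker restart mylar directly would do the same.

    # restart_mylar.py - schedule just after the daily API-limit reset,
    # e.g. with the cron entry: 5 0 * * * /usr/bin/python3 /path/restart_mylar.py
    import subprocess

    CONTAINER = "mylar"  # hypothetical name; substitute your container's

    subprocess.run(["docker", "restart", CONTAINER], check=True)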
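
For posts 9 and 11: why a partial rebuild isn't possible with one surviving parity drive and two missing data disks. Single parity is a bytewise XOR across all data disks, so it supplies one equation per byte position, enough to solve for one unknown disk but never two. A toy illustration:

    # Toy single-parity (XOR) model over four "disks" of equal-length bytes.
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    disks = [b"\x10\x20", b"\x33\x44", b"\x0f\xf0", b"\xaa\xbb"]  # data disks 1-4
    parity = reduce(xor, disks)  # parity1 = d1 ^ d2 ^ d3 ^ d4

    # One missing disk is recoverable: XOR parity with all survivors.
    rebuilt_d3 = reduce(xor, [parity, disks[0], disks[1], disks[3]])
    assert rebuilt_d3 == disks[2]

    # Two missing disks are not: parity ^ survivors yields only d3 ^ d4,
    # a single equation with two unknowns.
    assert reduce(xor, [parity, disks[0], disks[1]]) == xor(disks[2], disks[3])

This is also why recovering the dead parity drive in post 11 matters: with both parity drives present again, dual parity can reconstruct up to two missing data disks.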
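
For post 13: before moving drives between the chassis and the DAS, it helps to record which serial number maps to which array slot, since Unraid identifies array members by drive serial rather than by SATA port (worth confirming in the docs for your version). A small Linux sketch that maps the stable /dev/disk/by-id names, which embed model and serial, to the kernel devices they currently point at:

    # list_serials.py - map stable by-id names to current kernel devices.
    import os

    BYID = "/dev/disk/by-id"

    for name in sorted(os.listdir(BYID)):
        # whole-disk ATA/SCSI entries only; skip partition symlinks
        if name.startswith(("ata-", "scsi-")) and "-part" not in name:
            target = os.path.realpath(os.path.join(BYID, name))
            print(f"{name} -> {target}")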