Everything posted by Mesias

  1. Hi there, I posted the problem I'm facing on the Unraid general support forum but received no replies, so I'm hoping I can get some feedback here. I recently migrated from Letsencrypt to Swag, and since the change I lose access to multiple sections of Unraid and all Docker containers stop functioning properly. Through the UI I lose access to the Dashboard, the Docker tab, and the bottom portion of the Main tab. The console becomes unresponsive to any docker command and to the "powerdown" command, so every time I restart the system it has to run a parity check. None of the apps behind the reverse proxy are reachable from the outside either. The migration was based on a fresh installation; I just copied the conf files over from Letsencrypt. The Unraid log shows the following error (it varies depending on the page I'm trying to load): nginx: 2020/10/12 10:06:48 [error] 32315#32315: *246154 upstream timed out (110: Connection timed out) while reading response header from upstream, client ... upstream: "fastcgi://unix:/var/run/php5-fpm.sock" ... Any suggestion would be greatly appreciated. Please help!
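One detail worth noting: the upstream in that error line is Unraid's own php5-fpm socket, i.e. the webGUI's PHP backend, not anything inside the Swag container. A minimal sketch (the field names are my own, not an nginx API) that pulls the upstream out of a log line shaped like the one quoted above:

```python
import re

# The error line quoted in the post above (client address elided in the original)
LINE = ('2020/10/12 10:06:48 [error] 32315#32315: *246154 upstream timed out '
        '(110: Connection timed out) while reading response header from upstream, '
        'client ... upstream: "fastcgi://unix:/var/run/php5-fpm.sock"')

def parse_nginx_error(line):
    """Extract timestamp, severity, and the upstream nginx was waiting on."""
    m = re.search(r'^(\S+ \S+) \[(\w+)\].*?upstream: "([^"]+)"', line)
    if not m:
        return None
    ts, level, upstream = m.groups()
    return {"time": ts, "level": level, "upstream": upstream}

info = parse_nginx_error(LINE)
print(info["upstream"])  # fastcgi://unix:/var/run/php5-fpm.sock
```

Since that socket belongs to the host's webGUI rather than the reverse-proxy container, a timeout here points at the Unraid box itself being overloaded or wedged, which would also explain the unresponsive console.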
  2. I lost access to the Dashboard, Docker, and Apps tabs... Any suggestion would be greatly appreciated.
  3. Hi there, I'm having an issue with the docker service after I added a cache drive and updated Letsencrypt to Swag. After restarting the system, the logs show the following error and I can't access any of the dockers' UIs: nginx: 2020/10/12 10:06:48 [error] 32315#32315: *246154 upstream timed out (110: Connection timed out) while reading response header from upstream, client ... upstream: "fastcgi://unix:/var/run/php5-fpm.sock" ... I also lose the "powerdown" capability through the console or the UI, so every time I restart the system it has to do a parity check. Any ideas? Please help! mesias-diagnostics-20201012-1029.zip
  4. Yes, it's a "downloads" pivot folder monitored by other apps. The system was doing just fine with the small SSD; the issue started once I began downloading too many large files. The files needed to stay longer for seeding, which pushed the drive to its limit. Definitely poor balancing on my side, nothing to blame on the software. I now want to increase the cache size to get more capacity. After reading the FAQ in more detail, I'll balance the cache into single mode.
  5. The UI shows the cache space increased after adding the drive, but there are no writes to the new drive, so I assume that's just a display limitation. Well, I deleted some files and now the cache utilization is below the capacity of the original, smaller cache drive. Can I go ahead and run a balance to convert the cache to RAID0 while there is still some data on the drive?
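For anyone reasoning through the same question: the rule of thumb is that a profile-converting balance works with data on the pool as long as that data fits the target profile's usable capacity. A rough sketch of the capacity math (the drive sizes are made up, and the formulas ignore btrfs metadata overhead, so treat the numbers as ballpark only):

```python
# Rough usable-capacity estimate for a btrfs data profile.
# Simplified: ignores metadata chunks and allocation rounding.
def usable_bytes(drive_sizes, profile):
    total = sum(drive_sizes)
    if profile == "raid1":
        # Every block is mirrored, so no single drive may hold more data
        # than all the other drives combined.
        return min(total // 2, total - max(drive_sizes))
    if profile in ("single", "raid0"):
        return total  # data stored once, spread across all devices
    raise ValueError(f"unknown profile: {profile}")

TB = 10**12
pool = [1 * TB, 1 * TB]   # hypothetical two-SSD cache pool
used = int(0.8 * TB)      # hypothetical data currently on the pool

# Converting in place is fine as long as the data fits the target profile:
print(used <= usable_bytes(pool, "single"))  # True
print(usable_bytes(pool, "raid1"), usable_bytes(pool, "single"))
```

This also explains the symptom in the post: a two-device pool in the default mirrored profile shows more raw space but doesn't give you more usable space until the balance converts it.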
  6. Hi there, I'm not sure how best to describe the issue I'm experiencing, so I'll list what happened: 1. I had Deluge downloading very large files, all working as expected. 2. After a while, the temporary download folder on the cache drive reached its limit; basically the single SSD I had at the time was filled to the brim. 3. I ignored the system for a couple of days, which resulted in all docker apps crashing. They were all red, and the Fix Common Problems plugin was showing an error with the docker image. 4. I added a new SSD to the system, which was recognized immediately, and a cache pool was created. This didn't resolve the problem; all the errors continued. No docker apps available, no updates possible. 5. Thinking the docker image was corrupted, I deleted docker.img and added one app, Deluge. Deluge worked just fine and I could install maybe one more app, but after a while I couldn't install anything else. The docker app install ended every time with "Error: open /var/lib/docker/tmp/GetImageBlobXXXXXXXXX: read-only file system". One of the files still downloading in Deluge showed an "Input/output error". 6. I've been reading the forums for a couple of days, but everything I've found relates to a full docker.img file, which is not the case here; Deluge is currently using only 7% of the file. What else could be causing this issue? Attached you can find the diagnostics folder. mesias-diagnostics-20201008-2137.zip Thanks in advance.
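The "read-only file system" error suggests the filesystem behind /var/lib/docker remounted itself read-only after hitting errors (btrfs does this to protect itself), which would match both the failed image pulls and the Input/output error in Deluge. A quick way to confirm is to try creating a file on the suspect mount. A minimal probe sketch (the path passed in is whatever mount you want to test; nothing here is docker-specific):

```python
import os
import tempfile

def is_writable(path):
    """Probe whether `path` sits on a writable filesystem by creating and
    deleting a throwaway file -- roughly the same open() for writing that
    the failed GetImageBlob step needs to succeed."""
    try:
        fd, name = tempfile.mkstemp(dir=path)
    except OSError:
        return False  # read-only filesystem, missing dir, or no permission
    os.close(fd)
    os.remove(name)
    return True

# e.g. is_writable("/var/lib/docker/tmp") or is_writable("/mnt/cache")
print(is_writable(tempfile.gettempdir()))
```

If the probe fails on the cache mount while docker.img utilization looks fine, the problem is the underlying filesystem, not a full image file.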
  7. I initially did a huge bulk import... all releases are in their folders. Clearing the queue definitely resolved the issue: I went to the Queue tab in the Activity screen and deleted everything from the download client. Now the logs are clean of import-service errors. Thanks again for all the assistance!
  8. That didn't help... The import service is looking for versions that were never downloaded... all releases in both libraries (Radarr and Sonarr) are flagged "downloaded". I'm stuck; I don't yet understand the import process. Why, if a release is already flagged as downloaded, does the import service keep trying to retrieve it from the download folder?
  9. That's exactly what's happening... I tried a fresh request and it worked just fine. On the other hand, Sonarr and Radarr are still looking for releases that were already downloaded. Not sure how to clear that queue. Thanks for all the help!
  10. I changed the path of the Sabnzbd categories and it didn't help. I then changed its download folders to absolute paths, but that didn't resolve the issue either; the logs continue showing the default path. Note that I restarted all three dockers. Screenshot of the User Folders page in Sabnzbd: The previous step changed the path for the categories folders; they are now relative to /downloads: After all this, the logs keep showing the default path: Is there any other log I can enable that would provide more info?
  11. I'm going to try changing the Sabnzbd categories to absolute paths. Right now they use the default relative folders, which resolve under /config/Downloads/complete. This should fix it... Thanks for pointing this out!
  12. I posted this in the Radarr and Sonarr support forums, but it looks to me like this is a Docker issue... Something is off and I can't find where the problem lies... Radarr and Sonarr are not looking for the downloads folder in the correct path. Sonarr log message: Import failed, path does not exist or is not accessible by Sonarr: /config/Downloads/complete/tv/ Radarr log message: Import failed, path does not exist or is not accessible by Radarr: /config/Downloads/complete/movies/ For some reason Radarr and Sonarr are looking under the config folder instead of the path defined for the downloads folder. Here are the details of my setup: Both Sabnzbd and Radarr are dockers installed on an Unraid box. Sabnzbd path to the downloads folder as defined in the docker: /downloads -> /mnt/user/appdata/sabnzbd/Downloads/complete/ Radarr path to the downloads folder as defined in the docker: /downloads -> /mnt/user/appdata/sabnzbd/Downloads/complete/ Host for Sabnzbd in Radarr is the local IP address; "localhost" didn't work. Permissions for completed downloads under Sabnzbd: 775 File chmod mask in Radarr: 0775 Any file browser within Radarr is able to see the correct downloads folder. Screenshot of Docker settings: Screenshot of a file browser window within Sonarr showing it can see the /downloads folder and its content: Any idea where to look? Radarr Version: Version: Mono Version: (tarball Wed Nov 8 20:37:08 UTC 2017) AppData directory: /config Startup directory: /opt/radarr Sonarr Version: Version: Mono Version: AppData directory: /config Startup directory: /opt/NzbDrone
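To make the mismatch concrete: Sabnzbd reports the completed path as it sees it inside its own container (/config/Downloads/complete/...), but Radarr only has /downloads mounted, so that reported path means nothing to it. A sketch of the translation a remote path mapping would have to perform, assuming the usual linuxserver-style /config -> /mnt/user/appdata/sabnzbd mapping for the Sabnzbd container (that mapping is my assumption; the /downloads one comes from the settings quoted above):

```python
# Container-path translation sketch. The /config mapping below is an
# assumption about the Sabnzbd container, not taken from the post.
SAB_CONFIG = ("/config", "/mnt/user/appdata/sabnzbd")
RADARR_DOWNLOADS = ("/mnt/user/appdata/sabnzbd/Downloads/complete", "/downloads")

def translate(path, mapping):
    """Rewrite a path from one side of a bind mount to the other."""
    src, dst = mapping
    if not (path == src or path.startswith(src + "/")):
        raise ValueError(f"{path} is not under {src}")
    return dst + path[len(src):]

# What Sabnzbd reports -> where it lives on the host -> what Radarr can see
host_path = translate("/config/Downloads/complete/movies", SAB_CONFIG)
radarr_path = translate(host_path, RADARR_DOWNLOADS)
print(host_path)    # /mnt/user/appdata/sabnzbd/Downloads/complete/movies
print(radarr_path)  # /downloads/movies
```

The files and permissions are fine; the failure is that Radarr receives the first path verbatim and tries to open it in its own container, where no /config/Downloads exists. Making Sabnzbd report paths under a folder both containers mount identically (or configuring a remote path mapping) removes the need for any translation.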
  13. Actually, this morning I reattached the screen session and everything seemed to be fine. Updates were showing at intervals as usual, but after a while it stopped receiving them again... very weird. I'm just letting it run until the HD activity light goes off.
  14. Hello there... I'm running the preclear script (v1.13) on a 4TB HGST Deskstar HDS724040A through screen in a telnet session (5.0-rc13). The issue is that the telnet window stops receiving updates. After an hour or so the screen just stops refreshing the temperature, the elapsed-time clock, etc. I tried running it twice on this drive with the same result, but at different points: first while writing zeroes, and the second time during the initial read. The first time I thought the script had halted, I panicked, and I restarted the box. The second time I looked deeper - I can still see the preclear_disk process taking CPU time at intervals, and I also see the dd command doing the same. Is there any other process I should check before assuming the script failed? Has anyone else experienced this before? I tried searching and found only one other thread with a similar situation, but it didn't help much. Should I just let it run until the light on the box goes off? Your advice is greatly appreciated.
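For anyone else staring at a seemingly frozen preclear: before assuming the script died, you can check whether the underlying dd is still moving data. On Linux, /proc/<pid>/io exposes cumulative byte counters per process, so a counter that keeps growing means the drive is still being read or written even when the screen output is stuck. A minimal sketch (Linux-only; the pid is whatever `ps` shows for the dd process):

```python
import os
import time

def io_counters(pid):
    """Cumulative I/O counters for a process, read from /proc (Linux only)."""
    counters = {}
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            key, _, value = line.partition(":")
            counters[key.strip()] = int(value)
    return counters

def is_progressing(pid, interval=5.0):
    """True if the process read or wrote any bytes during `interval` seconds."""
    before = io_counters(pid)
    time.sleep(interval)
    after = io_counters(pid)
    return (after["read_bytes"] > before["read_bytes"]
            or after["write_bytes"] > before["write_bytes"])

# e.g. is_progressing(dd_pid) -- if it returns True, the preclear is alive
print(sorted(io_counters(os.getpid())))
```

As an alternative, GNU dd prints its transfer statistics to stderr when sent SIGUSR1 (`kill -USR1 <dd_pid>`), which is another way to confirm it is alive without restarting the box.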