zbron

Members
  • Posts: 20
  • Joined
  • Last visited


zbron's Achievements

Newbie (1/14)

Reputation: 3

  1. That’s where your downloads folder is. It’s not useful for Sonarr and Radarr functionality, but it is useful if you want to manually trigger downloads via Jackett.
  2. Wondering if there’s any way to accomplish this using the current build of Binhex-Sonarr. Basically, I’d like to avoid needing to manually import episodes with TBA titles, but I don’t see this option in the client. Also, thanks for all the great containers you provide!!
  3. I was having the same issue this AM (the proxy was set up on Radarr and Sonarr, not Jackett, and everything stopped working). It was 100% fixed via your FAQ linked above: I removed the proxy config from Radarr and Sonarr, added HTTP Proxy settings to Jackett pointing at the DelugeVPN container's proxy port, and added the Jackett port to the Container Variable ADDITIONAL_PORTS in Deluge's Docker image settings. Hope this helps someone else! And @binhex, you are incredible. Thank you for all you do for us!
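As a rough sketch, the wiring described in that post looks like the following. The port numbers are assumptions on my part (8118 is the usual default for DelugeVPN's bundled Privoxy, 9117 for Jackett); substitute whatever your containers actually use.

```shell
# Hypothetical sketch of the proxy setup described above; ports and
# container names are the common defaults, not taken from the post.

# 1) In the binhex-delugevpn container template, route Jackett's port
#    through the VPN container:
#      Container Variable: ADDITIONAL_PORTS = 9117

# 2) In Jackett (Settings -> HTTP Proxy), point at DelugeVPN's Privoxy:
#      Proxy type: HTTP
#      Proxy URL:  <delugevpn-container-ip>
#      Proxy port: 8118

# 3) Remove any proxy settings previously configured inside Radarr and
#    Sonarr themselves -- only Jackett should use the proxy.
```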
  4. Thanks @JorgeB, will try that now and hope it solves the issue.
  5. ChatNoir - thanks for spotting this! Any thoughts on how to fix it? Sounds like I have a bunch of research to do. Hopefully the drive isn't dead... it's been in my cache pool for only 1.5 years, but I guess the excessive-writes issue could have burnt it out. I'd also appreciate any insight you can offer as to whether this is a hardware issue (replace the NVMe drive) or a software issue (nuke the cache, reformat, and restore from backup). I performed a BTRFS scrub (no errors) and also a filesystem check (output attached).
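For reference, the checks mentioned above can be run like this. The mount point and device path are assumptions (a typical Unraid cache pool); adjust them for your system.

```shell
# Hypothetical paths -- /mnt/cache and /dev/nvme0n1p1 are examples only.

# Scrub the mounted pool (-B waits for completion) and review the result:
btrfs scrub start -B /mnt/cache
btrfs scrub status /mnt/cache

# Read-only filesystem check; the pool must be unmounted first, and
# --repair should never be used without expert advice:
btrfs check --readonly /dev/nvme0n1p1

# Per-device error counters are also worth checking:
btrfs device stats /mnt/cache
```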
  6. Currently running 6.8.3 and having a new and strange issue I can't figure out. The server had been running for multiple months with little to no issue, but recently, every few days, my log fills up overnight (usually between 4:30 and 5:00 AM), which basically causes the server to cease to function (all Dockers shut down except Plex, which ceases to function) and can only be fixed with a reboot. While the reboot fixes the issue, it keeps recurring, and I'd like to fix it at the source. All my Dockers have the extra parameters --log-opt max-size=50m --log-opt max-file=1, my Docker log rotation is currently on, and I am running no VMs. Below I've attached a few items I thought might help:
     - du -sm /var/log/* output
     - all diagnostics files (strangely, my syslog doesn't have any entries after ~2:00 AM)
     - Mover settings
     - Mover Tuning settings
     - the notification I received from Fix Common Problems telling me the log was full
     - SSD TRIM settings
     Thanks in advance for the help, Unraid community! Diagnostics.zip
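One way to confirm that per-container limits like the ones above actually took effect is to inspect the container's effective log configuration. The container name here is just an example, not one from the post.

```shell
# Show the effective logging driver and options for a container
# ("binhex-plexpass" is an example name; use your own container's name):
docker inspect --format '{{json .HostConfig.LogConfig}}' binhex-plexpass

# If the extra parameters were applied, the output should look
# something like:
#   {"Type":"json-file","Config":{"max-file":"1","max-size":"50m"}}
```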
  7. Thank you Squid, really appreciate the help.
  8. Will this blow up everything/settings/etc.? Or will I simply need to make sure all dockers are configured the same way as before?
  9. Thank you, I'll add that to all my containers and report back if it doesn't work. Is there any way to tell, from what I posted, which Docker was creating the gigantic log? Assuming this does work, what's the easiest way to shrink the docker.img file down to a more reasonable 35GB or so? Or is it not worth it?
  10. Forgot to be more specific in my previous post: the exact same 45G file showed up in response to du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60, so I followed up with the same find /var/lib/docker/containers/ -type f -name "*.log" -delete that seemed to work (temporarily) yesterday. These two commands were run before I posted the output of the php command.
  11. Seems like the problem is not fixed, as I had to go through the same process today. See above for the results of the php command.
  12. I've been driving myself a bit crazy trying to determine what is filling up my docker.img - I've already increased the size of the image way more than I should have and am still struggling (91% of a 65GB image file). I've attached the results of the container-size button on the Docker tab and the results of running docker system df -v, and would much appreciate any help you could offer. In my search for the culprit, I ran du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60, which resulted in the following. I then ran find /var/lib/docker/containers/ -type f -name "*.log" -delete, which seems to have deleted the 45G files from the above picture, but the docker image utilization hasn't changed. I'm pretty desperate for help at this point.
     Edit: literally the moment I was about to submit this topic, the docker image utilization dropped to <15%. So it's less urgent, but I'm curious about the following: Was deleting the gigantic logs above the fix? How can I avoid these files growing so large in the future? Alternatively, any thoughts as to why this magically fixed itself? The only other thing I did was turn off all my Dockers (and turn them back on after utilization was under control). I gather my docker image file is WAY too big, which could be problematic. What should I do here in order to preserve all my Dockers/settings as is?
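The log-hunting pipeline from that post can be demonstrated safely on a scratch directory instead of /var/lib/docker/containers/. The directory layout and the fake container ID below are made up for illustration; Docker's real per-container logs follow the same <id>/<id>-json.log naming.

```shell
# Self-contained demo of the pipeline above, run against a temp directory
# (the "abc123" container ID and file sizes are illustrative).
tmp=$(mktemp -d)
mkdir -p "$tmp/abc123"
head -c 1048576 /dev/zero > "$tmp/abc123/abc123-json.log"  # fake 1 MiB container log
printf '{}' > "$tmp/abc123/config.v2.json"                 # small sidecar config file

# 1) Rank files by size, largest first (same shape as the du command above):
du -ah "$tmp" | grep -v "/$" | sort -rh | head -5

# 2) Delete only the *.log files; other files are left alone:
find "$tmp" -type f -name "*.log" -delete
ls "$tmp/abc123"   # the fake log is gone, config.v2.json remains
```

Note that deleting the log files frees space on disk, but Docker keeps writing to the (now deleted) file handle until the container is restarted, which matches the "nothing changed until I cycled my Dockers" behavior described in the post.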
  13. Thank you! The docker container appears to be working correctly now, really appreciate the help.
  14. Thanks so much for replying back - I just read over the attached link (thanks for providing the resource), and I can't figure out exactly how I've messed the volume mapping up. Below I've attached screenshots of everything I see in the advanced view.
  15. Hey Binhex - thanks so much for your work. I use 6 of your dockers regularly and they are fantastic! I'm hoping you can help me with an issue I'm having with the Plexpass docker... I'm pretty much an unRAID/Docker novice (I just built my first setup a few weeks ago), so I imagine it's a pretty simple thing. I set up the docker via CA, changed the settings for the media and transcode directories, and can't get anything to launch. I see that it says it doesn't have access to mnt/user (which doesn't make sense to me, since I gave it access), and mnt/user/Transcode also definitely exists. Any ideas? Log for: binhex-plexpass.pdf