Community Reputation

0 Neutral

About DoesItHimself

  1. Edit: Did more digging and I think I've figured out the actual issue, though I'm not entirely sure how to fix it or why it happened. It appears my cache drive somehow got added to the list of drives included in my 'Media' share and is now being included in the disks to fill when new files are brought in. The only issue I'm having is how to remove it. I checked my Media share and it only has disks 1-5 checked, which are my five normal disks. I can't find anywhere that shows my cache as part of that share, yet if I add a file to the cache it populates to the Media share when I
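A quick way to see exactly which disks Unraid considers part of a share is to read the share's config file from the flash drive. This is only a sketch: the path and key names are assumptions based on how Unraid typically stores share settings, and 'Media' is the share name from the post above.

```shell
# Assumed location of per-share settings on the Unraid flash drive;
# key names like shareInclude/shareExclude/shareUseCache may vary by version.
cat /boot/config/shares/Media.cfg

# The user-share filesystem merges every drive (cache included) that has a
# top-level folder matching the share's name, regardless of the included-disks
# list -- so check whether the cache holds a 'Media' folder.
ls -la /mnt/cache/Media 2>/dev/null
```

If files turn up under /mnt/cache/Media, that alone explains why cache content appears in the share, even though the cache never shows up in the included-disks checkboxes.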
  2. Unfortunately I've hit a bad streak of problems lately. I've had a few other posts in the past month as well, after a few years of zero issues with my server. I've had this happen a handful of times now and chalked it up to other things previously, but I've slowly ruled those out and have now gotten to this point. My log consistently fills up and seems to cause my cache drive to fill and corrupt my Docker image, forcing me to rebuild it over and over. I captured diagnostics (posted) after the log filled, before restarting so it could clear out and get some space. I had
  3. Ran it. Output is attached. I noticed the write-up said it should provide recommended next steps, but I didn't see any. It seems like there are a few different xfs_repair variants to attempt. disk 3 filesys.txt
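For reference, a dry run first is the usual order of operations with xfs_repair. This is a sketch, not a recipe: the device name for disk 3 is an assumption (some Unraid releases append a partition suffix), and the array should be started in Maintenance mode so the repair goes through the parity-protected md device.

```shell
DISK=3
DEV="/dev/md${DISK}"   # assumed device for array slot 3; some versions use /dev/md3p1
xfs_repair -n "$DEV"   # -n: check only -- report problems, write nothing
# xfs_repair "$DEV"    # actual repair; add -L only if it refuses due to a dirty log,
                       # since -L zeroes the log and can lose recent metadata
```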
  4. I was manually sorting through files via rootshare today, and when I navigated to one specific disk it threw an error at me (some sort of I/O error) and disappeared from the list of disks in the rootshare. I restarted my server, and upon restart the disk is showing as unmountable. It had previously acted finicky, but a restart cleared it up. The disk has been in the system its entire life; I bought it brand new and put it in a few years ago. I tried looking through the diagnostic files (attached), but I just don't have enough background in this to sift through and find out what is going on
  5. Just updated everything on the server and I'm running into the below error. It's a paste of the log, which repeats over and over as Hydra tries to start. Nothing has changed on my end with the Docker container or any settings. I see it stating there is a corrupt config file; I've already tried deleting the container and image and reloading from a template, with no luck. Any ideas? 2021-03-16 02:46:26,300 INFO - Determined java version as '11' from version string 'openjdk version "11.0.10" 2021-01-19' 2021-03-16 02:46:26,301 INFO - Starting NZBHydra main pr
  6. Bumping this. I've tried a few more Docker removals and re-installs from my profile since the Unraid 6.9.1 update and the container update for Plex itself. I'm still stuck with a server that is "currently unavailable" despite being on the same LAN and logged in directly to the IP:port combo of Plex. Hoping someone might have an answer on this. I pulled some logs via a rootshare and noted a whole bunch of 'normal' Plex activity such as renaming files, identifying that I added things, etc.
  7. Ran into an issue today, after a slew of other issues, that a lot of googling and other troubleshooting has not solved. My Plex container is up and running but is unreachable (all on the same network via LAN). My server went through a long move and restarted fine, but after a few weeks a drive randomly became disabled, and yesterday I had to remove it from the array, add it back, and rebuild from parity. I had updated to 6.9 a day or two before the drive issue. I also had a weird issue where my Docker image got full, but it ended up clearing itself out after the rebuild. Now I'm running in
  8. As the title states, I had a drive unexpectedly disabled today, and I'm having trouble getting things back to a working state. Here's a quick rundown of the events and steps I've taken: Long-distance move. Re-deployed server, updated, everything seemed fine (2 weeks runtime). Short move via vehicle, re-deployed and updated to the newest OS version (6.9). Server updated, ran for approx. 24 hours with no issues. Returned today to a disabled drive and lots of errors. SMART test says the drive has no issues. Googled a lot. Stopped array, unmounted drive, started array.
  9. System setup: Win 10 Pro laptop, Unraid server. Unfortunately it's my turn to jump on the share-folder issues. The username-matching fix helped me out early on, but now I'm running into problems. Mine's a bit different from the ones I've read so far, though. I recently set up my Unraid server using a lot of SIO guides, including the 'ultimate rootshare' setup. I didn't realize it at the time, but I never had a working (permission-wise) 'Media' share via the rootshare or the self-setup option. I was having permission issues trying to use TinyMediaManager on the folders within the Media sha
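Permission problems like this usually come down to ownership on the share itself. A sketch of how to inspect it, assuming the share lives at /mnt/user/Media; nobody:users are Unraid's stock share credentials, and the chown/chmod lines only approximate what the built-in New Permissions tool does (prefer that tool on a live server):

```shell
ls -ldn /mnt/user/Media        # numeric owner/group of the share root
ls -ln /mnt/user/Media | head  # and of the files inside it

# Rough equivalent of Unraid's "New Permissions" (sketch; user shares only):
chown -R nobody:users /mnt/user/Media
chmod -R u+rw,g+rw /mnt/user/Media
```

If files show an owner other than nobody:users, tools writing over SMB as a different user will hit exactly this kind of permission wall.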
  10. As the title says, my Docker image has crept to 90% full (of 20GB). I have run the following troubleshooting steps with no success: 1. Verified download clients are correctly mapped. The system has been active and would have filled the image multiple times over (so much so that my cache was getting utilization warnings until the files were moved by Sonarr/Radarr). 2. Checked log files via du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60 3. Tried this command to see if anything stood out: docker ps -s The only things that stand out to me after some more
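Two more commands can narrow down where the 20GB is going, complementing the du pipeline above. Sketch only; the container ID in the truncate line is a placeholder for whichever container the du command flags.

```shell
docker system df -v | head -n 40   # space split across images, containers, volumes
docker ps -s                       # per-container writable-layer size

# If one container's JSON log dominates, emptying it in place is safe:
# truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
```

A large writable layer in `docker ps -s` usually means a container is writing to a path that isn't volume-mapped, which is the classic cause of a creeping Docker image.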
  11. I've been having issues with write permissions. qBittorrent has been able to successfully receive files from Radarr/Sonarr, however it will never start a download. Logs confirm 'permission denied'. Deluge works and has the exact same volume mapping. I have tried the following steps with no success: 1. Used the default PUID/PGID. 2. Used the PUID/PGID that successfully works with Deluge (exact same path as Deluge). 3. Used the 'id (user)' command and input the PUID/PGID (values shown in the screenshot). Regardless of what values I use, it will not write.
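Comparing the container's PUID/PGID against what the download path actually allows can pin this down. A sketch: /mnt/user/downloads is a stand-in for the real volume mapping, and 99/100 (nobody/users) are only Unraid's usual defaults, not a confirmed value here.

```shell
id nobody                                # on stock Unraid this is typically uid 99, gid 100
ls -ldn /mnt/user/downloads              # numeric uid/gid that own the download dir
stat -c '%u %g %A' /mnt/user/downloads   # owner, group, and permission bits
# Writes succeed only if the container's PUID matches the owner uid, its PGID
# matches a group with write permission, or the directory is world-writable.
```

Since Deluge writes fine with the same mapping, diffing the two containers' effective uid/gid (e.g. `docker exec <name> id`) against the output above should show exactly which side mismatches.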