DoesItHimself

Members
  • Posts: 11

Everything posted by DoesItHimself

  1. Edit: Did more digging and I think I figured out the actual issue, although I'm not entirely sure how to fix it or why it happened. It appears my cache drive somehow got added to the list of drives included in my 'Media' share and is now being included in the disks to fill when new files come in. The problem is that I can't figure out how to remove it. I checked my Media share and it only has disks 1-5 checked, which are my five normal disks. I can't find anywhere that shows my cache as part of that share, yet if I add a file to the cache it shows up in the Media share when I browse via rootshare -> user -> Media. (A sketch for checking this is in the list after these posts.)
  2. Unfortunately I've hit a bad streak of problems lately; I had a few other posts in the past month as well, after a few years of zero issues with my server. I've had this happen a handful of times now and chalked it up to other things previously, but I've slowly ruled those out and have now gotten to this point. My log consistently fills up, and that seems to cause my cache drive to fill and corrupt my docker image, forcing me to rebuild it over and over. I captured diagnostics (attached) after the log filled, before restarting to clear it out and get some space back. I had to run xfs_repair on a drive (disk 3) that went unmountable about a week ago, and I also re-ran parity after that; it found 30k+ errors, but as far as I can tell they were corrected. So everything should be healthy, yet this log issue keeps recurring and causing problems. I'm starting to think I'll have to completely wipe everything and rebuild, but I've got 15TB of data across the array and I'd REALLY prefer not to have to find a way to back that up, as I currently have no way to do so due to a long-distance move. (See the log-usage sketch after these posts.) zunraid-diagnostics-20210401-2100.zip
  3. Ran it. Output is attached. I noticed the write-up said it should provide recommended next steps, but I didn't see any. It seems like there are a few different xfs_repair variants to attempt (a sketch of the usual order is in the list after these posts). disk 3 filesys.txt
  4. I was manually sorting through files via rootshare today, and when I navigated to one specific disk it threw an error at me (some sort of I/O error) and disappeared from the list of disks in the rootshare. I restarted my server, and upon restart the disk is showing unmountable. It had previously acted finicky, but a restart always cleared it up. The disk has been in the system its entire life; I bought it brand new and put it in a few years ago. I tried looking through the diagnostic files (attached), but I just don't have enough background in this to sift through them and find out what is going on. Unfortunately there is quite a bit of data on it that I would like to keep, as it is one of my two larger drives. It is disk 3 in my array for reference. Hoping I can find a solution that will allow me to save the data and the disk. zunraid-diagnostics-20210325-2229.zip
  5. Just updated everything on the server and I'm running into the error below; it's a paste of the log, which repeats over and over as Hydra tries to start. Nothing has changed on my end with the docker or any settings. I see it stating there is a corrupt config file - I've already tried deleting the container and image and reloading from a template with no luck. Any ideas? (A sketch for restoring the config from a backup is in the list after these posts.)
     2021-03-16 02:46:26,300 INFO - Determined java version as '11' from version string 'openjdk version "11.0.10" 2021-01-19'
     2021-03-16 02:46:26,301 INFO - Starting NZBHydra main process with command line: java -Xmx256M -DfromWrapper -XX:TieredStopAtLevel=1 -noverify -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/config/logs -Dspring.output.ansi.enabled=ALWAYS -jar /app/nzbhydra2/bin/lib/core-3.13.1-exec.jar --nobrowser --datafolder /config in folder /app/nzbhydra2/bin
     02:46:27.053 [main] DEBUG org.nzbhydra.config.ConfigReaderWriter - Using temporary file /config/nzbhydra.yml.bak
     at [Source: (File); line: 1, column: 1]
     java.lang.RuntimeException: Config file /config/nzbhydra.yml corrupted. If you find a ZIP in your backup folder restore it from there. Otherwise you'll have to delete the file and start over. Please contact the developer when you have it running.
       at org.nzbhydra.NzbHydra.initializeAndValidateAndMigrateYamlFile(NzbHydra.java:215)
       at org.nzbhydra.NzbHydra.main(NzbHydra.java:114)
       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
       at org.springframework.boot.loader.Launcher.launch(Launcher.java:109)
       at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
       at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
     Caused by: java.lang.RuntimeException: Config file /config/nzbhydra.yml corrupted. If you find a ZIP in your backup folder restore it from there. Otherwise you'll have to delete the file and start over. Please contact the developer when you have it running.
       at org.nzbhydra.NzbHydra.initializeAndValidateAndMigrateYamlFile(NzbHydra.java:215)
       at org.nzbhydra.NzbHydra.main(NzbHydra.java:114)
     2021-03-16 02:46:27,069 ERROR - Main process shut down unexpectedly. If the wrapper was started in daemon mode you might not see the error output. Start Hydra manually with the same parameters in the same environment to see it
     Logging wrapper output to /config/logs/wrapper.log
  6. Bumping this - I've tried a few more docker removals and re-installs from my saved template, after both the Unraid 6.9.1 update and the container update for Plex itself. I'm still stuck with a server that is "currently unavailable" despite being on the same LAN and logged in directly to the IP:port combo for Plex. Hoping someone might have an answer on this. I pulled some logs via a rootshare and noted a whole bunch of 'normal' Plex activity such as renaming files, identifying that I added things, etc.
  7. Ran into an issue today, after a slew of other issues, that a lot of googling and other troubleshooting has not solved. My Plex container is up and running but is unreachable (all on the same network via LAN). My server went through a long move and restarted fine, but after a few weeks a drive randomly became disabled, and yesterday I had to remove it from the array, add it back, and rebuild from parity. I had updated to 6.9 a day or two before the drive issue. I also had a weird issue where my docker image got full, but it ended up clearing itself out after the rebuild. Now I'm running into the above issue trying to reach my server from any of my usual devices (Roku, web app / webGUI). Nothing seems to get to it, even when I am logged in directly to the IP:port combo. I've run through all of the official Plex troubleshooting, and deleted my docker image and rebuilt it via saved container templates, but no luck. Unfortunately it's been quite some time since I've had hands on the server due to the move, and I'm a bit out of my depth at this point in troubleshooting. I tried to pull the container logs, but they just list a few startup items with no errors (log attached). I don't seem to have access to the actual Plex logs; I assume this is because I cannot actually get to my server. (A reachability-check sketch is in the list after these posts.) plexlog.txt
  8. As the title states, I had a drive unexpectedly become disabled today and I'm having trouble getting things back to a working status. Here's a quick rundown of the events and steps I've taken:
     1. Long-distance move.
     2. Re-deploy server, update, everything seems fine (2 weeks runtime).
     3. Short move via vehicle, re-deploy and update to newest OS version (6.9).
     4. Server updates, runs for approx. 24 hours with no issues.
     5. Return today to a disabled drive, lots of errors.
     6. SMART test says the drive has no issues.
     7. Google a lot.
     8. Stop array, unmount drive, start array. Stop array, remount drive, restart array, let parity rebuild start.
     9. Docker service is now unavailable and cannot be started (tried turning it on/off).
     Separate from this - I am waiting on my Unifi equipment to arrive, and that container did spin up and threw a whole bunch of warnings about the controller data partition filling up, which seem to be along the same timeline as the drive going down. The parity rebuild is ongoing, but I cannot get the docker service to restart. I did see a thread recommending uploading the diagnostics zip (attached) so some of the pros might be able to nail down what is going on. Edit - Also just noticed my cache drive is 100% full. My assumption is that I was pulling in media updates that were missed while the server was down, and when the drive went down nothing could be written to the array, so the cache simply filled up with media and logs waiting to be pushed over. (A sketch for checking cache/docker usage is in the list after these posts.) zunraid-diagnostics-20210303-2130.zip
  9. System setup - Win 10 Pro laptop, Unraid server. Unfortunately it's my turn to jump on the share folder issues. The username-matching fix helped me out early on, but now I'm running into problems, and mine are a bit different from the ones I've read so far. I recently set up my Unraid server using a lot of SIO guides, including the 'ultimate rootshare' setup. I did not realize it at the time, but I never had a working (permission-wise) 'Media' share via either the rootshare or the self-setup option. I was having permission issues trying to use TinyMediaManager on the folders within the Media share, thought nothing of it originally, and charged forward. Fast forward to today - I restarted the server and my 'ultimate rootshare' and personally set-up shares failed to reappear. I troubleshot for an hour before a restart of my laptop solved that. Frustrating to say the least. Now I have discovered that I specifically cannot write to the 'Movies' or 'TV' folders within my 'Media' share. All of my other shared folders, including the parent 'Media' itself, work just fine - appdata, download, isos, etc. I created test folders in a handful of other shares to verify, and I have no issues there; it's specifically writing/modifying within those two. (A permissions-check sketch is in the list after these posts.) Hoping @Frank1940 or some of the pros might know the obvious answer I'm missing here.
  10. As the title says - my docker image has crept to 90% full (of 20GB). I have run the following troubleshooting steps with no success:
     1. Verified the download clients are correctly mapped. They have been active and would have filled the image multiple times over if they weren't (so much so that my cache was getting utilization warnings until the files were moved by Sonarr/Radarr).
     2. Checked log files via: du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
     3. Tried this command to see if anything stood out: docker ps -s
     The only things that stand out to me after some more digging are some processes I found with cAdvisor. Sonarr/Radarr had processes called 'nobody' sitting in the ~1.7GB-2.2GB range, and I have no clue what they are or why they are there. The files are moving and naming correctly, and I haven't seen any issues with them staying on the cache. I also installed Pi-hole today and saw a handful of 202.70MB processes called 'sshd' that in total were north of 1GB. Same deal - no clue what they are or how to minimize them. (See the docker-usage sketch after these posts.) Hoping some of this rings a bell with someone here.
  11. I've been having issues with write permissions. qBittorrent has been able to successfully receive files from Radarr/Sonarr, however it will never start a download; the logs confirm permission denied. Deluge works and has the exact same volume mapping. I have tried the following steps with no success:
     1. Used the default PUID/PGID.
     2. Used the PUID/PGID that successfully work with Deluge (exact same path as Deluge).
     3. Used the 'id (user)' command and input the resulting PUID/PGID (values shown in screenshot).
     Regardless of what values I use, it will not write. (A sketch for verifying the UID/GID mapping is in the list after these posts.)
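
Sketches referenced in the posts above:

For post 1 (cache showing up in the Media share): in Unraid a user share is the union of the top-level folder of the same name on every array disk and on the cache, and the 'Included disk(s)' setting only governs array disks, so anything written to /mnt/cache/Media still surfaces in the share. A minimal sketch to confirm that and to push cache-resident files onto the array; the share name 'Media' is taken from the post, and the manual mover call assumes the share's "Use cache" setting is Yes:

    # Anything listed here is physically on the cache, not the array
    ls /mnt/cache/Media 2>/dev/null
    # Compare with what sits on the array disks
    ls /mnt/disk1/Media /mnt/disk2/Media 2>/dev/null
    # With the share's "Use cache" set to Yes, the mover migrates it; it can also be run manually
    mover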
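
For post 2 (the log filling up): /var/log on Unraid is a small RAM-backed filesystem, so a single repeating error can fill it quickly, and the culprit is usually visible in the syslog itself. A minimal sketch to see what is consuming the space before restarting:

    # How full is the log filesystem, and which files are the largest?
    df -h /var/log
    du -sh /var/log/* 2>/dev/null | sort -h | tail
    # Look at what is being written over and over
    tail -n 50 /var/log/syslog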
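
For post 3 (xfs_repair variants on disk 3): a rough sketch of the usual order of operations, assuming the array is started in Maintenance mode and disk 3 maps to /dev/md3 (running against the md device keeps parity in sync; the exact device name is an assumption to verify on your own system):

    # Dry run first: report problems without writing anything
    xfs_repair -n /dev/md3
    # If problems are reported, run the actual repair
    xfs_repair /dev/md3
    # Only if it refuses to run because of a dirty log, zero the log as a last resort
    xfs_repair -L /dev/md3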
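
For post 5 (corrupt nzbhydra.yml): the error message itself points at restoring the config from a ZIP in the backup folder. A minimal sketch, assuming the container's /config is mapped to /mnt/user/appdata/nzbhydra2 and that unzip is available; adjust the path and the backup file name to whatever actually exists there:

    # See which automatic backups exist and pick the newest one
    ls -lh /mnt/user/appdata/nzbhydra2/backup/
    # Check the archive layout, then extract just the config file back into /config
    cd /mnt/user/appdata/nzbhydra2
    unzip -l backup/<newest backup>.zip
    unzip -o backup/<newest backup>.zip nzbhydra.yml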
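
For post 7 (Plex up but unreachable): a quick reachability sketch, assuming the default Plex port 32400 and a container named plex (substitute your server IP and container name; curl may need to be swapped for wget depending on the image):

    # Does Plex answer on the LAN at all? A healthy server returns a small XML identity blob
    curl -s http://<server-ip>:32400/identity
    # Does it answer from inside its own container?
    docker exec plex curl -s http://localhost:32400/identity
    # If the container answers but the host does not, check how the port is actually published
    docker port plex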
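
For post 8 (cache 100% full and docker service down): a minimal sketch to see what is eating the cache and whether the docker image sits on it; the docker.img path shown is the common default and is an assumption:

    # Overall cache usage and the biggest top-level consumers
    df -h /mnt/cache
    du -xsh /mnt/cache/* 2>/dev/null | sort -h
    # The docker image usually lives on the cache; a device that hits 100% will often corrupt it
    ls -lh /mnt/user/system/docker/docker.img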
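
For post 9 (can't write to Movies/TV inside the Media share): a sketch for comparing the broken folders against the ones that work; Unraid shares normally expect nobody:users ownership with group write, and the built-in Tools > New Permissions can do the same reset share-wide. The chown/chmod lines are an assumption about what the comparison will show:

    # Compare ownership and permissions of the failing folders vs. the parent share
    ls -ld /mnt/user/Media /mnt/user/Media/Movies /mnt/user/Media/TV
    # If the two folders differ, reset them to the usual Unraid defaults
    chown -R nobody:users /mnt/user/Media/Movies /mnt/user/Media/TV
    chmod -R u+rwX,g+rwX /mnt/user/Media/Movies /mnt/user/Media/TV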
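
For post 10 (docker image creeping toward full): two more ways to narrow down which container is growing; note that the log-cap flags only limit Docker's own json logs, not anything an application writes to an unmapped path inside the container:

    # Per-container breakdown including writable-layer size
    docker system df -v
    # A per-container log cap can be added in the template's Extra Parameters, e.g.:
    #   --log-opt max-size=50m --log-opt max-file=1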
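
For post 11 (qBittorrent permission denied): a sketch for checking that the IDs the container runs as actually match the owner of the host-side download path; the container name and path are assumptions, and on Unraid the usual values are PUID=99 / PGID=100 (nobody:users):

    # What UID/GID is the container actually using?
    docker exec qbittorrent id
    # Who owns the download path on the host, numerically?
    ls -ln /mnt/user/downloads
    # The PUID/PGID in the template should match (or have write access to) those numeric IDs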