Can0n

Members
  • Posts: 614

Everything posted by Can0n

  1. Weird issue tonight: the login page was fine, but once I logged in I just got a generic Chrome "page not available" (tried incognito to rule out cookies). Tried restarting/reloading and stopping and starting nginx, no change. Connect said my server was offline. Found a handy command to stop the array, since I knew my Docker containers were still running; this saved me from having to do a parity check on reboot.

     CSRF=$(grep -oP 'csrf_token="\K[^"]+' /var/local/emhttp/var.ini)
     curl -k --data "startState=STARTED&file=&csrf_token=${CSRF}&cmdStop=Stop" http://localhost/update.htm

     Diagnostics attached; any insight is greatly appreciated. thor-diagnostics-20240422-2016.zip
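For reuse, the two commands in the post can be wrapped in a small script. This is only a sketch: the var.ini path and the update.htm endpoint are taken from the post itself, while the function names and error handling are my own additions.

```shell
#!/bin/bash
# Sketch: stop the Unraid array from the CLI, based on the one-liners above.

# Pull the value inside csrf_token="..." out of an emhttp var.ini stream.
get_csrf() {
    grep -oP 'csrf_token="\K[^"]+'
}

# Post the Stop command to the local webGui with the extracted token.
stop_array() {
    local csrf
    csrf=$(get_csrf < /var/local/emhttp/var.ini) || return 1
    [ -n "$csrf" ] || { echo "could not read csrf_token" >&2; return 1; }
    # -k skips certificate checks, as in the original one-liner
    curl -k --data "startState=STARTED&file=&csrf_token=${csrf}&cmdStop=Stop" \
         http://localhost/update.htm
}

# On the server you would then simply run:
# stop_array
```

Because the containers keep the array "busy" only from the webGui's point of view, posting the Stop command this way lets emhttp unmount cleanly and avoids the unclean-shutdown parity check.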
  2. I have had nothing but issues with Authentik since the latest update. Found that changing the repository to this makes it stable: beryju/authentik:2023.10.7
  3. Thanks @itimpi and @JorgeB, I will try that. My quarterly parity check just started; I removed the plugin and will try it once it's done, in 18 hours or so.
  4. Just tried to manually start the mover and it just refreshes the page and never starts. In the log it looks like it started, but it does not appear to be running. Download cache is two spinner 4TB drives using ZFS, and the main array is XFS.

     Sep 30 23:49:22 Thor emhttpd: shcmd (3809): /usr/local/sbin/mover &> /dev/null &
     Sep 30 23:49:27 Thor emhttpd: shcmd (3810): /usr/local/sbin/mover &> /dev/null &
     Sep 30 23:50:57 Thor emhttpd: shcmd (3811): /usr/local/sbin/mover &> /dev/null &
     Sep 30 23:51:17 Thor emhttpd: shcmd (3812): /usr/local/sbin/mover &> /dev/null &

     Diagnostics attached. Any help is appreciated. thor-diagnostics-20230930-2351.zip
  5. Simply changing Docker to ipvlan did not fix the issue. Stopped Docker, deleted docker.img, created a new one, and reinstalled the containers; the problem appears to be resolved. Interesting to note: while Docker was using macvlan (and even after I changed it to ipvlan) it was still locking up my UniFi gear. The 10GB switch would go offline and the UniFi UDM Pro would too, impacting my cameras but not my access points (all Ubiquiti). Not a squawk from any of the devices since fixing the Docker networking. I had no idea it could bring my firewall, switches, and other gear down.
  6. Yeah, very odd. I wonder if it's a glitch in 6.12.3. I deleted docker.img, made a new one, and am deploying all my containers again using ipvlan. I was using that previously with no issues; I only went to macvlan so I could see my Plex server in my UniFi network devices list. I digress.
  7. Weird. I switched it after the first lockup on Sunday and just verified it last night. Am I going to have to blow away Docker and do it again as ipvlan?
  8. Here are the official diagnostics: thor-diagnostics-20230814-2332.zip
  9. I did switch back to ipvlan right after the first freeze. It froze up two more times last night, so I switched it off for the night and booted it this morning with the array not started, and it was fine. Started Plex and it was fine; started some more containers and it froze. So it has frozen twice today. Here is my post with screenshots and logs.
  10. Here is what syslog captured; it's a pretty long read. syslog for server.txt
  11. Hi everyone, I am in a bit of a panic. I am having a ton of stability issues with my server right now; as of last night it has become very unstable, freezing randomly after being stable for 25 days. I thought it might be my 5-year-old USB stick, so I popped it into my PC and it detected errors. I repaired them, copied the data to a much newer USB, inserted it, and booted up with the array stopped. I transferred the license over and left it with Docker enabled, VMs disabled, and all containers stopped. It was stable all day while I worked, so I fired up Plex. Stable for a few hours, so I started firing up other containers.

      I can't run diagnostics because it doesn't seem stable enough, and it's a hard freeze too. I am suspecting a CPU or memory issue, but Memtest86+ is not working; it just boots Unraid instead. It froze three times last night, so I left the server off for the night, fired it up today, and left the array unstarted until I remoted in from work and started it, making sure all containers stayed off. I was able to get one screenshot with my phone as it froze; attached is the last screen I saw on the first of two freezes tonight.

      Please help! This server contains a lot of data I can't lose, and I'm hoping it's a simple fix. The motherboard is an ASUS with the latest BIOS, and plugins and containers are all updated; VMs were disabled as well. CPU is an i7-10700K with 64 GB of Corsair DDR4 RAM at stock speeds on an ASUS PRIME Z590-V motherboard. I'm not thinking there is any damage to the data, just possibly the RAM, CPU, or the fairly new motherboard.
  12. It's OK, I got them all manually removed. Now I'm dealing with freezing on my server... time to hit up another forum section to get help. I get a CPU panic and it freezes right after.
  13. Well, I'm still not sure how they showed up, but once I started clearing them one at a time with -r I saw the massive list drop very quickly as I went. Almost done.
  14. Yeah, the script was unknown to me when I played with the Docker directory, but once I got Docker working after converting from BTRFS to ZFS (using the docker.img file), I wanted to create datasets, so I found the script. This has been my Docker setup since using the script.
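For context, the general folder-to-dataset conversion that such a script performs can be sketched roughly like this. This is not SpaceInvaderOne's actual script; the pool and folder names are placeholders, and with DRY_RUN=1 it only prints the commands it would run.

```shell
#!/bin/bash
# Rough sketch of converting an existing folder into a ZFS dataset.
# NOT the actual community script - just the general idea, as an assumption.
POOL="cache"    # placeholder pool name
DIR="appdata"   # placeholder folder to convert
DRY_RUN=1      # 1 = print commands instead of running them

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. move the plain folder aside
run mv "/mnt/$POOL/$DIR" "/mnt/$POOL/${DIR}_old"
# 2. create a dataset with the folder's name (mounts at the same path)
run zfs create "$POOL/$DIR"
# 3. copy the data into the new dataset, then remove the old copy
run rsync -a "/mnt/$POOL/${DIR}_old/" "/mnt/$POOL/$DIR/"
run rm -rf "/mnt/$POOL/${DIR}_old"
```

The point of the dance is that a dataset mounts at the same path the folder occupied, so containers keep working while each share gains per-dataset snapshots.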
  15. I never ran the script when trying the Docker directory; that's what's odd about it. I hadn't looked at the datasets in a while. The server was up 25 days with no issues and I just noticed this today. Then my server started locking up, so I spent time diagnosing that; it's running better now, so I thought I'd look into why and how these all showed up.
  16. Without wildcards it's going to take a while. It looks like it's all cache/randomstring, and there are a lot more than in this screenshot, possibly multiple hundreds, none related to my correct datasets (appdata, domains, system, isos). Here is a smaller screenshot showing cache/domains. It seems whatever happened created all of these in the cache root, so hopefully someone who knows ZFS better might be able to point me to an equivalent of a `zfs destroy -r cache/0*` type command.
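`zfs destroy` does not take wildcards, but you can get the same effect by looping over `zfs list` output and skipping the datasets you want to keep. A sketch, using the pool name and keep-list from the post; the function names are mine, and the preview only prints the destroy commands so you can inspect the list before removing the `echo`:

```shell
#!/bin/bash
# Sketch: bulk-remove stray child datasets of "cache", keeping known-good ones.
POOL="cache"

# Read dataset names on stdin; print only the strays (not the pool itself,
# and not the datasets listed in the case pattern).
stray_datasets() {
    while read -r ds; do
        [ "$ds" = "$POOL" ] && continue
        case "${ds#"$POOL"/}" in
            appdata|domains|system|isos) continue ;;   # keep these
        esac
        echo "$ds"
    done
}

# On the server: -H strips headers, -d 1 limits to direct children.
# Dry run first; drop the "echo" only once the printed list looks right.
preview_destroy() {
    zfs list -H -o name -d 1 "$POOL" | stray_datasets | while read -r ds; do
        echo "zfs destroy -r $ds"
    done
}
```

`-d 1` matters here: it lists only the pool's direct children, so each stray is destroyed once with `-r` rather than erroring on children whose parent is already gone.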
  17. There are hundreds. I did use the script to create the ones via docker.img; I'm not sure when it would have run to create them when the Docker directory was being tested. There's no way to mass-remove all but the ones I need (appdata and domains). I mean, it's a very massive list.
  18. Looks like the dockerdirectory dataset is still there from when I was playing around with ZFS and Docker, but I'm actually using the BTRFS file system for docker.img. I manually deleted the Docker directory, but the ZFS Master plugin is still showing all the super long strings and not the actual Docker container names like it used to.
  19. Hello, I just happened to notice today that all my containers (which were converted to datasets using SpaceInvaderOne's script) are listed completely scrambled, and there are way more than there should be. Advice/support please?
  20. Using an Intel 10th gen CPU, so I don't think it's related. I can't actually turn hardware transcoding off; I have a LOT of family and friends on it almost all the time. Without hardware transcoding I can handle 3-4 streams; with hardware transcoding I have had 40 streams with 25 transcoding at once. Since removing the modprobe from the go file and switching to the spinner cache pool, things have been much more stable. One outage, which corrected itself less than 2 minutes later.
  21. Updated the BIOS, commented out the modprobe and RAM drive, changed the log size to 512MB, deleted the Docker image and set it up again using docker.img in BTRFS format, re-installed all containers, and set Plex to use my download-cache (two spinner 4TB disks in ZFS). I'll see how stable it is and report back. After a server reboot it typically takes less than 30 hours before I start seeing issues.
  22. Interestingly enough, it looks like my docker.img file got corrupted; all my containers went to orphaned images. I am deleting the docker.img file and re-doing it.
  23. Hi, thanks. There is a lot; it's an older server for sure (over 5 years). I swear I uninstalled GVT-g already... interesting... I removed it again. I have commented out the modprobe and tmp RAM drive for Plex in the go file; that was how I got hardware transcoding through iQSV when I first got this server up and running, back on Unraid 6.4 (I believe it was). It has always been fine until the move to ZFS. The Pulseway logs have baffled me, but they have been that way for 2 years and I just have not had the time to troubleshoot; I suspect they were what prompted me to increase the log size to 1GB.

      I tried to do a force update to get a docker run displayed, but it failed and created an orphan image, and now I can't get it installed back from Apps:

      IMAGE ID [plexpass]: Pulling from plexinc/pms-docker.
      IMAGE ID [a70d879fa598]: Already exists.
      IMAGE ID [c4394a92d1f8]: Already exists.
      IMAGE ID [10e6159c56c0]: Already exists.
      IMAGE ID [d1042fe57e96]: Already exists.
      IMAGE ID [ac5317c7b384]: Already exists.
      IMAGE ID [47414e89d67b]: Pulling fs layer. Downloading 100% of 152 B. Verifying Checksum. Download complete. Extracting.
      TOTAL DATA PULLED: 152 B
      Error: failed to register layer: stat /var/lib/docker/btrfs/subvolumes/e88f3b8d60610253638260b5aecc05dc96223fc700c87f6fe077e6c1e9345215: no such file or directory

      I'll take a look at the BIOS; it is a rather newer motherboard and it was on my list of things to do.
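For reference, the go-file additions being discussed usually look something like the lines below. This is a sketch of the commonly shared pattern for Intel Quick Sync transcoding on older Unraid releases, not the poster's exact file; the RAM disk path and size are placeholders.

```shell
# Sketch of typical /boot/config/go additions for Plex hardware transcoding
# (the kind of lines the post describes commenting out). Paths/sizes are
# placeholders, not the poster's actual values.

# load the Intel iGPU driver so /dev/dri exists for Quick Sync
modprobe i915
chmod -R 777 /dev/dri

# optional RAM disk for Plex transcode scratch space (size is a placeholder)
mkdir -p /tmp/PlexRamScratch
mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
```

On recent Unraid releases the i915 driver is typically handled by the Intel GPU TOP plugin instead, which is why leftover go-file lines like these are a common suspect when things get unstable after an upgrade.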
  24. Yeah, I think you are right... ZFS has been a disaster for me. I might have to move it all back to the array and format the cache as XFS; I have not used that for cache yet. I'm using the official Plex container; which one are you using? I was previously using binhex plexpass for years with no issues other than delayed updates.
  25. No, I'm sorry to say my Plex is still completely unstable after going to 6.12.3 and adding the extra parameters; I had to restart it three times in a 20-minute span again. I tried the Docker directory and my reverse proxy failed to work. I tried an XFS Docker image file and my reverse proxy failed to work. Went back to the BTRFS Docker image and everything fires up again, but I am afraid it's probably not going to be very stable. I've already got a post in the bug reports section for 6.12.3 with my diagnostics, and the Plex Docker log is useless; it doesn't say anything is wrong.