kl0wn

Members

  • Content Count: 56
  • Joined
  • Last visited

Community Reputation

0 Neutral

About kl0wn

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed

  1. I tried a few times - no dice. 8GB of RAM. No errors. I'll give the manual method a shot this weekend. Thanks, bud.
  2. I'm stuck on "Reboot Now": it spins for a bit, then throws a 504 Gateway error. I also tried rebooting via the usual "Reboot" button; no dice. I SSH'd in and was able to reboot, but it didn't update to 6.9.2 and still shows the "Reboot Now" banner. Any thoughts?
  3. Changing the regkey to 1 worked for me. Thanks!
  4. Howdy Gents, I am trying to set up an Avahi daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a Docker container, but I haven't had any success finding an Unraid template. Has anyone else done this via a Docker container? Maybe I could just leverage the Avahi daemon that's in Unraid? Open to suggestions, but note that I have no additional hardware for Avahi - this would need to run on the host or in a Docker container. Thanks!
  5. Hey Gents, I am trying to set up an Avahi daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a Docker container, but I haven't had any success finding an Unraid template. Has anyone else done this via a Docker container? Maybe I could just leverage the Avahi daemon that's in Unraid? Open to suggestions, but note that I have no additional hardware for Avahi - this would need to run on the host or in a Docker container. Thanks!
  6. The issue popped up again, so I submitted a bug report and rolled back to 6.5.3; everything is now stable... so it's definitely something going on with that version. I'll hang out in 6.5.3 land.
  7. No offense taken, my friend, lol. I know that I DEFINITELY need a better/beefier box, but it's just not in the cards right now. I could up the RAM, but I don't want to dump funds into an old box that will eventually be upgraded to a platform that won't even support the RAM from this one. After reboot, my memory is at 37%, so something was definitely hung. I do, however, plan to up the size of my cache drive; that way I can just kick off Mover every morning at, say, 2 AM rather than having it run every hour. Thanks for the input, bud.
  8. I'm starting to think this has to do with Mover causing the IOWAIT. I changed it to run every 4 hours rather than every 1 hour and enabled logging. I'll report back with what I find. If anyone has other ideas, please let me know. EDIT: I found that my pihole docker, which writes to a share that was set to ONLY use the cache drive, somehow had files living on every disk in my environment... not sure how that's possible, but it happened. I set the share to Cache: Prefer --> ran Mover --> all files were moved back to cache. I've now switched the share back to Cache: Only -->
  9. There it is...top with a screen of Unraid showing 100%
  10. The Transcoder is going to fluctuate all day long, but you're right, 344% is a bit much, haha. I've played around with Docker pinning, but Plex seems to leak into other cores/HT regardless of what is set. I'll see what the Transcoder shows the next time this happens, but I have 6-7 streams (some of them transcodes) running every night with no issues. This only happens in the morning, so it would be nice to see more verbose log output to identify what is kicking off or possibly causing this to happen.
  11. I forgot to mention I did log in to check top, and nothing was jumping out at me as unusually high, which makes this thing even more confusing. I did notice that there was an IOWAIT the first time I had to bounce the box, which leads me to believe that some I/O operations are hanging, thus causing the kernel to go crazy. This did not happen prior to 6.6, so I'm wondering what changes were made that could cause it. Here is a screen of top when everything is normal, which is basically a mirror image of what it looks like when things are going haywire...
  12. Every morning my processor goes to 100% and just hangs there for hours until I reboot the server. This was never an issue before and is now a daily thing... the logs are giving me jack for troubleshooting. Any thoughts?
      Oct 22 20:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
      Oct 22 20:53:04 Tower sshd[13608]: SSH: Server;Ltype: Kex;Remote: 192.168.2.109-10539;Enc: chacha20-poly1305@openssh.com;MAC: <implicit>;Comp: zlib@openssh.com
      Oct 22 21:00:01 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &&g
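
For the Avahi question in posts 4-5: Avahi can repeat mDNS announcements between segments in "reflector" mode. The sketch below shows the relevant avahi-daemon.conf options and a host-networked container. The interface names (br0, br0.10) and the image name are placeholders, not a tested setup — there is no official Avahi Docker image, so substitute whatever image (or the host's own avahi-daemon) you end up using.

```shell
# Write a minimal reflector config. allow-interfaces and enable-reflector
# are real avahi-daemon.conf options; the interface names are placeholders
# for your two network segments.
cat > avahi-daemon.conf <<'EOF'
[server]
use-ipv4=yes
allow-interfaces=br0,br0.10

[reflector]
enable-reflector=yes
EOF

# mDNS relies on link-local multicast, so the container must share the
# host's network stack (--net=host); bridged networking will not see it.
docker run -d --net=host \
  -v "$PWD/avahi-daemon.conf:/etc/avahi/avahi-daemon.conf:ro" \
  your-avahi-image   # placeholder image name
```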
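
For the stray-files situation in post 8, a quick way to confirm whether a "cache only" share has data sitting on array disks is to glob each disk mount for the share's folder. This is a generic sketch assuming Unraid's /mnt/diskN layout; "appdata" is a hypothetical share name.

```shell
# List array disks that hold files for a share that should live only on
# the cache pool. Takes the share name and, optionally, the mount root
# (defaults to /mnt, where Unraid mounts disk1, disk2, ...).
find_stray() {
  share="$1"
  root="${2:-/mnt}"
  for d in "$root"/disk*/"$share"; do
    # If the glob matched nothing, $d is the literal pattern and -e fails.
    [ -e "$d" ] && echo "stray files on: $d"
  done
}

find_stray appdata   # prints nothing when the share exists only on cache
```

Setting the share to Cache: Prefer and running Mover, as in the post, is then the fix; rerunning this check afterwards should print nothing.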
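
On the pinning issue from post 10: separate from Unraid's GUI pinning, Docker itself can hard-limit a running container with its cpuset flag. "plex" is a placeholder container name and cores 2-3 are arbitrary; note this constrains the container's processes but does nothing about other host load landing on the same cores, which can still look like "leaking" in top.

```shell
# Restrict an already-running container to specific cores.
docker update --cpuset-cpus="2,3" plex

# Verify what the container is actually allowed to use.
docker inspect --format '{{.HostConfig.CpusetCpus}}' plex
```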
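
On the IOWAIT hunch from post 11: /proc/stat exposes a cumulative iowait counter (the 5th value on the "cpu" line). Sampling it twice a few seconds apart and seeing a fast-growing delta while top shows little real compute points at stuck I/O rather than CPU load. A minimal sketch:

```shell
# Print the cumulative iowait counter (in jiffies) from /proc/stat.
# Fields after "cpu" are: user nice system idle iowait ... so iowait
# is awk field $6. Takes an alternate stat file path for testing.
iowait_jiffies() {
  awk '/^cpu /{print $6}' "${1:-/proc/stat}"
}

iowait_jiffies   # on a Linux host, prints one number
```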
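
On the Mover schedule from posts 7 and 12: the syslog lines show Mover being launched from cron (/usr/local/sbin/mover). In Unraid the schedule is normally set under Settings -> Scheduler rather than by editing cron files (which are regenerated at boot), but the underlying entry is an ordinary cron line; a single nightly 2 AM run instead of hourly would look like:

```shell
# Hypothetical cron entry for one Mover run at 2:00 AM daily, matching
# the command visible in the syslog excerpt above.
0 2 * * * /usr/local/sbin/mover &> /dev/null
```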