Everything posted by kl0wn

  1. Changing the regkey to 1 worked for me. Thanks!
  2. Howdy Gents, I am trying to set up an Avahi daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a Docker container, but I haven't had any success finding an Unraid template. Has anyone else done this via a Docker container? Maybe I could just leverage the Avahi daemon that's already in Unraid? Open to suggestions, but note that I have no additional hardware for Avahi - this would need to run on the host or in a Docker container (rough sketch of what I have in mind below). Thanks!
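     In case it helps: what I have in mind is any community Avahi image run on the host network, with an avahi-daemon.conf that turns on the reflector so mDNS gets repeated between whatever segments the box can see. The image name and interface names below are placeholders, not a tested Unraid template.

         # run an Avahi container on the host network (image name is a placeholder)
         docker run -d --name=avahi --net=host \
           -v /mnt/user/appdata/avahi/avahi-daemon.conf:/etc/avahi/avahi-daemon.conf:ro \
           some/avahi-image

         # /mnt/user/appdata/avahi/avahi-daemon.conf - reflect mDNS between segments
         [server]
         allow-interfaces=br0,br0.10
         use-ipv4=yes
         use-ipv6=no

         [reflector]
         enable-reflector=yes

     The reflector part only helps if the host actually has an interface on each segment (e.g. VLAN sub-interfaces); otherwise there is nothing for the traffic to be reflected onto.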
  3. Hey Gents, I am trying to set up an Avahi daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a Docker container, but I haven't had any success finding an Unraid template. Has anyone else done this via a Docker container? Maybe I could just leverage the Avahi daemon that's already in Unraid? Open to suggestions, but note that I have no additional hardware for Avahi - this would need to run on the host or in a Docker container. Thanks!
  4. The issue popped up again, so I submitted a bug report and rolled back to 6.5.3. Everything is now stable... so it's definitely something in that version. I'll hang out in 6.5.3 land for now.
  5. No offense taken my friend lol. I know that I DEFINITELY need a better/beefier box, but it's just not in the cards right now. I could up the RAM, but I don't want to dump funds into an old box that will eventually be replaced by a platform that won't even support the RAM from this one. After a reboot my memory is at 37%, so something was definitely hung. I do, however, plan to up the size of my cache drive so I can just kick off Mover every morning at, say, 2AM rather than having it run every hour (rough cron sketch below). Thanks for the input bud.
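     For reference, the hourly runs in my syslog are just crond invoking /usr/local/sbin/mover, so a once-nightly 2AM schedule boils down to a cron entry along these lines (normally you'd set this through the Scheduler settings rather than by hand; the redirect just mirrors the existing entry):

         # run the mover once a day at 2AM instead of every hour
         0 2 * * * /usr/local/sbin/mover &> /dev/null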
  6. I'm starting to think this has to do with Mover causing the iowait. I changed it to run every 4 hours rather than every hour and enabled Mover logging. I'll report back with what I find. If anyone has other ideas, please let me know. EDIT: I found that my pihole docker, whose share was set to ONLY use the cache drive, somehow had files living on every disk in my environment... not sure how that's possible, but it happened. I set the share to Cache: Prefer --> ran Mover --> all files were moved back to cache. I then switched the share back to Cache: Only --> invoked Mover --> no crazy spike in CPU (the commands I used to check for strays are below). My theory is that when Mover was invoked it was touching all of the drives, which caused the iowait. I may be totally wrong, but it's the best I've got for now.
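     For anyone hitting the same thing, this is roughly how to check whether a cache-only share has strays on the array and how to kick Mover by hand (the share name here is just an example, swap in your own):

         # count files belonging to the share on each array disk
         for d in /mnt/disk*; do
           echo "$d: $(find "$d/appdata" -type f 2>/dev/null | wc -l) files"
         done

         # run the mover manually instead of waiting for the schedule
         /usr/local/sbin/mover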
  7. There it is...top with a screen of Unraid showing 100%
  8. The transcoder is going to fluctuate all day long, but you're right, 344% is a bit much haha. I've played around with Docker CPU pinning, but Plex seems to bleed into other cores/HT siblings regardless of what is set (example of what I mean by pinning is below). I'll see what the transcoder shows the next time this happens, but I have 6-7 streams (some of them transcodes) running every night with no issues. This only happens in the morning, so it would be nice to get more verbose log output to identify what kicks off or what could be causing it.
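     By pinning I just mean handing the container a cpuset; the core numbers and container name here are examples, and whether the Plex transcoder fully respects them is exactly the part I'm unsure about:

         # restrict the Plex container to specific cores (name and core list are examples)
         docker update --cpuset-cpus="2,3,6,7" plex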
  9. I forgot to mention that I did log in to check top, and nothing jumped out at me as unusually high, which makes this even more confusing. I did notice an iowait the first time I had to bounce the box, which leads me to believe there are some I/O operations hanging and that's what sends the kernel crazy. This did not happen prior to 6.6, so I'm wondering what changes were made that could cause it. Here is a screenshot of top when everything is normal, which is basically a mirror image of what it looks like when things are going haywire...
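     If anyone wants to chase the iowait side of it: it shows up as the 'wa' column in top, and per-device stats help narrow down which disk is actually stalling. Rough commands (iostat is part of sysstat and may not be on a stock install):

         # per-device utilization/wait, refreshed every 5 seconds
         iostat -x 5
         # lighter fallback that's usually present - watch the 'wa' column
         vmstat 5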
  10. Every morning my processor goes to 100% and just hangs there for hours until I reboot the server. This was never an issue before and is now a daily thing... and the logs are giving me jack for troubleshooting. Any thoughts?

          Oct 22 20:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 22 20:53:04 Tower sshd[13608]: SSH: Server;Ltype: Kex;Remote: 192.168.2.109-10539;Enc: chacha20-poly1305@openssh.com;MAC: <implicit>;Comp: zlib@openssh.com
          Oct 22 21:00:01 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 22 22:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 22 23:00:10 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 00:00:01 Tower Plugin Auto Update: Checking for available plugin updates
          Oct 23 00:00:07 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 00:00:28 Tower Plugin Auto Update: Community Applications Plugin Auto Update finished
          Oct 23 01:00:07 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 02:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 03:00:07 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 04:00:14 Tower root: /mnt/cache: 8 GiB (8576638976 bytes) trimmed
          Oct 23 04:00:14 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 05:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 06:00:07 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 07:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 08:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 09:00:16 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
          Oct 23 10:00:07 Tower crond[1673]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
  11. EDIT: I'm a moron and didn't read your entire post. My balance took around 15 minutes. To be fair, I had never run the operation before and my SSD has been in service for 4 years, so I'm unsure whether that affects anything. However, I'm guessing yours may take a bit longer due to the size of the SSD.
  12. Balance Complete. I'll start moving data back and see if the issue pops up again. Thanks for all the help!
  13. w00t! I am now balancing with 75 and the GUI is back to normal. I'll let you guys know how it goes. Thanks @johnnie.black and @limetech!
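      For anyone following along: as I understand it, the 75 maps to a usage filter on the balance, i.e. only chunks that are less than 75% full get rewritten. From the command line it would look roughly like this:

          # rebalance only data chunks that are less than 75% used
          btrfs balance start -dusage=75 /mnt/cache

          # check progress from another shell
          btrfs balance status /mnt/cache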
  14. aaaaah OK thanks for the education on that. I wasn't aware that's how it worked. I'll keep moving things off the cache and see if I can get it to balance. If I go the reformat route, should I just go with XFS? I haven't had any issues like this with that file system.
  15. If I can get this done without reformatting, that'd be ideal. I have Plex metadata on this drive that's ~40G and probably 800k-plus directories alone. Moving that to one of the disks in the box will take a ridonculous amount of time (rough rsync plan below if it comes to that). Is this going to be an issue for other users after the official release? If so, is it documented anywhere that this needs to be done?
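      If I do end up shifting it by hand, the plan would be a straight disk-to-disk rsync rather than going through the user share; the paths here are just examples based on my share layout:

          # copy the Plex appdata from the cache to an array disk, preserving attributes
          rsync -a /mnt/cache/cache_only/plex/ /mnt/disk1/cache_only/plex/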
  16. OK, but what's considered fully/close to being allocated? I have 50G out of 120 left. I'll move my 20G docker image off and attempt to run the balance, because 1 wouldn't work either haha.
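      For the record, the allocation (as opposed to plain free space) can be checked with:

          # show how much of the device is allocated vs. actually used
          btrfs filesystem usage /mnt/cache

          # older, terser view of the same numbers
          btrfs filesystem df /mnt/cache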
  17. OK, so in the meantime what should I do? Is the kernel purpose-built by Unraid, or can I yank it and apply a patched one myself? Also - I can't balance, because it appears the balance is also looking at the bogus value being reported to the GUI...
  18. Single Cache Disk, never in a pool. I'm not too sure on the age...I believe I got it 3-4 years ago, so it would have been then. So the dockers and the rest of the system are looking at what the GUI is pulling? Is there a way to modify this behavior?
  19. I'm about ready to throw this box out the window... With that being said, here we go: I recently started moving all of my docker data and images to the cache drive. The data in this share existed on every disk in the array, which was horrible for performance (share name: /mnt/user/cache_only). So I made the share cache-only and began moving data from Disk 1, Disk 2, Disk 3... to /mnt/cache/cache_only. I was successful in moving most of the appdata except for Plex. About halfway through the transfer it said that my target had no space left, even though there was 60G of free space available. I aborted the process, moved some files back to other disks and started the process again.
      I then checked the Unraid console, and it's showing some really odd values for the cache disk. This is a 120G SSD, but when this ugly bug pokes its head out it just slaps whatever size it wants into "Disk Size" - in this example it's showing 69 with 0B free. This is causing some serious issues with the dockers and cache data, and I'm worried that at some point (this has happened 4 times just today) my docker data is going to get corrupted.
      What's even more odd is that I had stopped trying to transfer the remaining Plex files over because of this error, yet Unraid is now wigging out on its own with no intervention (what the filesystem itself reports is below). Screenshots and syslog attached. tower-diagnostics-20171219-1244.zip
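      For whoever looks at the diagnostics, comparing the filesystem's own view of the device against what the GUI shows is just:

          # what btrfs itself reports for the cache device
          btrfs filesystem show /mnt/cache
          # and what a plain df sees
          df -h /mnt/cache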
  20. Howdy, I have scripts set up to restart my docker containers nightly by running 'docker restart <container id goes here>'. I noticed that this wasn't happening every night, so I dug a little deeper. I compared my scripts to the container IDs and found that they had changed. I saw some orphaned images and thought it was a fluke, and that's what had caused the container IDs to change. However, I just checked again and found that my container IDs have changed again with no orphaned images in sight. Am I wrong to assume that container IDs should remain the same, or is something wonky going on? NOTE: I also have my containers set to auto-update at 6AM every day. Thanks
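      Part-answer to my own question: a container ID changes any time the container is recreated, which the 6AM auto-update does whenever it pulls a new image, so the scripts are probably better off targeting the container name, which survives recreation. Something like this ('plex' is just an example name):

          # restart by name instead of by ID - names survive recreation, IDs do not
          docker restart plex

          # or restart every running container by its name
          docker restart $(docker ps --format '{{.Names}}')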