glenner

Everything posted by glenner

  1. Thanks Delarius. That's a brilliant solution... I'll go with the symbolic link. Thanks, -Glenner.
  2. Hi, I've noticed that my Logitech Media Server docker creates a log file that just balloons out of control. At one point it got to 80GB+ on my cache and caused my whole system to go unstable (see https://lime-technology.com/forums/topic/73395-all-my-dockers-are-missing-please-help). Right now it has gotten to 3GB in 2 weeks and is still growing. See server.log below:

     root@unraid:/mnt/user/appdata/LogitechMediaServer/logs# ls -al --block-size=M
     total 3337M
     drwxrwxrwx 1 nobody users    1M Aug 14 22:41 ./
     drwxrwxrwx 1 nobody users    1M Jul 17  2017 ../
     -rw-rw-rw- 1 nobody users    0M Jul 17  2017 perfmon.log
     -rw-rw-rw- 1 nobody users    1M Jul 19  2017 scanner.log
     -rw-rw-rw- 1 nobody users 3337M Aug 31 18:04 server.log
     -rw-rw-rw- 1 nobody users    1M Jul 18  2017 spotifyfamily1d.log

     It is full of these errors (thousands of them, continuously):

     [18-08-16 00:43:56.6065] Slim::Utils::Misc::msg (1252) Warning: [00:43:56.6063] EV: error in callback (ignoring): Can't call method "display" on an undefined value at /usr/share/perl5/Slim/Display/Lib/TextVFD.pm line 157.
     [18-08-16 00:43:56.7562] Slim::Utils::Misc::msg (1252) Warning: [00:43:56.7561] EV: error in callback (ignoring): Can't call method "display" on an undefined value at /usr/share/perl5/Slim/Display/Lib/TextVFD.pm line 157.

     Has anyone seen this error (Google did not find much)? Does anyone know how I can limit the size of the log, either just for this docker or for all dockers? Thanks, -Glen.
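     In case it helps anyone else hitting this, I'm experimenting with capping the file from the host side with logrotate. This is an untested sketch: the path is from my setup, the 100M threshold is arbitrary, and I'm assuming logrotate is present and run from cron on unraid:

     /mnt/user/appdata/LogitechMediaServer/logs/server.log {
         size 100M       # rotate once the log passes 100MB
         rotate 2        # keep two rotated copies
         copytruncate    # truncate in place so LMS keeps its open file handle
         missingok       # don't error if the log is absent
         compress        # gzip the rotated copies
     }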
  3. So last week, after my system seemed to become stable again once I deleted my runaway LMS log file, I ran the balance, and then I upgraded everything over the weekend. I updated my bios, unraid 6.5.3, dockers, and plugins, and recreated my appdata backups. I also found some new dockers I needed and set those up too... :-) btrfs stats are still clean. I did a quick sanity check after each upgrade step to ensure the system was still stable... My system is up to date now.

     I did have to restore my plex db, as that did get corrupted in the initial outage. Fortunately, plex keeps dated backups under appdata, so that's an easy fix. I don't see my instability issues (missing dockers, missing VMs, errors) returning... at least anytime soon.

     I've had unraid for a year now, which I set up on a new custom pro build I bought last year. It's been solid and much better than the Windows box I used to run all my HTPC stuff on... The only issues I've seen over the last year that resulted in any kind of "outage" happened when some cache file got huge and out of control. I saw it a while ago with a 100GB+ SageTV recording that brought down my whole server (https://forums.sagetv.com/forums/showthread.php?t=64895). And now I've seen it more recently with a huge Logitech Media Server log file that also effectively brought down my server. As best as I can tell, interactions between the btrfs cache, mover settings, environment settings, and huge files can lead to issues. Once a cache file gets huge and there is "no space" left on the btrfs cache, all while a docker is actively attempting to write 20GB/hr to the cache, then all bets are off...

     I'd like to find a way to get some kind of alert if I have a huge file brewing, or excess disk usage on my cache. That might have averted all the problems I've had so far. I should never see, say, a 20GB+ file on the system (some SageTV recordings, like a 3 hour sports program, could hit 20GB before being moved off to the array, but that's the biggest file I ever want to see on the cache). Not sure if there is a plugin for that (maybe "Fix Common Problems" could scan for it), but I will see if I can find something, or set up some kind of automated file size scan in the cron (rough sketch below).

     In any event, I'm on 6.5.3 and I think super stable again... Thanks for your help. I really appreciate it.
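     Here's the kind of user script I have in mind for that cron scan (untested sketch: the 20GB threshold is my own cutoff, and the notify script path and flags are how I understand the stock unraid notifier, so adjust if it lives elsewhere on your version):

     #!/bin/bash
     # Alert through the unraid notification system if any single file
     # on the cache has grown past 20GB.
     BIG=$(find /mnt/cache -type f -size +20G 2>/dev/null)
     if [ -n "$BIG" ]; then
         /usr/local/emhttp/webGui/scripts/notify -i warning -s "Huge file on cache" -d "$BIG"
     fi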
  4. Thanks for the help on this upgrade, trurl. I've confirmed everything is up and running smoothly on 6.5.3. I have all dockers and plugins updated and appdata backed up again. Cheers.
  5. Thanks a bunch! You're super helpful, and clearly good luck, as I'm up on 6.5.3 with no issues so far... Dockers and VMs are up, and I'm trying to take a tour of what's new. I'm going to reinstall my plugins now, assuming they all still apply in 6.5.3. I wonder if some plugins that were useful in 6.3.5 no longer apply or are no longer needed a year later in 6.5.3. Or maybe some of them have been sucked into the base OS.

     Community Apps
     unassigned devices
     tips and tweaks (see screenshots for config)
     CA backup/restore
     dynamix system stats
     dynamix system information
     dynamix system temperature
     dynamix system buttons
     dynamix ssd trim

     I needed tips and tweaks in 6.3.5 to fix some cache parameters or I would get OOM errors (see screencap). That was covered earlier in this thread. I'm going to assume I still need these disk cache settings in 6.5.3. I also was running dynamix SSD trim daily. I'll assume I need to re-enable that too.
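     If the SSD trim plugin doesn't come back cleanly on 6.5.3, my fallback would be to run the trim by hand from cron. I'm assuming (not verified) that the plugin's daily job amounts to something like:

     fstrim -v /mnt/cache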
  6. I actually cannot find dynamix.plg on my /boot flash... Here are some commands I just ran... Can't find that file anywhere.

     root@unraid:/boot/config/plugins# ls -al
     total 112
     drwxrwxrwx 14 root root  4096 Aug 17 17:20 ./
     drwxrwxrwx  6 root root  4096 Aug 17 15:16 ../
     drwxrwxrwx  3 root root  4096 Mar 15 18:55 NerdPack/
     -rwxrwxrwx  1 root root  8138 Mar 15 18:55 NerdPack.plg*
     drwxrwxrwx  2 root root  4096 Aug 15 10:10 ca.cleanup.appdata/
     -rwxrwxrwx  1 root root  2679 Aug 15 10:10 ca.cleanup.appdata.plg*
     drwxrwxrwx  3 root root  4096 Mar 15 18:51 ca.update.applications/
     -rwxrwxrwx  1 root root  4352 Mar 15 18:51 ca.update.applications.plg*
     drwxrwxrwx  5 root root  4096 Jun 30  2017 dockerMan/
     drwxrwxrwx  3 root root  4096 Aug 17 15:37 dynamix/
     drwxrwxrwx  2 root root  4096 Apr 24 23:02 dynamix.apcupsd/
     drwxrwxrwx  2 root root  4096 Jul  4  2017 dynamix.vm.manager/
     drwxrwxrwx  3 root root  4096 Aug 17 14:50 fix.common.problems/
     -rwxrwxrwx  1 root root 12268 Aug 15 10:11 fix.common.problems.plg*
     drwxrwxrwx  2 root root  4096 Aug 15 10:11 preclear.disk/
     -rwxrwxrwx  1 root root 14622 Aug 15 10:11 preclear.disk.plg*
     drwxrwxrwx  2 root root  4096 Jun 30  2017 statistics.sender/
     drwxrwxrwx  2 root root  4096 Aug 17 15:25 unassigned.devices/
     drwxrwxrwx  3 root root  4096 Aug 15 10:13 user.scripts/
     -rwxrwxrwx  1 root root  5854 Aug 15 10:13 user.scripts.plg*
     root@unraid:/boot/config/plugins# find . -name dynamix.plg
     root@unraid:/boot/config/plugins# find . -name dynamix.cfg
     ./dynamix/dynamix.cfg
     root@unraid:/boot/config/plugins# ls -al dynamix
     total 44
     drwxrwxrwx  3 root root 4096 Aug 17 15:37 ./
     drwxrwxrwx 14 root root 4096 Aug 17 17:20 ../
     -rwxrwxrwx  1 root root  145 Feb 19 16:49 docker-update.cron*
     -rwxrwxrwx  1 root root  866 Aug 17 15:37 dynamix.cfg*
     -rwxrwxrwx  1 root root  116 Feb 19 16:49 monitor.cron*
     -rwxrwxrwx  1 root root   30 Aug 17 13:24 monitor.ini*
     -rwxrwxrwx  1 root root   73 Aug 17 14:40 mover.cron*
     -rwxrwxrwx  1 root root   88 Aug  6  2017 parity-check.cron*
     -rwxrwxrwx  1 root root  138 Feb 19 16:49 plugin-check.cron*
     -rwxrwxrwx  1 root root  120 Feb 19 16:49 status-check.cron*
     drwxrwxrwx  2 root root 4096 Jun 30  2017 users/
  7. When I did the "plugin update check" just now, it clearly shows me my 8 .plg files, and they can be mapped to my 8 plugins in the UI. "dynamix.plg" is the "Dynamix webGUI" built-in plugin. I don't remember installing this myself, so I'm guessing it must be part of 6.3.5. You think it's safe to manually delete from the flash config folder? And so the steps would be:

     1. Delete the dynamix.plg from flash.
     2. Reboot.
     3. Update the unraid server OS plugin to 6.5.3.
     4. Reboot.
  8. Ok... so I've removed a whole bunch of plugins I had on my 6.3.5 system: unassigned devices, all dynamix plugins, community apps, CA backup/restore, tips and tweaks, etc. But I still have 1 error from the update assistant:

     Checking for plugin updates
     Issue Found: dynamix.plg (dynamix) is not up to date. It is recommended to update all your plugins.
     Checking for plugin compatibility
     Issue Found: dynamix.plg (dynamix) is not known to Community Applications. Compatibility for this plugin CANNOT be determined and it may cause you issues.

     I don't have any dynamix plugins left, except for the dynamix webgui plugin, but that is "built-in" and so I can't remove it... All I have left are the 8 plugins shown in the screencap. Any thoughts on how I can clear this last error?
  9. Thanks trurl! I'm uninstalling any plugins that are bothersome... And so, just to be clear, I just stop the array and update the unraid OS plugin to 6.5.3? I don't need to enter "maintenance mode"? I was trying to find some explicit instructions... it looks like you just stop the array. This is my first unraid OS upgrade, so I'm hoping it goes smoothly :-). Thanks.
  10. Hi. I'm trying to move to the latest 6.5.3 unraid OS today. Before I do the upgrade, I've been trying to clean up some things. I've upgraded my bios and updated my dockers and plugins... There are some plugins I cannot update because they prereq a higher version of unraid. I ran the update assistant and got a few "issues". See below. Do I have to worry about these plugin "issues found"? Or am I good to go with this upgrade? Thanks for any guidance!

      Disclaimer: This script is NOT definitive. There may be other issues with your server that will affect compatibility.
      Current unRaid Version: 6.3.5
      Upgrade unRaid Version: 6.5.3
      Checking cache drive partitioning
      OK: Cache drive partition starts on sector 64
      Checking for plugin updates
      Issue Found: community.applications.plg (community.applications) is not up to date. It is recommended to update all your plugins.
      Issue Found: dynamix.plg (dynamix) is not up to date. It is recommended to update all your plugins.
      Issue Found: dynamix.system.stats.plg (dynamix.system.stats) is not up to date. It is recommended to update all your plugins.
      Issue Found: tips.and.tweaks.plg (tips.and.tweaks) is not up to date. It is recommended to update all your plugins.
      Issue Found: unassigned.devices.plg (unassigned.devices) is not up to date. It is recommended to update all your plugins.
      Checking for plugin compatibility
      Issue Found: ca.backup.plg (unassigned.devices) is deprecated for ALL unRaid versions. This does not necessarily mean you will have any issues with the plugin, but there are no guarantees. It is recommended to uninstall the plugin
      Issue Found: dynamix.plg (unassigned.devices) is not known to Community Applications. Compatibility for this plugin CANNOT be determined and it may cause you issues.
      Checking for extra parameters on emhttp
      OK: emhttp command in /boot/config/go contains no extra parameters
      Checking for zenstates on Ryzen CPU
      OK: Ryzen CPU not detected
      Checking for disabled disks
      OK: No disks are disabled
      Checking installed RAM
      OK: You have 4+ Gig of memory
      Checking flash drive
      OK: Flash drive is read/write
      Checking for valid NETBIOS name
      OK: NETBIOS server name is compliant.
      Checking for ancient version of dynamix.plg
      OK: Dynamix plugin not found
      Checking for VM MediaDir / DomainDir set to be /mnt
      OK: VM domain directory and ISO directory not set to be /mnt
      Checking for mover logging enabled
      Mover logging is enabled. While this isn't an issue, it is now recommended to disable this setting on all versions of unRaid. You can do this in Settings - Schedule - Mover Schedule.
      Issues have been found with your server that may hinder the OS upgrade. You should rectify those problems before upgrading
  11. Status update: It looks like deleting the runaway Logitech log file, and thereby freeing up a huge amount of space on my cache, has fixed my server. Here is what it looks like on my end:

      As I said in my last post, I stopped all the dockers last night and started making /mnt/cache backups. While using mc, I noticed it was taking a long time to copy over this file: /mnt/cache/appdata/LogitechMediaServer/logs/server.log. I checked the file and found it to be a year's worth of obscenely verbose logging by LMS; the file was 85GB. I trashed the file and redid the cache backup. My cache usage has since dropped to 62GB used out of 250GB. I have not run the btrfs balance at this point.

      I rebooted and started the array. All of my dockers were immediately back up and running. My Windows 10 VM is also back up and running. I have zero errors when running:

      btrfs dev stats /mnt/cache

      I did not rebuild my cache, docker.img, or VM. I only deleted the one 85GB log file and rebooted... I'm not convinced my docker image or VM are corrupted... They don't appear to be, as best as I can tell. I checked the syslog and app logs and don't see anything amiss. No errors that I can see... I've since updated a few plugins and dockers... stopped and started dockers from the UI. It all works. I've posted my latest diags...

      I'm pretty sure all of the issues I have had, including the write errors, are a result of the one out-of-control log file, and the way the btrfs cache seems to operate in this particular situation where it thinks there is no space left for some reason (even though I should have still had 90GB free even with the massive log file present). I'll continue monitoring over the next while to see if anything changes, but it looks pretty clear to me that this is what happened in this case.

      unraid-diagnostics-20180815-2124.zip
  12. Thanks Johnnie. I have not run the rebalance just yet... I will run the balance after I make some backups of my cache. I'll try:

      btrfs balance start -dusage=75 /mnt/cache

      Right now, I've been taking screencaps of as much of my docker and system config as possible, in case I need to more fully rebuild my whole system for some reason. I'm also trying to use mc, rsync, and CA backup/restore to create a backup of /mnt/cache. You can never have too many backups at times like this. I also want to see what crashplan may have backed up for me if I can get that docker up. I do have a crashplan backup on my array, but can't tell what's on it. Note to myself: a crashplan backup on the array is not very useful if the crashplan docker is offline and unusable.

      The first thing I noticed is that Logitech Media Server has a log file that is out of control. A year's worth of logging has produced an 85GB file, more than half of my stated used cache. I've trashed the log. I'll need to figure out how to limit that log going forward. Damn! Ideally this kind of runaway file should just not be allowed, or maybe an alert could be triggered? Will need to look at that... But now I'm wondering if this runaway log could have triggered most all of the issues I've been having, including the write errors?

      root@unraid:/mnt/cache/appdata/LogitechMediaServer/logs# ls --block-size=M -al
      total 84545M
      drwxrwxrwx 1 nobody users     1M Jul 19  2017 ./
      drwxrwxrwx 1 nobody users     1M Jul 17  2017 ../
      -rw-rw-rw- 1 nobody users     0M Jul 17  2017 perfmon.log
      -rw-rw-rw- 1 nobody users     1M Jul 19  2017 scanner.log
      -rw-rw-rw- 1 nobody users 84545M Aug 13 01:05 server.log
      -rw-rw-rw- 1 nobody users     1M Jul 18  2017 spotifyfamily1d.log
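      One note to my future self: since LMS keeps the file open, next time I'll probably truncate it in place rather than delete it, so the daemon isn't left holding a deleted file handle that pins the space. Something like this (untested on my box):

      truncate -s 0 /mnt/cache/appdata/LogitechMediaServer/logs/server.log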
  13. Thanks Johnnie. This is what I've been seeing in the 24 hours after resetting the cache disk error stats:

      1. I don't have any new errors since.

      root@unraid:/mnt# btrfs dev stats /mnt/cache
      [/dev/nvme0n1p1].write_io_errs 0
      [/dev/nvme0n1p1].read_io_errs 0
      [/dev/nvme0n1p1].flush_io_errs 0
      [/dev/nvme0n1p1].corruption_errs 0
      [/dev/nvme0n1p1].generation_errs 0
      [/dev/nvme1n1p1].write_io_errs 0
      [/dev/nvme1n1p1].read_io_errs 0
      [/dev/nvme1n1p1].flush_io_errs 0
      [/dev/nvme1n1p1].corruption_errs 0
      [/dev/nvme1n1p1].generation_errs 0

      2. I have 9 dockers configured and usually they would all be up. Right now I'm only running 5 dockers: sagetv, logitech media server, deluge, sickrage, and openvpn. Crashplan, duckdns, handbrake and plex are shut down.

      3. My SageTV docker recorded a bunch of shows last night (cache writes)... and I was able to watch another show simultaneously (cache and array reads depending on what I'm watching). Last night's recordings would result in 20GB+ being written to the cache. No issues in these recordings...

      4. My SageTV front end UI did slow down and become erratic while I was watching a show last night, at about Aug 13 23:49:03.

      5. See the syslog. I start getting errors like those below. Between 20:04 (last mover run) and 23:49, SageTV is busy recording, let's say, ~20GB+ of prime time shows.

      Aug 13 20:04:16 unraid root: mover finished
      Aug 13 23:49:03 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:49:03 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:52:15 unraid kernel: loop: Write error at byte offset 3852955648, length 4096.
      Aug 13 23:52:15 unraid kernel: blk_update_request: I/O error, dev loop1, sector 7525296
      Aug 13 23:52:15 unraid kernel: BTRFS error (device loop1): bdev /dev/loop1 errs: wr 433, rd 0, flush 0, corrupt 0, gen 0
      Aug 13 23:52:15 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:52:15 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:54:31 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:54:31 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 13 23:57:06 unraid shfs/user: err: shfs_write: write: (28) No space left on device

      6. These errors stop once I run the mover, which finishes at:

      Aug 14 00:24:29 unraid root: mover finished

      The mover has moved the recently recorded shows from the cache to the array, thereby clearing ~20GB from the cache.

      7. I have since changed the mover to run hourly in order to keep the cache as lean as possible (see the cron note below), and have not had these out-of-space errors so far today.

      8. So the BTRFS errors and the "no space left on device" errors are dependent on whether I run the mover or not.

      9. My system should have lots of free space on the cache, so it's not clear to me why it thinks it runs out of space unless the mover moves 10-20GB off the cache on an hourly basis. The Main tab reports 156/250 GB used, with 94GB free. I should not be out of space?

      10. Even with all these "BTRFS error (device loop1)" entries in the log, the error stats are still 0.

      So wondering what you think... I realize I still need to rebuild my cache to fix my system... but when would we expect my SSD write errors to recur? Wouldn't writing 20GB to the SSDs last night cause the error counts to increment? Is it time to replace the SSDs, or wait a bit longer and try to just restore my cache on my current hardware?

      syslog.txt
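      For reference, my understanding is that the hourly schedule I set in the UI boils down to a mover.cron entry roughly like this (a guess at what the Settings page writes out; the mover path is an assumption from my 6.3.5 box, not verified):

      # /boot/config/plugins/dynamix/mover.cron
      0 * * * * /usr/local/sbin/mover &> /dev/null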
  14. For #1, I'm getting set to do that and create an appdata backup. I'm just avoiding shutting down my dockers while my wife and kids are watching TV... WAF is an issue for me; my server is mission critical. :-) But so... I could also just use midnight commander (mc) to make a full copy of my /mnt/cache folder to a backup folder on the array? Will that work too? Do you have to shut down dockers before doing a backup? I did have crashplan installed at one point and it was backing up appdata... but I'd rather not use crashplan if I can avoid it. Either #1 or mc sounds much easier to me.
  15. Thanks johnnie.black. Ran it just now; here is the result. It does seem like a lot of errors. I'm not sure when these stats were last reset... On a working system, will you only ever see 0's here? Or can some kind of docker software issue also cause errors? i.e. Plex transcoding issues, or a SageTV tuner losing signal and producing a corrupted TV mpg recording (this happens sometimes). I'm just trying to make sure... Are you suggesting I need to pull these SSD cards and put in new ones?

      I'll post an updated status tomorrow... Right now, I still mostly only have SageTV up and running, hitting the cache at up to 10-18GB/hr at times... So lots of IO goes to this cache on a daily basis. I think I need to shut down my dockers to make a cache and appdata backup... I'm trying to figure out how to recover my system and rebuild my cache...

      root@unraid:/mnt/cache# btrfs dev stats -z /mnt/cache
      [/dev/nvme0n1p1].write_io_errs 14402
      [/dev/nvme0n1p1].read_io_errs 1
      [/dev/nvme0n1p1].flush_io_errs 0
      [/dev/nvme0n1p1].corruption_errs 0
      [/dev/nvme0n1p1].generation_errs 0
      [/dev/nvme1n1p1].write_io_errs 200298
      [/dev/nvme1n1p1].read_io_errs 0
      [/dev/nvme1n1p1].flush_io_errs 0
      [/dev/nvme1n1p1].corruption_errs 0
      [/dev/nvme1n1p1].generation_errs 0
  16. Thanks trurl. You are awesome.

      #1. But so I actually had installed CA backup/restore initially when I set up the box, but never ran it or set it up. My bad, I got a little lazy once my system was up. It seems I have v1, so it looks like I should uninstall it and use v2 instead. I hope that works ok... My system is clearly fragile right now. I cannot actually update any dockers right now. Maybe installing a new plugin will be fine. I'm not even sure I should try updating community apps if that makes things more unstable?

      #2. I'd like to recreate the dockers from templates, as I'm sure I have custom config. Just to confirm: I'm looking at the previous apps feature right now, and don't see my dockers (I only see 4 dockers I don't use, see screencap). I guess I will only see my dockers in there once I recreate a blank docker image? Just wanted to confirm.
  17. I think part of the issue/recommendation is that SageTV users may also be recording stuff 6-18 hours a day? So maybe that means the array will mostly be constantly spinning? Not sure. But I'll try moving the SageTV recordings share to the array once I get back to stable. I'm fine with that if it works, and it should be easy to test. I think I also saw a thread on moving the transcoding directory out of the plex docker, which also sounds useful. I'll need to look at that too. But right now, I'm trying to figure out how to stabilize and recover my crippled system. Do you have any advice on my #1-3 above? Thanks!
  18. Ok... So while my SageTV, Deluge, and Sickrage dockers all seem to still be up and running properly and some behaviour appears normal, my unraid setup is clearly having some issues. I tried stopping and starting my simple DuckDNS docker and it fails to start: "Server Execution Failure" (see screencap). I also cannot update any dockers. I cannot disable the autostart toggle on my dockers in the UI; the toggle change is not preserved. I think if I reboot, which is what I tried last time, I will get the blank Dockers tab, and maybe a few hours later my dockers will all actually be started, and if I refresh the Dockers tab I will see my dockers again.

      I am trying to figure out how to do 3 things (and am searching the forum threads), in order of priority, as I am assuming this is what I need to do:

      1. Backup my cache appdata folder immediately to save all my valuable config. I'm looking for a process or command line to do this. Ideally I guess I will just backup my whole /mnt/cache to somewhere on the array. Is there a recommended command or process for this? (My current best guess is sketched below.)

      2. Rebuild my cache pool from scratch and recreate my docker.img. This seems daunting and a tad scary... but I want to get started. I'm looking for the "official guide" or thread for this too. Does anyone know?

      3. Test my cache SSDs to see if they are functioning correctly. Is there some kind of test I can run? I'm happy to buy new SSDs if necessary, or try to replace my premium Samsung SSDs under warranty... but it would be nice to definitively confirm a hardware issue.

      Right now, I'm happy to try to restore/recover my crippled system.
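      For #1 and #3, this is what I'm planning to try unless someone tells me otherwise (a sketch only: the destination path is just where I'd put a backup, and I haven't verified the smartctl invocation against my NVMe drives):

      # 1. copy the whole cache to a backup folder on the array
      #    (dockers and VM stopped first so files aren't changing underneath)
      rsync -avh --progress /mnt/cache/ /mnt/user/backups/cache/

      # 3. dump SMART health info for each cache SSD
      smartctl -a /dev/nvme0
      smartctl -a /dev/nvme1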
  19. I don't have much on the cache: just appdata, docker.img, and the Windows 10 VM. And I also have the SageTV recordings share. I have 4 ATSC tuners on my network that the SageTV docker can use to record up to 4 HD TV shows at once. At maximum throughput, that works out to 6GB/hour per channel, or ~24GB/hr total. Though usually I'm only recording 1-2 shows at once. Recorded shows are then moved to the array by the mover. On the SageTV forums, I believe this is the recommended config for performance.
  20. Man... this sounds potentially terrible. But so my server is currently up, and all my dockers seem to be running. I've changed the mover to run every 4 hours instead of 8, as that should keep more space available on the cache. I've shut down Plex for now, just to see if that makes things more stable. I'm just missing my VM, but I can live without it for now. So I'm wondering how corrupted my system actually is.

      I'm not that experienced with doing low level maintenance, but I'd like to be, and should be able to do anything advised to fix the issues. This system has been really solid for a year, and the main load is the SageTV docker, which results in 100GB+ of over-the-air TV mpg files being written/read on the cache on a daily basis. The SageTV docker wrote 10-20GB overnight to the cache and it seems to be fine... Not sure how I got here, as I thought hardware SSD issues were incredibly rare. Issues seemed to start within the last few weeks/days, and came to a head when I ran Plex transcoding last week for a bit.

      The main tab shows "0 errors" for my cache drives. How do I reset the stats and look for write errors over the next days to see if they are still happening?

      I would like to upgrade to the latest unRAID, but there seem to be some compatibility issues with the SageTV docker, so I was holding off until necessary. Might be best to stabilize what I have before I take that on.

      Right... I have raid1. Maybe I should just switch to raid0 then if I'm having cache issues? Not sure how to do that... Maybe I should try an xfs cache while I'm at it? Is there a good guide to "redo" my cache pool? Not sure how to do that or what's involved.

      I really want to preserve/save my system... I have not backed up anything on the cache at this point. I don't have a real backup strategy unfortunately, so I'm wondering if there is anything I should do on that front immediately. I likely need to figure out how to save stuff on the cache in case things get worse.

      Maybe I should try the balance? So to do that, I just run this command now and report back?

      btrfs balance start -dusage=75 /mnt/cache

      Thanks! Really appreciate any feedback and guidance.
  21. Hi JB, thanks for looking at my logs. Really appreciate it. But so here is an update... I have 2 Samsung EVO 960 250G SSDs running in a fault tolerant raid1 setup (I followed a SpaceInvader youtube video to set this up). My understanding is that with this setup, if one SSD fails, the system stays online. In any event, I'm hoping it's not a hardware issue, as my hardware has been stable and untouched for a year.

      I noticed that my dockers have actually returned this morning! I started the array last night and confirmed my docker tab was still empty. Then I started this post to see if anyone has any ideas. Up until now I did not want my crippled array left started, so I had kept the array stopped. But last night I left the array started overnight, and I noticed my dockers were up in the morning when I got up. I'm not sure what could have happened, other than maybe the mover ran and freed up some space? My Windows 10 VM is still missing from the VM tab, though I see my VM image here in this 42GB file I have: /mnt/cache/domains/Windows 10/vdisk1.img. I'm kind of hoping there is a way to restore my windows VM and get it working, but I'm not sure how to do that...

      I think I have some kind of device space issue. The initial error in my log I noticed yesterday, after rebooting to an empty docker tab:

      Aug 9 21:17:01 unraid root: truncate: failed to truncate '/mnt/cache/system/docker/docker.img' at 21474836480 bytes: No space left on device

      Now, while my dockers have returned and are running, I'm looking at my system log and I see "no space left on device" errors like those shown below. In other threads, I've seen that missing dockers may be related to a filled docker.img file? I'm trying to figure out how to check for that, as I'm not clear if that is my situation. I'm also not understanding why my cache disks seem to report so much disk space used. I have a 250GB cache (see main tab screenshot). The main tab UI shows ~180GB used and 70GB free. Yet if I eyeball the files I actually have on the cache drive, I have appdata (~10GB I think), domains (42GB Win10 VM), docker.img (20GB), libvirt.img (1GB), plus some transient media data that the mover moves to the array every 6 hours. So I think I should only have about 70-80GB used on the cache. How can I check where my 180GB is going? Is there a way to check the cache disks and make sure nothing is wrong? (The commands I'm planning to try are at the end of this post.) Thanks! -Glenner.

      Aug 11 16:01:17 unraid root: >f+++++++++ sagemedia/tv/TheLateShowWithStephenColbert-S03E189-JimAcostaNinaDobrev-8917033-12.mpg.properties
      Aug 11 16:01:17 unraid root: .d..t...... sagemedia/tv/
      Aug 11 16:01:17 unraid root: .d..t...... sagemedia/
      Aug 11 16:01:17 unraid root: mover finished
      Aug 11 22:29:20 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 11 22:29:25 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 11 22:29:25 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 11 22:29:27 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 11 22:29:30 unraid shfs/user: err: shfs_write: write: (28) No space left on device
      Aug 11 22:32:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 22:32:13 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat.tmp.XXfhyapA /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat (28) No space left on device
      Aug 11 22:32:13 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.lastfm.log.2 /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.lastfm.log.3 (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.2.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.3.log (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/63/._4190e53018e6273f7e438f695cb1505e21dde4_attributes /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/63/4190e53018e6273f7e438f695cb1505e21dde4_attributes (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/._CacheInfo /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/CacheInfo (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/._CacheInfo /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.lastfm/HTTP.system/CacheInfo (28) No space left on device
      Aug 11 22:32:17 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Metadata/Artists/c/5038b2eb1102b83f6ec48c3295dac0c1e7e057c.bundle/Contents/com.plexapp.agents.lastfm/._Info.xml /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Metadata/Artists/c/5038b2eb1102b83f6ec48c3295dac0c1e7e057c.bundle/Contents/com.plexapp.agents.lastfm/Info.xml (28) No space left on device
      Aug 11 22:32:18 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.localmedia.log.4 /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.localmedia.log.5 (28) No space left on device
      Aug 11 22:32:22 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system/._Dict /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system/Dict (28) No space left on device
      Aug 11 22:32:27 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.htbackdrops.log.4 /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.htbackdrops.log.5 (28) No space left on device
      Aug 11 22:32:27 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.htbackdrops/HTTP.system/ad/._3bcd937b316c57255c218179b0269c9b06a550_attributes /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.htbackdrops/HTTP.system/ad/3bcd937b316c57255c218179b0269c9b06a550_attributes (28) No space left on device
      Aug 11 22:32:27 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.htbackdrops/HTTP.system/._CacheInfo /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Caches/com.plexapp.agents.htbackdrops/HTTP.system/CacheInfo (28) No space left on device
      Aug 11 22:32:30 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Metadata/Artists/c/5038b2eb1102b83f6ec48c3295dac0c1e7e057c.bundle/Contents/_combined/._Info.xml /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Metadata/Artists/c/5038b2eb1102b83f6ec48c3295dac0c1e7e057c.bundle/Contents/_combined/Info.xml (28) No space left on device
      Aug 11 22:35:35 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/binhex-delugevpn/state/torrents.state /mnt/cache/appdata/binhex-delugevpn/state/torrents.state.bak (28) No space left on device
      Aug 11 22:38:55 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/binhex-delugevpn/state/torrents.state /mnt/cache/appdata/binhex-delugevpn/state/torrents.state.bak (28) No space left on device
      Aug 11 22:42:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 22:42:15 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/binhex-delugevpn/state/torrents.state /mnt/cache/appdata/binhex-delugevpn/state/torrents.state.bak (28) No space left on device
      Aug 11 22:45:35 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/binhex-delugevpn/state/torrents.state.tmp /mnt/cache/appdata/binhex-delugevpn/state/torrents.state (28) No space left on device
      Aug 11 22:50:26 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat.tmp.XXYg3otU /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat (28) No space left on device
      Aug 11 22:52:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 22:52:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.1.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.2.log (28) No space left on device
      Aug 11 22:53:48 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/sickrage/cache.db-journal /mnt/cache/appdata/sickrage/.fuse_hidden0012bf7600000056 (28) No space left on device
      Aug 11 22:55:02 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/sagetv/server/Sage.properties.tmp /mnt/cache/appdata/sagetv/server/Sage.properties (28) No space left on device
      Aug 11 23:02:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 23:02:12 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.1.log (28) No space left on device
      Aug 11 23:02:13 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.4.log /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.5.log (28) No space left on device
      Aug 11 23:02:13 unraid shfs/user: err: shfs_rename: rename: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat.tmp.XX3nJR9Z /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat (28) No space left on device
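      And here's how I'm planning to check where the 180GB is going (a sketch; I believe docker.img is loop-mounted at /var/lib/docker on unraid, but someone correct me if that's wrong):

      # per-folder usage on the cache
      du -h --max-depth=1 /mnt/cache

      # btrfs's own view of allocation vs. actual data
      btrfs filesystem usage /mnt/cache

      # how full the docker.img loop filesystem is
      df -h /var/lib/docker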
  22. Hi all. I'm a bit stuck... and so looking for advice. While my unraid server has been rock solid for about a year, it is currently completely offline, as all my docker apps are gone. Here is a summary of what I am seeing.

      Yesterday, I noticed my Windows 10 VM was suspending/crashing frequently, so I decided to reboot unraid. I also noticed some dockers were crashing and generally behaving erratically. On reboot and array start, my docker tab is completely empty. I'm missing the ~10 dockers I had installed and running. I'm also missing my configured Windows 10 VM on the VM tab.

      About a week ago, I noticed the server seemed to kind of crash while I was remotely using Plex to transcode a show to my phone. It was odd, as I have used Plex transcode while away from home before and it worked fine. I rebooted in that case as well, and the dockers and VM started ok and seemed to run until the last few days, when they became more erratic.

      Attached is a section of my System Log. I did notice this line, which looks a bit unsettling:

      Aug 9 21:17:01 unraid root: truncate: failed to truncate '/mnt/cache/system/docker/docker.img' at 21474836480 bytes: No space left on device

      I'm scanning the forums to see if anyone has reported similar issues. I'd really like to get my system back online, and I'm *really* hoping I have not actually lost my dockers and VM, as I have invested huge time in carefully setting up my ~10 dockers and the Windows 10 VM. Thanks for any advice/info! -Glenner.

      This is my system info:

      unraid 6.3.5
      Model: Custom
      M/B: ASUSTeK COMPUTER INC. - PRIME H270-PRO
      CPU: Intel® Core™ i7-7700 CPU @ 3.60GHz
      HVM: Enabled
      IOMMU: Enabled
      Cache: 256 kB, 1024 kB, 8192 kB
      Memory: 32 GB (max. installable capacity 64 GB)
      Network: bond0: fault-tolerance (active-backup), mtu 1500
      eth0: 1000 Mb/s, full duplex, mtu 1500
      Kernel: Linux 4.9.30-unRAID x86_64
      OpenSSL: 1.0.2k

      SystemLog.txt
  23. Frank! Thanks a lot for this info... I installed the "Tips and Tweaks" plugin and set the disk cache settings as follows:

      vm.dirty_background_ratio = 1%
      vm.dirty_ratio = 2%

      I have not had an OOM situation in the week or so since then. I'll report back if I see any OOM mover-related errors this week, but for now, I think the issue is resolved. Thanks again! -Glen.
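      For anyone who'd rather not install the plugin, I believe (though I haven't verified this is exactly what Tips and Tweaks does under the hood) the same settings can be applied by hand with sysctl:

      # lower the page cache dirty thresholds so writeback starts earlier
      sysctl -w vm.dirty_background_ratio=1
      sysctl -w vm.dirty_ratio=2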
  24. I had one of my unraid dockers "crash" on 8/15 at ~12am... My SageTV docker app, which I use to record OTA TV shows, effectively crashed and became unresponsive. At the time, it was recording 2 HD shows and thereby writing about 12GB/hour to the cache pool. Looking into it, I noticed that the mover was in fact also running at 12am and was trying to move 20-40GB of data (previously recorded HD shows) from the cache to the array. My mover is set to run every 8 hours (12am, 8am, 4pm).

      The mover run produced all kinds of exceptions in the syslog between 12:00 and 12:03am, and the kernel eventually killed one of my java processes (pid=14906; I believe this would be the SageTV java process). It seems to be using 6GB? I have 32GB of RAM on the machine... Here are some excerpts of the log below, and I also attached the full log. Has anyone seen out of memory errors like this when the mover is running, or does anyone know what might be wrong or what I can do about it? Thanks!

      Aug 15 00:00:01 unraid root: mover started
      Aug 15 00:00:01 unraid root: moving "sagemedia" to array
      Aug 15 00:00:01 unraid root: .d..t...... ./
      Aug 15 00:00:01 unraid root: .d..t...... sagemedia/
      Aug 15 00:00:01 unraid root: .d..t...... sagemedia/tv/
      Aug 15 00:00:01 unraid root: >f+++++++++ sagemedia/tv/WildKratts-S03E02-WheretheBisonRoam-7816684-0.mpg
      Aug 15 00:00:25 unraid kernel: warn_alloc: 1069190 callbacks suppressed
      Aug 15 00:00:25 unraid kernel: java: page allocation stalls for 10985ms, order:0, mode:0x34200ca(GFP_HIGHUSER_MOVABLE|__GFP_WRITE)
      Aug 15 00:00:25 unraid kernel: CPU: 4 PID: 19630 Comm: java Not tainted 4.9.30-unRAID #1
      Aug 15 00:00:25 unraid kernel: Hardware name: System manufacturer System Product Name/PRIME H270-PRO, BIOS 0323 01/04/2017
      Aug 15 00:00:25 unraid kernel: ffffc9000c59ba68 ffffffff813a4a1b 0000000000000001 0000000000000000
      Aug 15 00:00:25 unraid kernel: ffffc9000c59baf8 ffffffff810cb5b1 034200ca810c9d8d ffffffff8193d4e2
      Aug 15 00:00:25 unraid kernel: ffffc9000c59ba90 0000000000000010 ffffc9000c59bb08 ffffc9000c59baa8
      Aug 15 00:00:25 unraid kernel: Call Trace:
      Aug 15 00:00:25 unraid kernel: [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
      Aug 15 00:00:25 unraid kernel: [<ffffffff810cb5b1>] warn_alloc+0x102/0x116
      Aug 15 00:00:25 unraid kernel: [<ffffffff810cbb67>] __alloc_pages_nodemask+0x541/0xc71
      Aug 15 00:00:25 unraid kernel: [<ffffffff8167c00e>] ? __schedule+0x2b1/0x46a
      Aug 15 00:00:25 unraid kernel: [<ffffffff8107c0fd>] ? wake_up_bit+0x25/0x25
      Aug 15 00:00:25 unraid kernel: [<ffffffff81245966>] ? fuse_request_free+0x3b/0x3e
      Aug 15 00:00:25 unraid kernel: [<ffffffff81102d82>] alloc_pages_current+0xbe/0xe8
      Aug 15 00:00:25 unraid kernel: [<ffffffff810c4d78>] __page_cache_alloc+0x89/0x9f
      Aug 15 00:00:25 unraid kernel: [<ffffffff810c4ecc>] pagecache_get_page+0x13e/0x1e6
      Aug 15 00:00:25 unraid kernel: [<ffffffff810c4f8f>] grab_cache_page_write_begin+0x1b/0x32
      Aug 15 00:00:25 unraid kernel: [<ffffffff8124d8d7>] fuse_perform_write+0x186/0x484
      Aug 15 00:00:25 unraid kernel: [<ffffffff81138461>] ? file_remove_privs+0x42/0x98
      Aug 15 00:00:25 unraid kernel: [<ffffffff81153c96>] ? fsnotify_destroy_event+0x5d/0x64
      Aug 15 00:00:25 unraid kernel: [<ffffffff811553eb>] ? inotify_handle_event+0xe2/0x100
      ...
      ...
      Aug 15 00:03:08 unraid kernel: Mem-Info:
      Aug 15 00:03:08 unraid kernel: active_anon:967739 inactive_anon:16972 isolated_anon:0
      Aug 15 00:03:08 unraid kernel: active_file:6219808 inactive_file:634419 isolated_file:2592
      Aug 15 00:03:08 unraid kernel: unevictable:0 dirty:634162 writeback:1385 unstable:0
      Aug 15 00:03:08 unraid kernel: slab_reclaimable:114589 slab_unreclaimable:140487
      Aug 15 00:03:08 unraid kernel: mapped:21252 shmem:135948 pagetables:6557 bounce:0
      Aug 15 00:03:08 unraid kernel: free:65981 free_pcp:103 free_cma:0
      Aug 15 00:03:08 unraid kernel: Node 0 active_anon:3870956kB inactive_anon:67888kB active_file:24879232kB inactive_file:2537676kB unevictable:0kB isolated(anon):0kB isolated(file):10368kB mapped:85008kB dirty:2536648kB writeback:5540kB shmem:543792kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2609152kB writeback_tmp:0kB unstable:0kB pages_scanned:475716 all_unreclaimable? no
      Aug 15 00:03:08 unraid kernel: Node 0 DMA free:15896kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15896kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
      Aug 15 00:03:08 unraid kernel: lowmem_reserve[]: 0 2906 31944 31944
      Aug 15 00:03:08 unraid kernel: Node 0 DMA32 free:126392kB min:12288kB low:15360kB high:18432kB active_anon:405944kB inactive_anon:0kB active_file:2286824kB inactive_file:235532kB unevictable:0kB writepending:234380kB present:3136448kB managed:3126452kB mlocked:0kB slab_reclaimable:57748kB slab_unreclaimable:5744kB kernel_stack:116kB pagetables:308kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
      Aug 15 00:03:08 unraid kernel: lowmem_reserve[]: 0 0 29038 29038
      Aug 15 00:03:08 unraid kernel: Node 0 Normal free:121636kB min:122808kB low:153508kB high:184208kB active_anon:3465012kB inactive_anon:67888kB active_file:22592408kB inactive_file:2302676kB unevictable:0kB writepending:2307808kB present:30261248kB managed:29735764kB mlocked:0kB slab_reclaimable:400608kB slab_unreclaimable:556204kB kernel_stack:15244kB pagetables:25920kB bounce:0kB free_pcp:412kB local_pcp:0kB free_cma:0kB
      Aug 15 00:03:08 unraid kernel: lowmem_reserve[]: 0 0 0 0
      Aug 15 00:03:08 unraid kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15896kB
      Aug 15 00:03:08 unraid kernel: Node 0 DMA32: 136*4kB (UMEH) 163*8kB (UMEH) 70*16kB (UEH) 34*32kB (UMEH) 185*64kB (UMEH) 159*128kB (UMEH) 83*256kB (UMEH) 41*512kB (UMEH) 19*1024kB (UME) 2*2048kB (M) 6*4096kB (M) = 126616kB
      Aug 15 00:03:08 unraid kernel: Node 0 Normal: 4508*4kB (ME) 11750*8kB (UMEH) 182*16kB (UME) 0*32kB 0*64kB 0*128kB 1*256kB (H) 1*512kB (H) 0*1024kB 1*2048kB (H) 1*4096kB (H) = 121856kB
      Aug 15 00:03:08 unraid kernel: 6992822 total pagecache pages
      Aug 15 00:03:08 unraid kernel: 0 pages in swap cache
      Aug 15 00:03:08 unraid kernel: Swap cache stats: add 0, delete 0, find 0/0
      Aug 15 00:03:08 unraid kernel: Free swap = 0kB
      Aug 15 00:03:08 unraid kernel: Total swap = 0kB
      Aug 15 00:03:08 unraid kernel: 8353419 pages RAM
      Aug 15 00:03:08 unraid kernel: 0 pages HighMem/MovableOnly
      Aug 15 00:03:08 unraid kernel: 133891 pages reserved
      Aug 15 00:03:08 unraid kernel: [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
      Aug 15 00:03:08 unraid kernel: [ 1390] 0 1390 6700 743 14 3 0 -1000 udevd
      Aug 15 00:03:08 unraid kernel: [ 1564] 0 1564 59436 689 25 4 0 0 rsyslogd
      Aug 15 00:03:08 unraid kernel: [ 1699] 81 1699 4900 60 14 3 0 0 dbus-daemon
      Aug 15 00:03:08 unraid kernel: [ 1707] 1 1707 3342 512 11 3 0 0 rpcbind
      Aug 15 00:03:08 unraid kernel: [ 1713] 32 1713 5352 1467 15 3 0 0 rpc.statd
      Aug 15 00:03:08 unraid kernel: [ 1723] 0 1723 1619 395 8 3 0 0 inetd
      Aug 15 00:03:08 unraid kernel: [ 1732] 0 1732 6120 762 17 3 0 -1000 sshd
      Aug 15 00:03:08 unraid kernel: [ 1746] 44 1746 26121 1281 26 3 0 0 ntpd
      Aug 15 00:03:08 unraid kernel: [ 1753] 0 1753 1095 29 7 3 0 0 acpid
      Aug 15 00:03:08 unraid kernel: [ 1762] 0 1762 1621 413 8 3 0 0 crond
      Aug 15 00:03:08 unraid kernel: [ 1764] 0 1764 1618 25 8 3 0 0 atd
      Aug 15 00:03:08 unraid kernel: [ 1790] 0 1790 2383 592 9 3 0 0 cpuload
      Aug 15 00:03:08 unraid kernel: [ 7810] 0 7810 75777 37884 137 3 0 0 php
      Aug 15 00:03:08 unraid kernel: [ 8701] 0 8701 22477 950 17 3 0 0 emhttp
      Aug 15 00:03:08 unraid kernel: [ 8702] 0 8702 1627 413 8 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [ 8703] 0 8703 1627 390 8 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [ 8704] 0 8704 1627 410 8 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [ 8705] 0 8705 1627 399 7 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [ 8706] 0 8706 1627 390 8 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [ 8707] 0 8707 1627 391 7 3 0 0 agetty
      Aug 15 00:03:08 unraid kernel: [12491] 0 12491 55386 1472 106 3 0 0 nmbd
      Aug 15 00:03:08 unraid kernel: [12493] 0 12493 75119 3820 144 3 0 0 smbd
      Aug 15 00:03:08 unraid kernel: [12495] 0 12495 73584 1167 137 3 0 0 smbd-notifyd
      Aug 15 00:03:08 unraid kernel: [12496] 0 12496 73591 1332 137 3 0 0 cleanupd
      Aug 15 00:03:08 unraid kernel: [12500] 0 12500 68258 1990 130 3 0 0 winbindd
      Aug 15 00:03:08 unraid kernel: [12501] 0 12501 68150 1698 131 3 0 0 winbindd
      Aug 15 00:03:08 unraid kernel: [12515] 61 12515 8622 766 21 3 0 0 avahi-daemon
      Aug 15 00:03:08 unraid kernel: [12516] 61 12516 8557 63 21 3 0 0 avahi-daemon
      Aug 15 00:03:08 unraid kernel: [12525] 0 12525 3185 27 11 3 0 0 avahi-dnsconfd
      Aug 15 00:03:08 unraid kernel: [12915] 0 12915 68258 985 129 3 0 0 winbindd
      Aug 15 00:03:08 unraid kernel: [13255] 0 13255 55287 302 18 3 0 0 shfs
      Aug 15 00:03:08 unraid kernel: [13265] 0 13265 535312 32529 132 5 0 0 shfs
      Aug 15 00:03:08 unraid kernel: [13302] 0 13302 2382 606 9 3 0 0 diskload
      Aug 15 00:03:08 unraid kernel: [13445] 0 13445 1618 26 8 3 0 0 atd
      Aug 15 00:03:08 unraid kernel: [13446] 0 13446 2906 640 9 3 0 0 sh
      Aug 15 00:03:08 unraid kernel: [13447] 0 13447 43376 5464 74 3 0 0 startBackground
      Aug 15 00:03:08 unraid kernel: [13460] 0 13460 2907 644 10 3 0 0 sh
      Aug 15 00:03:08 unraid kernel: [13461] 0 13461 2907 687 10 3 0 0 script
      Aug 15 00:03:08 unraid kernel: [13465] 0 13465 1098 194 7 3 0 0 tail
      Aug 15 00:03:08 unraid kernel: [13532] 0 13532 225961 8847 74 5 0 -500 dockerd
      Aug 15 00:03:08 unraid kernel: [13547] 0 13547 126958 2699 37 6 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [13744] 0 13744 105820 784 27 6 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [13767] 0 13767 8081 1260 19 3 0 0 my_init
      Aug 15 00:03:08 unraid kernel: [13825] 0 13825 89436 780 25 6 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [13844] 0 13844 49 1 3 2 0 0 s6-svscan
      Aug 15 00:03:08 unraid kernel: [13938] 0 13938 49 1 3 2 0 0 s6-supervise
      Aug 15 00:03:08 unraid kernel: [14070] 0 14070 89724 775 25 6 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [14123] 0 14123 1126 18 7 3 0 0 sh
      Aug 15 00:03:08 unraid kernel: [14173] 0 14173 4548 104 13 3 0 0 noip.sh
      Aug 15 00:03:08 unraid kernel: [14192] 65534 14192 4273 46 13 3 0 0 noip2-x86_64
      Aug 15 00:03:08 unraid kernel: [14225] 0 14225 5052 674 16 5 0 -500 docker-proxy
      Aug 15 00:03:08 unraid kernel: [14236] 0 14236 20923 703 16 5 0 -500 docker-proxy
      Aug 15 00:03:08 unraid kernel: [14246] 0 14246 4539 684 15 5 0 -500 docker-proxy
      Aug 15 00:03:08 unraid kernel: [14256] 0 14256 56703 1455 23 5 0 -500 docker-proxy
      Aug 15 00:03:08 unraid kernel: [14265] 0 14265 73052 775 25 5 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [14284] 0 14284 8532 1257 18 3 0 0 my_init
      Aug 15 00:03:08 unraid kernel: [14356] 0 14356 89500 787 25 5 0 -500 docker-containe
      Aug 15 00:03:08 unraid kernel: [14376] 0 14376 7269 1248 17 3 0 0 my_init
      Aug 15 00:03:08 unraid kernel: [14466] 0 14466 1098 21 7 3 0 0 runsvdir
      Aug 15 00:03:08 unraid kernel: [14467] 0 14467 1060 18 6 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14468] 0 14468 1060 18 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14469] 0 14469 1060 17 6 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14470] 0 14470 1060 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14471] 0 14471 1060 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14472] 0 14472 1060 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14473] 0 14473 46482 710 77 3 0 0 openbox
      Aug 15 00:03:08 unraid kernel: [14474] 0 14474 18114 400 39 3 0 0 syslog-ng
      Aug 15 00:03:08 unraid kernel: [14475] 0 14475 5320 56 14 3 0 0 run
      Aug 15 00:03:08 unraid kernel: [14476] 0 14476 1904 17 9 3 0 0 tail
      Aug 15 00:03:08 unraid kernel: [14477] 0 14477 5318 56 15 3 0 0 run
      Aug 15 00:03:08 unraid kernel: [14478] 0 14478 19076 3300 31 3 0 0 Xvnc
      Aug 15 00:03:08 unraid kernel: [14480] 0 14480 5329 76 15 3 0 0 bash
      Aug 15 00:03:08 unraid kernel: [14482] 0 14482 1699563 202924 566 9 0 0 java
      Aug 15 00:03:08 unraid kernel: [14508] 0 14508 24760 3365 49 3 0 0 python
      Aug 15 00:03:08 unraid kernel: [14656] 0 14656 5320 67 15 3 0 0 startapp.sh
      Aug 15 00:03:08 unraid kernel: [14783] 0 14783 49 1 3 2 0 0 s6-supervise
      Aug 15 00:03:08 unraid kernel: [14785] 0 14785 49 1 3 2 0 0 s6-supervise
      Aug 15 00:03:08 unraid kernel: [14786] 0 14786 49 1 3 2 0 0 s6-supervise
      Aug 15 00:03:08 unraid kernel: [14788] 106 14788 11231 128 26 3 0 0 avahi-daemon
      Aug 15 00:03:08 unraid kernel: [14789] 105 14789 10723 103 25 3 0 0 dbus-daemon
      Aug 15 00:03:08 unraid kernel: [14792] 99 14792 190996 17926 387 3 0 0 Plex Media Serv
      Aug 15 00:03:08 unraid kernel: [14829] 99 14829 444237 22233 148 5 0 0 Plex Script Hos
      Aug 15 00:03:08 unraid kernel: [14860] 99 14860 3206 59 11 3 0 0 startsagecore
      Aug 15 00:03:08 unraid kernel: [14872] 0 14872 1098 22 7 3 0 0 runsvdir
      Aug 15 00:03:08 unraid kernel: [14873] 0 14873 1060 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14874] 0 14874 1060 18 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14875] 0 14875 1060 17 6 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14876] 0 14876 1060 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [14878] 0 14878 1904 18 9 3 0 0 tail
      Aug 15 00:03:08 unraid kernel: [14879] 0 14879 7318 60 19 3 0 0 cron
      Aug 15 00:03:08 unraid kernel: [14880] 0 14880 18114 404 38 3 0 0 syslog-ng
      Aug 15 00:03:08 unraid kernel: [14906] 99 14906 1666125 314068 843 9 0 0 java
      Aug 15 00:03:08 unraid kernel: [15088] 99 15088 86737 13555 147 3 0 0 Plex DLNA Serve
      Aug 15 00:03:08 unraid kernel: [15096] 99 15096 175075 579 62 3 0 0 Plex Tuner Serv
      Aug 15 00:03:08 unraid kernel: [15270] 0 15270 1082950 72118 327 8 0 0 java
      Aug 15 00:03:08 unraid kernel: [15360] 0 15360 18821 853 38 3 0 0 virtlockd
      Aug 15 00:03:08 unraid kernel: [15366] 0 15366 35734 983 41 3 0 0 virtlogd
      Aug 15 00:03:08 unraid kernel: [15382] 0 15382 54005 2967 75 3 0 0 libvirtd
      Aug 15 00:03:08 unraid kernel: [15510] 99 15510 4378 493 13 3 0 0 dnsmasq
      Aug 15 00:03:08 unraid kernel: [15511] 0 15511 4345 53 12 3 0 0 dnsmasq
      Aug 15 00:03:08 unraid kernel: [15651] 0 15651 1099 21 6 3 0 0 runsvdir
      Aug 15 00:03:08 unraid kernel: [15652] 0 15652 1061 18 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [15653] 0 15653 1061 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [15654] 0 15654 1061 17 7 3 0 0 runsv
      Aug 15 00:03:08 unraid kernel: [15655] 0 15655 5741 59 14 3 0 0 run
      Aug 15 00:03:08 unraid kernel: [15656] 0 15656 70062 39114 139 3 0 0 squeezeboxserve
      Aug 15 00:03:08 unraid kernel: [15658] 0 15658 2331 17 7 3 0 0 tail
      Aug 15 00:03:08 unraid kernel: [ 839] 1000 839 94791 5590 151 3 0 0 smbd
      Aug 15 00:03:08 unraid kernel: [13182] 0 13182 104077 6130 177 3 0 0 smbd
      Aug 15 00:03:08 unraid kernel: [ 7284] 99 7284 223937 8800 101 3 0 0 Plex Script Hos
      Aug 15 00:03:08 unraid kernel: [ 7376] 99 7376 222014 8872 96 4 0 0 Plex Script Hos
      Aug 15 00:03:08 unraid kernel: [12717] 99 12717 78857 46008 120 4 0 0 comskip
      Aug 15 00:03:08 unraid kernel: [ 7879] 99 7879 33345 1507 68 3 0 0 Plex Transcoder
      Aug 15 00:03:08 unraid kernel: [ 8790] 0 8790 2374 625 9 3 0 0 sh
      Aug 15 00:03:08 unraid kernel: [ 8793] 0 8793 2397 661 9 3 0 0 mover
      Aug 15 00:03:08 unraid kernel: [ 8794] 0 8794 1622 419 8 3 0 0 logger
      Aug 15 00:03:08 unraid kernel: [ 8801] 0 8801 1087 171 7 3 0 0 move
      Aug 15 00:03:08 unraid kernel: [ 8813] 0 8813 3039 575 10 3 0 0 rsync
      Aug 15 00:03:08 unraid kernel: [ 8814] 0 8814 2944 396 10 3 0 0 rsync
      Aug 15 00:03:08 unraid kernel: [ 8815] 0 8815 3009 345 10 3 0 0 rsync
      Aug 15 00:03:08 unraid kernel: [ 9502] 0 9502 1094 18 7 3 0 0 sleep
      Aug 15 00:03:08 unraid kernel: [ 9659] 99 9659 190996 18028 363 3 0 0 Plex Media Serv
      Aug 15 00:03:08 unraid kernel: [ 9770] 0 9770 29108 2130 46 3 0 0 php
      Aug 15 00:03:08 unraid kernel: [ 9779] 0 9779 2374 630 9 3 0 0 sh
      Aug 15 00:03:08 unraid kernel: [ 9784] 0 9784 5375 123 14 3 0 0 awk
      Aug 15 00:03:08 unraid kernel: Out of memory: Kill process 14906 (java) score 38 or sacrifice child
      Aug 15 00:03:08 unraid kernel: Killed process 14906 (java) total-vm:6664500kB, anon-rss:1256272kB, file-rss:0kB, shmem-rss:0kB
      Aug 15 00:03:08 unraid kernel: oom_reaper: reaped process 14906 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
      Aug 15 00:04:35 unraid root: .d..t...... sagemedia/tv/
      Aug 15 00:04:35 unraid root: >f+++++++++ sagemedia/tv/WildKratts-S03E02-WheretheBisonRoam-7816684-0.log
      Aug 15 00:04:36 unraid root: >f+++++++++ sagemedia/tv/WildKratts-S03E02-WheretheBisonRoam-7816684-0.txt
      Aug 15 00:04:36 unraid root: .d..t...... sagemedia/tv/

      syslog-20170811.18.10.04.358914876.txt