willdouglas

Members
  • Content Count

    19
  • Joined

  • Last visited

Community Reputation

0 Neutral

About willdouglas

  • Rank
    Member

Converted

  • Gender
    Male

Recent Profile Visitors

234 profile views
  1. Ten days of uptime. I'm comfortable leaving well enough alone. End thread.
  2. Figured I should follow up on this: I never determined what the issue was or managed to pull usable info out of a crash. I replaced the USB drive with a newer one and, with a fresh install, only installed the official Plex docker. All other apps are running in an Unraid VM until I feel like investing time in this again.
  3. Alright, after cutting the docker apps down to Sonarr, NZBGet, and Plex I've somehow extended the average uptime to three or four days, and I have a new symptom to add. If I catch the lockup after network access drops, but before the "hangcheck value past margin" messages start appearing, there is limited access to the box at the console. I wasn't able to log in, but I could get it to respond to key presses with what felt like a fifteen to twenty second delay. I ended up rebooting rather than fighting with it because family had requested Plex access, but I'll re-enable logging and see if I can catch it quickly enough. I do have an IPMI controller onboard, so I'll see if I can get into the box that way next time to poke around, and maybe dump logs to another box for easier viewing (see the remote-logging sketch at the end of this post list).
  4. I'm going to, possibly unfairly, blame the binhex-radarr docker. They did warn me; the listing said it was under active development, but I've been using Radarr for a while, how bad could it be?! I'm at just over 48 hours of uptime, so I'm going to put the box back in the crawlspace where it lives and continue without Radarr for now, maybe run it on another host for the time being.
  5. Fail! Made it somewhere around nineteen hours, I think. Going to leave Radarr disabled and see how long an uptime counter I can accrue. EDIT: forgot to mention I upgraded to 6.5.0 before starting Radarr, so I was on a fresh reboot and upgrade, possibly hosing my troubleshooting by changing an extra variable. It seemed so innocent, so safe! Uptime was great! FCPsyslog_tail.txt cadence-diagnostics-20180314-0542.zip
  6. 72 hours of uptime. I enabled Deluge, NZBGet, and Sonarr all at the same time yesterday and triggered a couple of searches to see if they were playing nicely together. No lockups in the last 24 hours. At this point I only have Radarr and PlexPy left to re-enable, and I'm kinda tempted to just not enable them, but I will forge ahead and enable Radarr.
  7. I've broken 48 hours of uptime. I feel safe ruling out Plex at this point. Time for the next app!
  8. I'm breaking records over here with twenty hours of uptime and no lockup. Once I hit 24 hours I'll start the first app, probably Plex, then 24 hours later start another, and so on (one way to log each re-enable is sketched at the end of this post list). I thought I had them all installed and running before the lockups started, but I might not have. I did update the library shares before the crashes started, so maybe it'll turn out to be tied to Plex usage as people started adding load.
  9. I'm not seeing anything pop up using stress or the memory testers (commands of that sort are sketched at the end of this post list). I also updated my post with the correct SAS controller: it's an LSI 9211-8i. I verified it has the most recent IT firmware flashed, 20.00.07.00-IT; nothing newer on their site, anyway. Going to start trying permutations of apps, I guess. I feel like the hardware is good; the only new purchases for this build were the drives and the controller, and the server previously did duty as a virtualization server for training, so if it was going to flake out I'd hope I would already have seen it impacting VMs.
  10. Have you observed any output at the console when the system locks up? I'm seeing a similar issue and am also elbow-deep in hardware tests that seem to pass with flying colors.
  11. Logs from the second lockup. cadence-diagnostics-20180310-0257.zip FCPsyslog_tail.txt
  12. Daily lockup achieved, logs captured and attached (see the log-capture sketch at the end of this post list). cadence-diagnostics-20180309-1217.zip FCPsyslog_tail.txt
  13. I will do that now. 24 hours of memory tests haven't revealed any issues, so it's time for the next thing.
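
A minimal sketch of the remote logging and IPMI access mentioned in post 3, assuming a second Linux box on the LAN is already listening for syslog on UDP 514; the IP addresses, username, and password below are placeholders, and serial-over-LAN will only show console output if serial redirection is enabled in the BIOS/boot options.

    # Mirror syslog to another machine so the tail of the log survives a hard lockup.
    # 192.168.1.50 is a placeholder for the box running the syslog listener.
    echo '*.* @192.168.1.50:514' >> /etc/rsyslog.conf
    /etc/rc.d/rc.rsyslogd restart   # Unraid's rc script; the path may differ elsewhere

    # From a second machine, attach to the server's console over IPMI serial-over-LAN.
    # BMC address and credentials are placeholders.
    ipmitool -I lanplus -H 192.168.1.60 -U admin -P changeme sol activate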
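
For the one-app-per-day plan in post 8, something like the following keeps a timestamped record on the flash drive of when each container came back up, so a later lockup can be matched against the most recent change; the container names are only examples of whatever they're called on the Docker tab.

    # Start a single container and note when it was re-enabled.
    docker start plex && echo "$(date) started plex" >> /boot/app-enable.log

    # Roughly 24 hours later, if the box is still up, bring back the next one.
    docker start sonarr && echo "$(date) started sonarr" >> /boot/app-enable.log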
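
For the hardware checks in post 9, these are the kinds of commands involved, assuming the LSI sas2flash utility and the stress/memtester binaries are available (on Unraid they typically come from a plugin such as NerdPack, or get run from a separate live USB); the sizes and durations are arbitrary.

    # Report the adapter, BIOS, and IT firmware version flashed on the 9211-8i.
    sas2flash -list

    # CPU and memory soak: 4 CPU workers plus 2 workers each thrashing 2 GB, for 24 hours.
    stress --cpu 4 --vm 2 --vm-bytes 2G --timeout 86400

    # Pattern-test 2 GB of RAM from userspace, two passes.
    memtester 2048M 2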
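
And for the log bundles attached in posts 11 and 12: on Unraid 6.x a diagnostics zip can be generated from the console or an SSH session, and FCPsyslog_tail.txt is written to the flash drive by the Fix Common Problems plugin's Troubleshooting Mode so it survives a hard reset; treat the paths below as a sketch rather than gospel.

    # Generate a diagnostics bundle; the zip lands in /boot/logs on the flash drive.
    diagnostics

    # Watch the live syslog while waiting for the next lockup.
    tail -f /var/log/syslog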