About howiser1

  1. I too am now having the same issues as above, so no point in duplicating them. I had no issues on NC 17 or 18; it was only when I went to NC 20.0.4 that this started to occur. So is it NC? As mentioned above, I recently added (on NC 20) a binding/redirect from the Docker /tmp path back to my cache disk. After it just locked up again, I'm going to remove that and see what happens. What has been consistent on the "crash" is accessing NC via the iPhone iOS app. For whatever reason, that seems to be the thing that locks up the Docker container. Not always, but at least…
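  [Editor's note] For anyone wanting to try the same /tmp redirect, it can be expressed as an extra path mapping on the container. This is a sketch only, with an assumed host path (`/mnt/cache/appdata/nextcloud-tmp` is hypothetical); in the Unraid GUI it would be an added "Path" entry on the container template rather than a raw `docker run`.

```shell
# Sketch, not a drop-in command: redirect the container's /tmp onto the
# cache disk. The host path is an assumption; adjust to your own cache share.
mkdir -p /mnt/cache/appdata/nextcloud-tmp
docker run -d --name nextcloud \
  -v /mnt/cache/appdata/nextcloud-tmp:/tmp \
  nextcloud
```

  (Your real template will carry additional port and volume mappings; only the `-v` line is the point here.)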
  2. I'm having a very similar issue. How did you solve it? I never saw a follow-up or response to your post.
  3. I'm currently having this same issue with the Nextcloud docker after upgrading to NC 20.0.4, and I wonder if it's Nextcloud... I didn't have any issues on v17. I've also tried starting with a fresh copy of the Nextcloud docker along with a new MariaDB; that didn't seem to help. Curious if you ever solved this?
  4. Hi, I just started getting this error, which is rather odd since I have 20 GB of RAM. However, I do get random freezes every few weeks; maybe this is the cause? I mirror my syslog file (so it's quite large) to catch these kinds of issues. The server was locked up/frozen when I tried to access it today. Full diagnostics attached. For the first instance, scroll to:
     Dec 10 22:42:43 Blackbox kernel: Out of memory: Kill process 10083 (smbd-notifyd) score 680 or sacrifice child
     Thanks for any and…
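  [Editor's note] When hunting for more OOM-killer events in a large mirrored syslog like this one, a grep/sed one-liner pulls out the victim PID and process name. A sketch, demonstrated on the exact line quoted above (the syslog path on your system is whatever you mirror to):

```shell
# Extract PID and process name from kernel OOM-killer lines.
# Demonstrated on the sample line from the post; against a real file you
# would replace the printf with: grep 'Out of memory' /path/to/syslog
sample='Dec 10 22:42:43 Blackbox kernel: Out of memory: Kill process 10083 (smbd-notifyd) score 680 or sacrifice child'
printf '%s\n' "$sample" \
  | grep -E 'Out of memory: Kill process' \
  | sed -E 's/.*Kill process ([0-9]+) \(([^)]+)\).*/pid=\1 proc=\2/'
# prints: pid=10083 proc=smbd-notifyd
```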
  5. Thanks for getting back to me quickly. Well, that is unfortunate, but it probably makes the most sense. Based on my current setup, I really can't run with the system down (no VMs/dockers) for the extended weeks it might take until this occurs again. Most times it takes weeks or months for it to crop up again. Point being, it's pretty infrequent, which makes this really hard to troubleshoot, and with no log errors or crash dumps, probably impossible to pinpoint the piece of hardware. Actually, just about the time I forget the last lockup, it does it again.
  6. Hi all, I've been chasing this issue for some time now, even through several versions of the OS, so it's not something new in 6.8.3 (the current OS). Unraid will randomly just lock up, freeze, and become unresponsive. I tried using a COM/serial port so I could monitor/access the system, and that locks up too: no console/keyboard access and no HTTP either. Finally, with the mirror option, I was able to get a full syslog when this occurred. Unfortunately, the log doesn't show any issue at the time of the lockup. Full diagnostics are attached. The lockup occurred some time after Aug 7…
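  [Editor's note] For anyone else trying the serial-console route: the kernel only writes its messages to the serial port if told so at boot. A sketch of the relevant append line in Unraid's `syslinux.cfg` on the flash drive (the port `ttyS0` and baud rate `115200` are assumptions; match them to your hardware and terminal settings):

```
# /boot/syslinux/syslinux.cfg (fragment, sketch only)
label Unraid OS
  kernel /bzimage
  append initrd=/bzroot console=ttyS0,115200n8 console=tty0
```

  With this in place, late kernel output (panics, hung-task traces) that never reaches the syslog can still appear on the serial line.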
  7. Thanks -- it's a breakout cable from an LSI HBA controller, so not as simple as a single SATA cable replacement. Not sure what you mean by both cables? I'll re-seat all cables and then rebuild. If the issue comes back, I'll replace that entire breakout cable. Thanks again for your prompt review!
  8. Full diagnostics attached. Also, the extended SMART test passed with no issues.
  9. Disk 1 logged read errors and is now disabled. Here's the snippet from the logs; if you need full diagnostics, I'm happy to post them. I ran a short SMART test and it came back "Completed without error". Going to kick off an extended SMART test overnight. No SMART errors or issues prior to this.
     Jun 15 07:41:57 Tower kernel: mdcmd (99): spindown 2
     Jun 15 07:41:59 Tower kernel: mdcmd (100): spindown 5
     Jun 15 07:42:04 Tower kernel: mdcmd (101): spindown 0
     Jun 15 07:42:05 Tower kernel: mdcmd (102): spindown 6
     Jun 15 10:48:30 Tower kernel: mdcmd (103): spindown 2
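  [Editor's note] The short/extended tests mentioned here can be driven from the command line with smartctl (standard smartmontools usage). The self-test log line shown below is an illustrative sample, not taken from this server's report:

```shell
# Standard smartmontools commands (run against the real device, e.g. /dev/sdf):
#   smartctl -t short /dev/sdf      # short self-test, a few minutes
#   smartctl -t long  /dev/sdf      # extended self-test, several hours
#   smartctl -l selftest /dev/sdf   # show the self-test log when finished
# Demonstration: check the pass/fail status in a sample self-test log line.
line='# 1  Extended offline    Completed without error       00%      1234         -'
printf '%s\n' "$line" | grep -o 'Completed without error'
# prints: Completed without error
```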
  10. Thanks, I appreciate the explanation of Raw_Read_Error_Rate. It is frustrating that unRAID hasn't come up with a better way to preserve the syslogs, like also writing them to a cache drive. Have you come across any way to do this other than a syslog server (which I'm thinking about)? As I mentioned, I copied the emulated data to other drives on the array, just in case. I've added the failed disk back to the array and it is currently rebuilding. Thanks again for your prompt response.
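  [Editor's note] On the syslog-server idea: recent Unraid releases include a built-in page (Settings > Syslog Server) with mirror-to-flash and remote-server options, but a plain rsyslog forward to another always-on box works too. A sketch, assuming a receiver at 192.168.1.10 (hypothetical address):

```
# Client side (the box whose logs you want preserved): forward everything
# to a remote collector over UDP port 514.
*.* @192.168.1.10:514

# Receiver side (/etc/rsyslog.conf on the collector): enable the UDP input.
module(load="imudp")
input(type="imudp" port="514")
```

  Because the forward happens as each line is logged, messages leading up to a hard lockup survive on the remote box even when the local filesystem never gets them flushed.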
  11. Hi all, well, I joined the failed-drive club this week. I've read numerous posts on how to proceed, so I'll post my server logs/diagnostics and ask if someone with more experience could review them and provide any additional advice. The failed drive is #3, ST4000DM004-2CV104_ZFN00H90 - 4 TB (sdf). Yes, it's a Seagate... I've read plenty of bad things about these already, BUT they have been working fine. Note that the Raw_Read_Error_Rate is pretty high in the SMART logs. However, there doesn't seem to be a definitive answer on whether this value is "bad" or suggests…
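  [Editor's note] One commonly cited caveat, worth verifying for this exact model: on many Seagate drives the raw Raw_Read_Error_Rate value is a packed counter, with the low 32 bits counting operations and the upper bits counting actual errors, so a huge raw number is often normal. A sketch of splitting a raw value on that assumption (the sample value is made up, not from this drive):

```shell
# Split a Seagate-style packed raw value (assumption: error count in the
# high bits, operation count in the low 32 bits). Sample value only.
raw=123456789
errors=$(( raw >> 32 ))
ops=$(( raw & 0xFFFFFFFF ))
echo "errors=$errors ops=$ops"
# prints: errors=0 ops=123456789
```

  If the high bits come out zero, the scary-looking raw number is just the operation count, which matches the "drive has been working fine" observation.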
  12. I just got my first call trace error. Diagnostics attached. It looks NIC-related based on previous posts. Please confirm and let me know if there is anything I can do.