About Majawat

  • Birthday July 6



  1. I figured out the cause of the server shutting off: a burnt 24-pin connector. Not sure whether it's the power supply's or the motherboard's fault, but time for some replacement parts; at least the motherboard and power supply, it seems. I wonder if I should replace them with some newer hardware, and if so, which...
  2. Is that the one from PassMark or the open source one? Or does it matter?
  3. The performance issues happen without a parity check occurring, but I'm guessing too much other stuff is occurring too. But I agree, I need to move the system share to SSD. That's the "Prefer" cache setting, correct? I also now plan on getting a few extra SSDs and building a docker/VM cache separate from the data ingestion cache. Thanks everyone
  4. No, dual Xeons. Interestingly, it does at least show the full capacity there: 48 GB.
  5. Ok, couldn't sleep, so I stopped the memtest and got the syslog and timestamps. Pings showing when it went down and came back:

5:27:14.78 Reply from bytes=32 time<1ms TTL=64
5:27:15.81 Reply from bytes=32 time<1ms TTL=64
5:27:16.82 Reply from bytes=32 time<1ms TTL=64
5:27:17.85 Reply from bytes=32 time<1ms TTL=64
5:27:22.80 Request timed out.
5:27:32.80 Request timed out.
... (truncated)
5:29:43.29 Request timed out.
5:29:45.32 Request timed out.
5:29:47.35 Reply from bytes=32 time<1ms TTL=64
5:29:49.38 Reply from bytes=32 time=2ms TTL=64
5:29:51.41 Reply from bytes=32 time<1ms TTL=64
5:29:53.44 Reply from bytes=32 time<1ms TTL=64
5:29:55.47 Reply from bytes=32 time<1ms TTL=64

Attached is the syslog. At the time of the crash, all I was doing was navigating from the Dashboard to the Main tab. No Docker containers or VMs were started, and no parity check was running.

Apr 28 05:22:33 Hathor rsyslogd: [origin software="rsyslogd" swVersion="8.2002.0" x-pid="14293" x-info=""] start
Apr 28 05:25:37 Hathor ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 28 05:30:23 Hathor root: Delaying execution of fix common problems scan for 10 minutes
Apr 28 05:30:23 Hathor unassigned.devices: Mounting 'Auto Mount' Devices...
Apr 28 05:30:23 Hathor emhttpd: Starting services...
Apr 28 05:30:23 Hathor emhttpd: shcmd (81): /etc/rc.d/rc.samba restart

It shows no logs immediately prior to that crash. Here are my syslog settings. As I was gathering this information, it crashed again despite the system not being in use. I'm restarting the memtest, but for some reason it only shows 3 slots? syslog-
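As an aside, a ping excerpt like the one above can be turned into a downtime figure with a few lines of scripting. Here's a minimal sketch; the `outage_window` helper is hypothetical (not an Unraid or forum tool), and the samples are a hand-copied subset of the log:

```python
from datetime import datetime

# Hypothetical helper: given (timestamp, got_reply) ping samples like the
# excerpt above, report the last good reply and how long the host was down.
def outage_window(samples):
    last_ok = down_at = first_back = None
    for ts, ok in samples:
        t = datetime.strptime(ts, "%H:%M:%S.%f")
        if not ok and down_at is None:
            down_at = last_ok          # host just went quiet
        if ok and down_at is not None and first_back is None:
            first_back = t             # first reply after the gap
        if ok:
            last_ok = t
    if down_at and first_back:
        return down_at.strftime("%H:%M:%S"), (first_back - down_at).total_seconds()
    return None

# A few samples hand-copied from the log above:
samples = [
    ("5:27:17.85", True),   # last reply before the gap
    ("5:27:22.80", False),  # first timeout
    ("5:29:45.32", False),  # last timeout
    ("5:29:47.35", True),   # first reply after the gap
]
print(outage_window(samples))  # → ('05:27:17', 149.5)
```

So on those numbers the box was unreachable for roughly two and a half minutes, which lines up with a hard reset rather than a brief network blip.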
  6. Oooh, I understand now. My understanding was the diagnostics grabbed the syslogs created by that setting. I get that it's a different file now. I'll post it in the morning (3am now). I'm also running a memtest now. Thank you for your patience
  7. I mean the whole server stops and restarts all on its own. A non-graceful shutdown. Then it comes back up, starts a parity check, and I download the diags. Almost like a blue screen in Windows, but I don't see anything like that screen here.
  8. So it's primarily the speed of the mechanical array not being able to keep up with the data requirements, and I could help this out by having a large SSD/NVMe cache pool and moving my dockers/VMs to a separate SSD/NVMe cache pool?
  9. Any way I can "fix" this? Is it just that my processors aren't fast enough? Something else? How can I find out what resource is limiting it? I don't think I see the whole CPUs pegged, just a number of threads. For example, the picture above shows only 47% used. Is that really enough to cause this issue? I really want to be able to have multiple copy and download jobs occurring at the same time to make full use of my system.
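On the 47% question: an aggregate CPU figure averages over every thread, so a handful of pegged threads can stall single-threaded work while the overall number still looks moderate. A minimal sketch with invented per-thread numbers for a 24-thread dual-Xeon box (none of these values come from the diagnostics):

```python
# Made-up per-thread utilization: a few pegged threads can bottleneck
# single-threaded work even though the aggregate figure looks moderate.
per_thread = [100] * 5 + [75] + [30] * 18   # 24 threads total

aggregate = sum(per_thread) / len(per_thread)
pegged = [i for i, util in enumerate(per_thread) if util >= 95]

print(f"aggregate: {aggregate:.0f}%")  # → aggregate: 46%  (what the Dashboard shows)
print(f"pegged threads: {pegged}")     # → pegged threads: [0, 1, 2, 3, 4]  (what htop surfaces)
```

This is why a Dashboard reading near 47% and an htop view showing individual threads at 100% can both be "correct" at the same time.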
  10. Yes, some dockers access the array. The Deluge and sabNZBD jobs primarily download to the array, and sometimes I use the VMs to write or edit on the array as well.
  11. I turned on two Docker containers to help me test another issue, and everything was OK for a while there. Then I turned on a single VM (not the new-ish one; one I've had for a long time), and pretty quickly got a crash. Specifically, I turned on the VM called Hraf. Though what's the likelihood that I'd choose the one VM with an issue? I'm guessing it's more that using any VM at all is causing the crashing... I'll try other ones and see what happens. Edit: It crashed with just a file copy job going, nothing else running; no VMs, no Dockers. Though I think a parity check was going. I'm going to restart in Safe Mode and see what happens.
  12. Every so often, unRAID will have an automated process do a large data movement: a torrent finishing up and moving to its final location, an NZB unpacking a large file, etc. Sometimes it'll be me copying data from another machine to unRAID. At least, I think that's the cause of the issue. The issue is that all my Docker containers (and maybe even VMs) will slow down to a crawl, to the point where they're effectively unusable. I've got monitoring from HetrixTools pinging the services, and it shows me this happens several times a day. All of those yellow and red ! marks are this issue occurring. I'll look at the CPU usage and it'll show several of the threads pegged at 100%. When I look at htop to see what it is, usually a VM is the highest process. However, for this test I had all my VMs turned off and only my Binhex-Deluge and NginxProxyManager Docker containers running, to verify the issue still occurred with a 'light' workload.

For example, here's a test I did to hopefully gather more info. Instead of everything running, I only had the Binhex-Deluge and NginxProxyManager Docker containers running, and I was copying some video files to a user share from my other desktop PC. The copy ran anywhere between just 5 MB/s and 60 MB/s, which I feel is crazy slow. There also happened to be a parity check running (because unRAID keeps crashing; probably unrelated, since this was happening prior to the crashing), but that doesn't seem to matter usually. Deluge kept losing connection to the webserver over and over again, and refreshing the page to reconnect took forever.

What I don't know here is:

  • Why doesn't htop match the Dashboard? Which one is correct?
  • Why does everything come to a crawl? The unRAID interface itself seems fine, just the stuff hosted on it.
  • Is it just the NginxProxyManager container not being able to handle the traffic to the internal services? I don't think so, because the unavailability occurs even when using local IPs.
  • Is the CPU actually being pegged hard enough to not allow web traffic to the internal services?
  • Is it something completely unrelated, and the CPU stuff is a red herring? Like, is it something to do with the NIC? Some other setting? I do have my two NICs link-aggregated with 802.3ad, but I didn't have this problem when I first set that up.
  • Why is it happening with almost everything turned off?
  • Is there some hardware issue? I don't think I'm too low-specced for what I'm trying to do, but maybe?

Honestly, I'm just at a loss as to why everything slows down so much during what I feel are basic file copies. Diagnostics run during the issue are attached. Thank you for reading. Please let me know if you need any more information.

Solution: Moving the Docker containers and the VMs onto their own SSD cache pools, away from the data ingestion cache pool. I haven't technically attempted this as I had a hardware failure, but I feel it's the right answer.
  13. I turned off all my Docker containers and my VMs, and it hasn't crashed in a while now. I'm going to slowly turn them on one at a time and see which one is doing it. I have a feeling it's my new-ish W7 VM, which I hope not.
  14. But I did have a crash before I grabbed the diags. I've had a few since then; here's another diag file taken just after another crash, right after it came back up and started another parity check.
  15. Here are new logs taken after turning on "Mirror syslog to flash", following the next crash.