Majawat

Members
  • Content Count: 23
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Majawat
  • Rank: Newbie
  • Birthday: July 6
  • Gender: Male
  1. I figured out the cause of the server shutting off: https://imgur.com/a/RGSmkJz. Burnt 24-pin connector. Not sure if it's the power supply or the motherboard's fault, but it's time for some replacement parts. At least the motherboard and power supply, it seems. Wonder if I should replace it with some newer hardware, and if so, which...
  2. Is that the one from PassMark or the open source one? Or does it matter?
  3. The performance issues happen without a parity check occurring, but I'm guessing too much other stuff is going on at the same time too. But I agree, I need to move the system share to SSD. That is the Prefer cache setting, correct? I also now plan on getting a few extra SSDs and building a docker/VM cache pool separate from the data-ingestion cache. Thanks everyone
  4. No, dual Xeons. Interestingly, it does at least show the full capacity there: 48 GB
  5. Ok, couldn't sleep, so I stopped the memtest and got the syslog and timestamps. Pings showing when it went down and when it came back (a sketch of this kind of timestamped reachability logging follows after this list):
     5:27:14.78  Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:15.81  Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:16.82  Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:17.85  Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:22.80  Request timed out.
     5:27:32.80  Request timed out.
     ... (truncated)
     5:29:43.29  Request timed out.
     5:29:45.32  Request timed out.
     5:29:47.35  Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
  6. Oooh, I understand now. My understanding was the diagnostics grabbed the syslogs created by that setting. I get that it's a different file now. I'll post it in the morning (3am now). I'm also running a memtest now. Thank you for your patience
  7. I mean the whole server stops and restarts all on its own. A non-graceful shutdown. Then it comes back up, starts a parity check, and I download the diags. Almost like a blue screen in Windows, but I don't see anything like that screen here.
  8. So it's primarily the speed of the mechanical array not being able to keep up with the data requirements, and I could help this by having a large SSD/NVMe cache pool and moving my dockers/VMs to a separate SSD/NVMe cache pool? (A quick write-throughput comparison sketch follows after this list.)
  9. Any way I can "fix" this? Is it just that my processors aren't fast enough? Something else? How can I find out what resource is limiting it? I don't see the whole CPUs as pegged, just a number of threads; for example, the picture above shows only 47% used. Is that really enough to cause this issue? I really want to be able to have multiple copy and download jobs occurring at the same time to make full use of my system. (A per-core sampling sketch follows after this list.)
  10. Yes, some dockers access the array. Deluge and SABnzbd primarily download to the array. And sometimes I use the VMs to write or edit on the array as well.
  11. I turned on two docker containers to help me test another issue: https://forums.unraid.net/topic/107528-docker-containers-become-slowunusable-during-large-data-movement/ Then everything was OK for a while. Then I turned on a single VM (not the new-ish one, one I've had for a long time), and pretty quickly got a crash. Specifically, I turned on the VM called Hraf. Though what's the likelihood that I'd choose the one VM with an issue? I'm guessing it's more that using any VM at all is causing the crashing... I'll try other ones and see what happens.
  12. Every so often, unRAID will have an automated process do a large data movement: a torrent finishing up and moving to its final location, an NZB unpacking a large file, etc. Sometimes it'll be me copying data from another machine to unRAID. At least, I think that's the cause of the issue. The issue is that all my docker containers (and maybe even VMs) slow down to a crawl, to the point where they're effectively unusable. I've got monitoring from HetrixTools pinging the services, and it shows me this happens several times a day. All of those yellow and
  13. I turned off all my Docker containers and my VMs, and it hasn't crashed in a while now. I'm going to slowly turn them back on one at a time and see which one is doing it. I have a feeling it's my new-ish W7 VM, which I hope isn't the case.
  14. But I did have a crash before I grabbed the diags. I've had a few more since then; here's another diag file taken just after another crash, right after it came back up and started another parity check. hathor-diagnostics-20210427-1326.zip
  15. Here are new logs after turning on Mirror syslog to flash, taken after the next crash. hathor-diagnostics-20210427-1109.zip
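
Sketch for item 5: a minimal Python script for the kind of timestamped reachability logging shown in the ping output. It assumes the server sits at 192.168.9.10 (taken from the ping output) and answers on TCP port 80 for its web UI (an assumption); it uses a TCP connect instead of ICMP ping, so it runs without admin rights.

# Timestamped reachability logger (sketch). Assumes the Unraid box is at
# 192.168.9.10 and its web UI listens on TCP port 80 -- adjust both as needed.
# Uses a TCP connect instead of ICMP ping so no root/admin rights are required.
import socket
import time
from datetime import datetime

HOST = "192.168.9.10"   # assumed server address (from the ping output above)
PORT = 80               # assumed web UI port
INTERVAL = 1.0          # seconds between checks

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        stamp = datetime.now().strftime("%H:%M:%S.%f")[:-4]
        status = "reply" if reachable(HOST, PORT) else "timed out"
        print(f"{stamp}  {HOST}:{PORT}  {status}")
        time.sleep(INTERVAL)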
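
Sketch for item 8: one way to put numbers on "the mechanical array can't keep up" is to time a large sequential write to an array path and to a cache path and compare MB/s. This is a rough sketch; the two paths below are hypothetical examples, and the write size is arbitrary.

# Sequential-write throughput comparison (sketch). The paths below are
# hypothetical examples -- swap in a real array share and a real cache share.
import os
import time

PATHS = {
    "array": "/mnt/user0/testshare/throughput.tmp",   # hypothetical array-only path
    "cache": "/mnt/cache/throughput.tmp",             # hypothetical cache/SSD path
}
SIZE_MB = 2048          # total amount to write per test
CHUNK = 1024 * 1024     # reuse one random 1 MiB buffer

def write_test(path: str) -> float:
    """Write SIZE_MB of data to path and return throughput in MB/s."""
    data = os.urandom(CHUNK)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())    # make sure the data actually hit the device
    elapsed = time.monotonic() - start
    os.remove(path)
    return SIZE_MB / elapsed

if __name__ == "__main__":
    for name, path in PATHS.items():
        try:
            print(f"{name}: {write_test(path):.0f} MB/s")
        except OSError as exc:
            print(f"{name}: skipped ({exc})")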
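
Sketch for item 9: an overall CPU reading of 47% can still mean a handful of cores are pegged. A rough way to check is to sample per-core utilization for a few seconds while a transfer is running. Assumes the psutil package is installed (pip install psutil); the threshold and sample count are arbitrary.

# Per-core CPU sampler (sketch). Requires psutil: pip install psutil
# Prints cores that sit above a threshold, which is what "a number of threads
# pegged while the total reads ~47%" would look like.
import psutil

SAMPLES = 10        # how many one-second samples to take
THRESHOLD = 90.0    # percent; treat a core above this as "pegged"

for i in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    pegged = [f"core{idx}={pct:.0f}%"
              for idx, pct in enumerate(per_core) if pct >= THRESHOLD]
    total = sum(per_core) / len(per_core)
    print(f"sample {i+1:2d}: total {total:5.1f}%  pegged: {', '.join(pegged) or 'none'}")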