Majawat

Everything posted by Majawat

  1. I figured out the cause of the server shutting off: https://imgur.com/a/RGSmkJz. A burnt 24-pin connector. Not sure if it's the power supply's or the motherboard's fault, but time for some replacement parts; at least the motherboard and power supply, it seems. I wonder if I should replace it with some newer hardware, and if so, which...
  2. Is that the one from PassMark or the open source one? Or does it matter?
  3. The performance issues happen without a parity check occurring, but I'm guessing too much other stuff is occurring too. But I agree, I need to move the system share to SSD. That is the Prefer Cache setting, correct? I also now plan on getting a few extra SSDs and building a Docker/VM cache separate from the data ingestion cache. Thanks everyone
  4. No, dual Xeons. Interestingly, it does at least show the full capacity there: 48GB
  5. Ok, couldn't sleep, so I stopped the memtest and got the syslog and timestamps. Pings showing when it went down and came back:
     5:27:14.78 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:15.81 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:16.82 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:17.85 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:27:22.80 Request timed out.
     5:27:32.80 Request timed out.
     ... (truncated)
     5:29:43.29 Request timed out.
     5:29:45.32 Request timed out.
     5:29:47.35 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:29:49.38 Reply from 192.168.9.10: bytes=32 time=2ms TTL=64
     5:29:51.41 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:29:53.44 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     5:29:55.47 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64
     Attached is the syslog. At the time of the crash, all I was doing was navigating from the Dashboard to the Main tab. No Docker containers or VMs were started, and no parity check was running.
     Apr 28 05:22:33 Hathor rsyslogd: [origin software="rsyslogd" swVersion="8.2002.0" x-pid="14293" x-info="https://www.rsyslog.com"] start
     Apr 28 05:25:37 Hathor ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
     Apr 28 05:30:23 Hathor root: Delaying execution of fix common problems scan for 10 minutes
     Apr 28 05:30:23 Hathor unassigned.devices: Mounting 'Auto Mount' Devices...
     Apr 28 05:30:23 Hathor emhttpd: Starting services...
     Apr 28 05:30:23 Hathor emhttpd: shcmd (81): /etc/rc.d/rc.samba restart
     It shows no logs immediately prior to the crash. Here are my syslog settings. As I was gathering this information, it crashed again despite my not using the system. I'm restarting the memtest, but for some reason it only shows 3 slots? syslog-192.168.9.10.log
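Reading outage windows out of raw ping output like the above gets tedious across multiple crashes. A small parser can extract the start, end, and duration of each run of timeouts; a minimal sketch in Python, assuming the timestamped `H:MM:SS.ff` log format shown above (the exact layout is an assumption about how the pings were captured):

```python
import re
from datetime import datetime

def outage_windows(lines):
    """Return (first_timeout, last_timeout) pairs for each run of
    consecutive 'Request timed out.' lines in a timestamped ping log."""
    windows = []
    start = last = None
    for line in lines:
        m = re.match(r"(\d+:\d+:\d+\.\d+)\s+(.*)", line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%H:%M:%S.%f")
        if "Request timed out" in m.group(2):
            if start is None:
                start = ts          # outage begins
            last = ts
        elif start is not None:
            windows.append((start, last))  # a reply closed the outage
            start = None
    if start is not None:                  # log ended mid-outage
        windows.append((start, last))
    return windows

# Sample lines condensed from the log above
log = [
    "5:27:17.85 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64",
    "5:27:22.80 Request timed out.",
    "5:29:45.32 Request timed out.",
    "5:29:47.35 Reply from 192.168.9.10: bytes=32 time<1ms TTL=64",
]
for start, end in outage_windows(log):
    secs = (end - start).total_seconds()
    print(f"down {start.time()} -> {end.time()} (~{secs:.0f}s)")
```

On the excerpt above this reports a single outage of roughly two and a half minutes, which lines up with the reboot-plus-services-start sequence in the syslog.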
  6. Oooh, I understand now. My understanding was the diagnostics grabbed the syslogs created by that setting. I get that it's a different file now. I'll post it in the morning (3am now). I'm also running a memtest now. Thank you for your patience
  7. I mean the whole server stops and restarts all on its own. A non-graceful shutdown. Then it comes back up, starts a parity check, and I download the diags. Almost like a blue screen in Windows, but I don't see anything like that screen here.
  8. So it's primarily the speed of the mechanical array not being able to keep up with the data requirements and I could help this out by having a large SSD/NVME cache pool and moving my dockers/VMs to a separate SSD/NVME cache pool?
  9. Is there any way I can "fix" this? Is it just that my processors aren't fast enough? Something else? How can I find out what resource is limiting it? I don't think I see the whole CPU pegged, just a number of threads. For example, the picture above shows only 47% used. Is that really enough to cause this issue? I really want to be able to have multiple copy and download jobs occurring at the same time to make full use of my system
  10. Yes, some Docker containers access the array. Deluge and SABnzbd primarily download to the array, and sometimes I use the VMs to write or edit on the array as well.
  11. I turned on two Docker containers to help me test another issue: https://forums.unraid.net/topic/107528-docker-containers-become-slowunusable-during-large-data-movement/ Then everything was OK for a while there. Then I turned on a single VM (not the new-ish one; one I've had for a long time), and pretty quickly got a crash. Specifically, I turned on the VM called Hraf. Though what's the likelihood that I'd choose the one VM with an issue? I'm guessing it's more that using any VM at all is causing the crashing... I'll try other ones and see what happens. Edit: It crashed with just a file copy job going, nothing else running; no VMs, no Docker containers. Though I think a parity check was going. I'm going to restart in Safe Mode and see what happens.
  12. Every so often, unRAID will have an automated process do a large data movement: a torrent finishing up and moving to its final location, an NZB unpacking a large file, etc. Sometimes it'll be me copying data from another machine to unRAID. At least I think that's the cause of the issue. The issue itself is that all my Docker containers (and maybe even VMs) slow down to a crawl, to the point where they're effectively unusable. I've got monitoring from HetrixTools pinging the services, and it shows me this happens several times a day. All of those yellow and red ! marks are this issue occurring. I'll look at the CPU usage and it'll show several of the threads pegged at 100%. When I look at htop to see what it is, usually a VM is the highest process. However, for this test I had all my VMs turned off and only my Binhex-Deluge and NginxProxyManager Docker containers running, to verify the issue still occurred with a 'light' workload.
     Here's a test I did to hopefully gather more info. With only those two containers running, I copied some video files to a user share from my other desktop PC. The copy ran anywhere between just 5MB/s and 60MB/s, which I feel is crazy slow? There happened to be a parity check running (because unRAID keeps crashing; probably unrelated, since this was happening prior to the crashing), but that doesn't seem to matter usually. Deluge kept losing connection to its webserver over and over again, and refreshing the page to reconnect took forever. What I don't know here is:
       • Why doesn't htop match the Dashboard? Which one is correct?
       • Why does everything come to a crawl? The unRAID interface itself seems fine, just the stuff hosted on it.
       • Is it just the NginxProxyManager container not being able to handle the traffic to the internal services? I don't think so, because the unavailability occurs even when using local IPs.
       • Is the CPU actually being pegged hard enough to not allow web traffic to the internal services?
       • Is it something completely unrelated, and the CPU stuff is a red herring? Like, is it something to do with the NIC? Some other setting? I do have my two NICs link-aggregated with 802.3ad, but I didn't have this problem when I first set that up.
       • Why is it happening with almost everything turned off?
       • Is there some hardware issue? I don't think I'm too low-specced for what I'm trying to do, but maybe?
     Honestly, I'm just at a loss as to why everything slows down so much during what I feel are basic file copies. Diagnostics run during the issue are attached. Thank you for reading. Please let me know if you need any more information. hathor-diagnostics-20210427-1809.zip
     Solution: moving the Docker containers and the VMs onto their own SSD cache pools, away from the data ingestion cache pool. I haven't technically attempted this as I had a hardware failure, but I feel it's the right answer.
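For anyone landing here with the same symptoms: on Unraid 6.9+, which supports multiple named pools, the per-share cache behavior lives in small config files on the flash drive. The fragment below is a sketch from memory of what the split-pool setup could look like; the file paths, key names, and especially the pool names `cache_apps` and `cache_dl` are assumptions to illustrate the idea, so verify against the files in your own `/boot/config/shares/` (or just use the share settings in the web UI, which is the supported way to change this).

```
# /boot/config/shares/system.cfg  (keys from memory -- verify on your system)
shareUseCache="prefer"       # keep the system/appdata share resident on the pool,
                             # so Docker images and VM vdisks run at SSD speed
shareCachePool="cache_apps"  # hypothetical pool dedicated to Docker/VMs

# /boot/config/shares/downloads.cfg
shareUseCache="yes"          # stage incoming writes on the pool; the mover
                             # flushes them to the mechanical array later
shareCachePool="cache_dl"    # hypothetical pool dedicated to data ingestion
```

The point of the split is that a big download or unpack hammering the ingestion pool can no longer starve the I/O of the pool that Docker and the VMs live on.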
  13. I turned off all my Docker containers and my VMs, and it hasn't crashed in a while now. I'm going to slowly turn on one at a time and see what thing is doing it. I have a feeling it's my new-ish W7 VM, which I hope not.
  14. But I did have a crash before I grabbed the diags. I've had a few since then; here's another diag file taken just after another crash, right after it came back up and started another parity check. hathor-diagnostics-20210427-1326.zip
  15. Here are new logs after turning on Mirror syslog to flash, after next crash. hathor-diagnostics-20210427-1109.zip
  16. Honestly, I have no idea what's going on. But recently, I'm having unRAID crashes. I was on 6.9.1 with some crashes, then updated to 6.9.2; it happened in 6.9.1 and now in 6.9.2 as well. Attached are my diags after such a crash. And from what I barely understand, I don't have any Docker containers with custom IPs, so it's not a macvlan issue. Please correct me if I'm wrong on these assumptions. Unfortunately I really only find out after the crash, when I get a notification that a parity check has started. As a result, I'm not really sure what's going on. I'm also not physically near my hardware; it's at my brother's house (better internet), but I can IPMI into the box. And of course, I'm not quite knowledgeable about all this stuff, so please let me know if you need more information or anything. Much appreciated! hathor-diagnostics-20210427-1015.zip
  17. I had this issue before. Here's the link, and diags are attached. Disk 5 is disabled, and says it's emulated. Again. Last time, I unplugged and replugged power and data cables. The cable is a SAS to 4 SATA cable. After doing so, the array rebuilt just fine and all was good for a couple months, but now I'm back. Is it time for a new drive? If not, what should I do? hathor-diagnostics-20200329-1757.zip
  18. I had this same problem with Disk 5 previously: I was able to get it back online via the helpful instructions by Constructor, shown here: https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive I do plan on performing those steps to get this drive back up. It's also one of my older drives. But I have a few questions: Should I be worried that a second drive has gone into the disabled (DSBL) state? What can I do to prevent this happening to these or other drives? What caused this drive and the previous one to become disabled in the first place? Is there anything else I should know before continuing? Diagnostics attached. hathor-diagnostics-20191112-1301.zip
  19. Ah! Gotcha. Ah, had to look, it's my PRTG server. I can turn off the SSH check; it wasn't super helpful. I can never find the cache settings. Where is the setting to fix the cache-yes? Which should it be? What are all the steps to rebuild disk 5 to itself? Thanks for your help!
  20. Yep! I have several Docker containers sitting on the box, like an nginx reverse proxy directing to other containers and servers on the network. But I have no ports forwarded to the ports of the unRAID OS itself, nor do I have it in the router's DMZ.
  21. Ok, forgive me, I'm not sure of any of that information. Still new to something like this. How do I tell if it is or not? Right next to that disk in the Main tab it just has a red X. Not sure of the answer to that, nor how to find out or how to resolve it.
  22. Got two notifications today. Attached are my diagnostics. I haven't done anything to it yet; I don't want to potentially do anything bad. I don't have an extra drive lying around, so if this one is toast, I'm out buying a new one today. hathor-diagnostics-20190920-2200.zip