Posts posted by flyize
-
2 hours ago, JorgeB said:
Nothing obvious in the partial log posted, I would recommend posting the complete syslog, some issues are known to leave call traces days before crashing.
Went ahead and removed email addresses. Here it is. Thank you so much for any help.
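For anyone else scrubbing a syslog before posting, a single sed pass catches most email addresses. This is a rough sketch: the sample log line is invented, and in practice you'd point the command at your saved syslog file instead of echoing a string.

```shell
# Replace anything shaped like an email address with <redacted>.
# The input line here is a made-up example; for a real file you would run
# something like: sed -E 's/.../<redacted>/g' syslog > syslog.clean
echo 'Jul 19 08:53:30 Truffle sSMTP: sent mail to admin@example.com' \
  | sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/<redacted>/g'
```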
-
2 minutes ago, JorgeB said:
Nothing obvious in the partial log posted, I would recommend posting the complete syslog, some issues are known to leave call traces days before crashing.
Is there any PII I should remove?
-
Server is down again. Please can anyone help?
-
So I think I resolved the master browser issue by letting Home Assistant handle it (since Unraid irritatingly has no way to control it). It's been almost 24 hours and I'm crossing my fingers, though it seems really unlikely that this fixed it.
-
I saw that too. Never noticed those errors before. I'll get that fixed. Probably unrelated though, correct?
-
And now Home Assistant is down.
-
It just happened again. I can still ping the server, but can't get to web UI or ssh in. However, my Home Assistant VM is still up and running just fine.
-
When this happened yesterday, I was able to eventually get Pihole to load, which showed load at like 200 and memory at 100%. Server responds to ping and my Home Assistant VM continues to control devices. I set a syslog mirror, so that (final three hours) and diags are attached.
-
I don't want to muddy your thread, but my server is unresponsive after a day or two. Responds to pings, but can't do anything else until I reboot.
-
I just started having this issue as well and have enabled the syslog server to mirror to flash. There are three threads on the first page about very similar issues. Is this a regular issue that you guys deal with, or are we just unlucky?
-
Why is it that these settings aren't maintained through an Unraid upgrade?
-
4 hours ago, taprackpew said:
I ended up buying a hardware controller and removing this from my server. But I’m still curious as to what I may have done wrong. Are you saying I setup the template wrong?
No, the default template is wrong. It has directories that are some weird path like /mnt/extrassds or something. That is not a valid path.
-
I just (re)installed
On 7/26/2023 at 10:19 PM, taprackpew said:
They are the default values of 18043 for https and 18088 for http. When I click the webui button it opens https://192.168.10.5:18043/login , which is correct. However, I get a "cannot connect to server" error page. It's odd for sure, as all of my other apps are perfectly accessible.
It looks like this is because OP left his non-standard pathing in there for the data/log/etc.
-
Can anyone assist? This is really crushing my server.
-
*bump*
I've also been seeing this error a lot with the last two point releases of 6.11. Anyone got any ideas?
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 17815. Increase nchan_max_reserved_memory.
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: *1023248 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /disks
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: shpool alloc failed
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 22486. Increase nchan_max_reserved_memory.
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: *1023263 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /devices
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 14756. Increase nchan_max_reserved_memory.
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023268 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/update2?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /update2
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 17815. Increase nchan_max_reserved_memory.
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023269 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /disks
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 22490. Increase nchan_max_reserved_memory.
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023272 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /devices
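The error text itself names the knob to turn: nchan's reserved shared-memory pool. For anyone who wants to experiment, the directive goes in the http block of the nginx config. This is only a sketch: the 32m value is an arbitrary guess, not a tested recommendation, and on Unraid the nginx config is generated by the OS, so a hand edit may not survive a reboot or upgrade.

```nginx
# Inside the http { } block; the directive name comes straight from the
# error message above. 32m is an example value, not a recommendation.
nchan_max_reserved_memory 32m;
```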
-
I know the drill. No idea why I didn't do that.
That said, I was able to get things working by setting a New Config. I'm now working to remove the offending drives.
-
I'm thinking I can just assign the drives to their respective places. Then do a New Config, and preserve all assignments. Will that work? After that, I'll certainly run a parity check.
-
Wait, they're back but show unassigned. How do I figure out what slot they go in?
edit: Okay, I've verified that they all have the proper data on them. And I found an old diag that shows their position. But when I go to add them to the array, they show a blue icon - which makes me think Unraid thinks they're new drives.
-
Assuming these drives are bad, is there a way to replace them with a single 16TB? If parity does emulation, can I move the contents of those drives?
-
As above. Seems odd that two drives would die at EXACTLY the same time, but of course it's possible. What should be my course of action here?
-
On 4/1/2023 at 9:03 AM, hedrinbc said:
Made update to my github for the template and notified Squid.
Change was to add "--ulimit nofile=4096:8192" (no quotes) to the "Extra Parameters:" field you will see when you toggle to "Advanced View" in your omada-controller template.
Please get back if any issues arise,
-Ben
I had to stop using this container a while back due to poor wifi performance. Any chance this may have been the cause? I'd really rather run it in Docker than my utility VM...
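For context, "--ulimit nofile=4096:8192" maps to docker run's --ulimit flag, which sets the soft and hard open-file limits for the container. A hedged sketch of how to sanity-check the limits (the container name below is just the template's default, and `ulimit` output varies by system):

```shell
# Inside the container the soft limit should report 4096 after the change:
#   docker exec omada-controller sh -c 'ulimit -n'
# The same ulimit check works in any POSIX shell on the host:
sh -c 'ulimit -n'        # current soft limit on open file descriptors
sh -c 'ulimit -H -n'     # current hard limit
```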
-
I'm now seeing uncorrectable errors on this pool during a scrub. I assume that means I need to reformat. As mentioned above, do I just need to click 'erase'?
-
I'm an idiot. Thanks!
edit: If running out of space is bad, any reason there isn't something other than 0 that goes in there by default?
Server goes unresponsive daily, but still responds to pings
in General Support
Posted · Edited by flyize
My friends and family will kill me.
Wait, I just realized that I added a second NVMe for appdata a couple of weeks ago. It's been totally stable, but maybe that's it. I'll remove it. Then try to run memtest. Anything else I can 'actively' run to try and figure out the issue more quickly?
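In case it helps catch the runaway load before the next lockup, here's a minimal sampling sketch, assuming a Linux /proc filesystem; the log path is an example (on Unraid, somewhere under /boot would survive a hard reboot):

```shell
#!/bin/sh
# Append one line of load-average and free-memory data to a log file.
# Run it in a loop or from cron; pass a persistent path as the first argument.
LOG="${1:-loadmon.log}"   # example default path
printf '%s load=%s mem_avail_kB=%s\n' \
  "$(date '+%b %d %H:%M:%S')" \
  "$(cut -d' ' -f1-3 /proc/loadavg)" \
  "$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)" >> "$LOG"
```

Something like `while :; do sh loadmon.sh /boot/logs/loadmon.log; sleep 60; done &` would then leave a trail of the last readings before a hang.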
edit: Wait, how do I remove the mirrored NVMe?