
flyize

Members
  • Posts: 435
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by flyize

  1. Thanks for any help, guys! Diags attached: truffle-diagnostics-20230724-0921.zip
  2. *bump* I've also been seeing this error a lot with the last two point releases of 6.11. Anyone got any ideas?
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 17815. Increase nchan_max_reserved_memory.
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: *1023248 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /disks
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: shpool alloc failed
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 22486. Increase nchan_max_reserved_memory.
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: *1023263 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 19 08:53:30 Truffle nginx: 2023/07/19 08:53:30 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /devices
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 14756. Increase nchan_max_reserved_memory.
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023268 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/update2?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /update2
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 17815. Increase nchan_max_reserved_memory.
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023269 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /disks
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [crit] 8515#8515: ngx_slab_alloc() failed: no memory
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: shpool alloc failed
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: nchan: Out of shared memory while allocating message of size 22490. Increase nchan_max_reserved_memory.
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: *1023272 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 19 08:53:31 Truffle nginx: 2023/07/19 08:53:31 [error] 8515#8515: MEMSTORE:00: can't create shared message for channel /devices
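     In case it helps anyone hitting the same wall, here is a minimal sketch of what raising that limit might look like. It assumes Unraid's bundled nginx reads /etc/nginx/nginx.conf and that the nchan_max_reserved_memory directive (the one named in the log) goes in the http block; the 512m figure is just an example value, and any hand edit under /etc will probably not survive a reboot, so treat this as an experiment rather than a fix.
     # /etc/nginx/nginx.conf -- assumed location; verify on your install
     http {
         # ...existing directives unchanged...
         nchan_max_reserved_memory 512m;   # example value only
     }
     # reload nginx so the new limit takes effect
     nginx -s reload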
  3. I know the drill. No idea why I didn't do that. That said, I was able to get things working by setting a New Config. I'm now working to remove the offending drives.
  4. I'm thinking I can just assign the drives to their respective places. Then do a New Config, and preserve all assignments. Will that work? After that, I'll certainly run a parity check.
  5. Wait, they're back but show unassigned. How do I figure out what slot they go in? edit: Okay, I've verified that they all have the proper data on them. And I found an old diag that shows their position. But when I go to add them to the array, they show a blue icon - which makes me think Unraid thinks they're new drives.
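     For anyone else in the same spot, one way to match them up (assuming the old diagnostics record each slot's drive serial, which they normally do) is to compare serials on the live system against that list; the commands below are generic and the device names are whatever your system assigns:
     ls -l /dev/disk/by-id/ | grep -v part   # map each drive serial to its /dev/sdX device
     lsblk -o NAME,SIZE,SERIAL,MODEL         # same information in table form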
  6. Assuming these drives are bad, is there a way to replace them with a single 16TB? If parity does emulation, can I move the contents of those drives?
  7. As above. Seems odd that two drives would die at EXACTLY the same time, but of course it's possible. What should be my course of action here?
  8. I had to stop using this container a while back due to poor wifi performance. Any chance this may have been the cause? I'd really rather run it in Docker than in my utility VM...
  9. I'm now seeing uncorrectable errors on this pool during a scrub. I assume that means I need to reformat. As mentioned above, do I just need to click 'erase'?
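     A rough set of checks that might narrow it down before reformatting (a sketch only; the mount point is an assumption, substitute your pool's path):
     btrfs scrub status /mnt/cache_media   # summary of the last scrub, including uncorrectable error count
     btrfs device stats /mnt/cache_media   # per-device read/write/corruption counters
     dmesg | grep -i btrfs                 # kernel log usually names the device that threw the errors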
  10. I'm an idiot. Thanks! edit: If running out of space is bad, is there a reason the default is 0 rather than something else?
  11. I promise that I looked before asking, but I don't see it.
  12. I have the minimum free space set up for the share (set to 100GB, larger than anything I think I'd write to it), but is there a pool-specific setting that I need as well?
  13. The pool (cache_media) shows that it's 1.3TB, so it's currently using the whole thing. I mean, I think. But maybe I screwed something up when I attempted to recreate it? Also, is the pool not able to recover on its own from filling up? This pool is a landing place for new media. My understanding was that if it fills up, things would just write to the array until space opened up, and that eventually the pool would recover on its own. Is that not the case?
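      One thing worth checking, since btrfs can report "full" once all of its space has been allocated to chunks even when the data inside them is smaller (a sketch; the mount point is an assumption):
      btrfs filesystem usage /mnt/cache_media   # allocated vs. actually-used space on the pool
      df -h /mnt/cache_media                    # what the OS reports as free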
  14. I've tried deleting the pool, starting the array, stopping, and re-adding the cache. But that doesn't seem to reformat things. How can I do that? Or is there a better solution to fix my read-only problem? truffle-diagnostics-20230613-0852.zip
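      If the goal is just to make Unraid see the devices as blank so it offers to format them again, one possibility (purely a sketch, destructive, and only after everything needed has been copied off; /dev/sdX1 is a placeholder for the pool member's partition) is to clear the filesystem signatures with the array stopped:
      wipefs -a /dev/sdX1    # wipes the btrfs signature so the device shows up as unformatted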
  15. Thanks. I sorta figured that, but was hoping there was an easier way.
  16. And what if it's not a redundant pool? Do I just pull everything off, delete the pool, replace the drive, then recreate it?
  17. I have an SSD that appears to be failing. I'd like to remove it from the pool before it becomes an issue. How do I clear the offending drive to remove it?
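      For what it's worth, on a multi-device btrfs pool the command-line version looks roughly like this (device and mount point are placeholders, the pool needs enough free space and redundancy to absorb the removal, and Unraid's GUI is the usual route):
      btrfs device remove /dev/sdX1 /mnt/cache   # migrates that device's data off, then drops it from the pool
      btrfs filesystem show /mnt/cache           # confirm the remaining device count afterwards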
  18. That really only explains how to retain recordings. I'm asking if there's an easy way to scroll through a day's worth of videos at once. My example was Nest, because their app is awesome.
  19. No, I would like to be able to scroll through the last 24 hours of a camera feed. Is that easily possible?
  20. Is there an easy way to view the 24/7 recordings? Preferably something inside of Home Assistant (or maybe Frigate itself)? I'm looking to replace my Nest cameras' ability to easily scroll back to a specific date and time. Or do I need external software for this?
  21. I had a very odd problem this weekend (of course while I was traveling). I went to reboot the Unraid server and it didn't come back up. When I got home to troubleshoot, I found that Unraid would only boot sometimes (possibly only when I came out of the BIOS config). I tried multiple USB ports, upgraded the BIOS, and removed a recently added M.2 device (Google Coral). While it somehow seems to be 'fixed' now, the only explanation I can come up with is that maybe the USB drive is dying. Anyone got any better ideas about what went wrong? Flash drives are so cheap that I guess I'll just replace it, but is this something I should be doing every couple of years or so?
  22. I just set up Frigate 12 rc2. Are there any config changes for 12? I can't seem to find any 12-specific documentation.
  23. If you read upthread, I think someone suggested attaching a monitor to it so you can see the error on screen.
  24. Clearly I did not. 🤣 I just followed SI1's video and went about my business. Obviously the wrong way to do it. Thank you.