wayner

Members
  • Posts: 480
  • Joined
  • Last visited
  • Gender: Undisclosed
  • Location: Toronto

wayner's Achievements

Enthusiast (6/14)

Reputation: 16
  1. Now at 3.5 days with no running-out-of-shared-memory issues, so it looks like this was what was causing my problem.
  2. Thanks, I will try that. Is this something that just happens in 6.12? I never saw this under 6.11.5. I upgraded a few weeks ago and I keep running into new bugs/issues in 6.12: BTRFS cache corruption, call trace issues, and now this out-of-memory issue.
  3. And I think that this thread is about the same issue:
  4. I have tried closing all of the unRAID web UI sessions in browsers to see if this causes the problem to go away. So far, so good, but that is only for a day and a half.
  5. You can now move things around on the dashboard more than you used to, but can you have 3 or more columns? I have lots of wasted space, but I still have to scroll down to see some parts of the dashboard. I don't need some of these items to be a monitor screen wide. For example, the Motherboard segment is way too wide. In the Processor segment the load meters could be much narrower. The top-left server segment also has a ton of whitespace.
  6. Me too. I just saw these - my system was recently upgraded to 6.12.4. I don't remember ever seeing these under 6.11.5; if I did, they didn't cause issues, as my system could go for months without problems. Not sure it is causing any major problems in 6.12.4 - at least not yet, other than filling the logs.
  7. I woke up this morning to my logs full of these errors. The system seems to be working fine, and I only have four dockers running. Any idea what is causing these? I did a search of the forums and others have had similar issues, but it isn't clear whether they fixed the problem. However, it looks like it might be related to having web browser SSH sessions open.

     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123245 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/paritymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /paritymonitor
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 237. Increase nchan_max_reserved_memory.
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123246 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/fsState?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /fsState
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123247 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/mymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /mymonitor
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 6081. Increase nchan_max_reserved_memory.
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123249 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /devices
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123250 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/arraymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /arraymonitor
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 3687. Increase nchan_max_reserved_memory.
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123253 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/var?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /var
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 13833. Increase nchan_max_reserved_memory.
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123254 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /disks

     portrush-diagnostics-20230916-1142.zip
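One thing the log excerpt shows is that the publish failures hit every status channel the web UI pushes to (/paritymonitor, /fsState, /devices, /var, etc.), which points at the shared nchan pool being exhausted rather than a single bad channel. A minimal sketch of how to confirm that from the log - the sample lines below are trimmed copies of the entries above; on a live server you would read /var/log/syslog instead:

```python
import re

# Trimmed sample lines from the syslog excerpt above; a real check
# would iterate over open('/var/log/syslog') instead.
lines = [
    'nginx: [error] nchan: error publishing message (HTTP status code 500), request: "POST /pub/paritymonitor?buffer_length=1 HTTP/1.1"',
    'nginx: [error] nchan: error publishing message (HTTP status code 500), request: "POST /pub/fsState?buffer_length=1 HTTP/1.1"',
    'nginx: [error] nchan: error publishing message (HTTP status code 500), request: "POST /pub/devices?buffer_length=1 HTTP/1.1"',
    'nginx: [error] nchan: error publishing message (HTTP status code 500), request: "POST /pub/var?buffer_length=1 HTTP/1.1"',
]

def failed_channels(log_lines):
    """Return the set of nchan pub channels that failed to publish."""
    pat = re.compile(r'POST /pub/(\w+)\?')
    return {m.group(1) for line in log_lines for m in pat.finditer(line)}

print(sorted(failed_channels(lines)))
# -> ['devices', 'fsState', 'paritymonitor', 'var']
```

If the set keeps growing to cover all channels, the problem is pool-wide memory pressure, consistent with the "Increase nchan_max_reserved_memory" hint in the errors.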
  8. The docs say: Can you not also add a disk to a cache pool if the first disk is ZFS?
  9. I have Ubiquiti hardware, including a USG as my router, and I use Unifi gear throughout. So I thought that I needed to do the second fix as well.
  10. Does this seem similar? A bunch of folks have been seeing these errors with BTRFS corruption on a cache drive. One fix is to use a different file system. I have switched my cache drive to XFS, but I may go to ZFS and mirror the cache drive.
  11. Here is the thread - I meant to put this in my earlier post
  12. A bunch of us have had issues with cache drives using BTRFS after upgrading to 6.12.4. I have changed my cache drive to XFS, but I will likely go to ZFS so that I can have mirrored cache drives. I reformatted, and the corruption on my BTRFS cache drive recurred.
  13. I wasn't sure exactly what changes to make, so I hadn't made any yet. I have now changed to ipvlan, set Bridging to No, and kept Bonding at Yes. Dockers were already set to allow custom networks. My dockers are now using bond0 as the custom network interface; I do not have an option to use eth0. Is that OK? Is the reason I don't see the eth0 option that Enable Bonding is set to Yes? If I select No, will the bond0 option disappear and the eth0 option appear? Does it matter what I use for Bonding, since the release notes say either works?
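One way to see which interface and driver each container actually ended up on is to look at `docker network inspect` output. A minimal sketch, using a trimmed, illustrative JSON sample (the network name, container names, and addresses are made up; on the server you would feed in the real `docker network inspect bond0` output):

```python
import json

# Illustrative, trimmed sample of `docker network inspect` JSON output;
# real output has more fields, and the names/IPs here are hypothetical.
sample = '''
[{"Name": "bond0",
  "Driver": "ipvlan",
  "Containers": {
      "abc123": {"Name": "plex",   "IPv4Address": "192.168.1.50/24"},
      "def456": {"Name": "pihole", "IPv4Address": "192.168.1.51/24"}}}]
'''

def container_networks(inspect_json):
    """Map container name -> (network name, driver, IPv4 address)."""
    result = {}
    for net in json.loads(inspect_json):
        for c in net.get("Containers", {}).values():
            result[c["Name"]] = (net["Name"], net["Driver"], c["IPv4Address"])
    return result

for name, info in sorted(container_networks(sample).items()):
    print(name, info)
```

If the driver shows as ipvlan on the bond0 network, the containers picked up the new setting; whether the parent is bond0 or eth0 just tracks the Enable Bonding toggle.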
  14. Here is my diagnostics file if that helps portrush-diagnostics-20230913-1758.zip
  15. OK, thanks. Perhaps FCP is warning me about old errors that are no longer present since I upgraded to 6.12.4? But my system has only been up and running for 16 hours, and FCP gives me five occurrences of "macvlan call traces found", which would indicate errors. Yet I don't see anything in my log about macvlan call traces other than those Call Trace issues shown above or entries generated by FCP. And they show up in red.
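Since FCP's count and what's visible in the log don't line up, one sanity check is to count the kernel-level macvlan trace lines directly while ignoring FCP's own warning entries. A minimal sketch over illustrative log lines (the exact line formats here are assumptions; a real check would read /var/log/syslog):

```python
# Illustrative log lines; the FCP warning wording is a guess at the
# format, and a real check would read /var/log/syslog line by line.
log = """\
Sep 13 02:11:04 Portrush kernel: Call Trace:
Sep 13 02:11:04 Portrush kernel:  macvlan_broadcast+0x10a/0x150 [macvlan]
Sep 13 09:40:11 Portrush root: Fix Common Problems: macvlan call traces found
"""

def count_kernel_macvlan(text):
    """Count macvlan lines emitted by the kernel, skipping FCP's own
    'macvlan call traces found' warnings (tagged root:, not kernel:)."""
    return sum(1 for line in text.splitlines()
               if "macvlan" in line and "kernel:" in line)

print(count_kernel_macvlan(log))
# -> 1
```

If that count is zero while FCP reports five, the warnings are probably being re-raised from a state file rather than from fresh kernel traces.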