wayner

Everything posted by wayner

  1. I have to think that dockers, VMs, etc. will run much more quickly from an SSD cache drive than from a spinning hard drive. Isn't that the purpose of the cache?
  2. What's the procedure for adding a second cache drive? I already have one formatted as ZFS and want to add a second one as ZFS. Do I just follow this (https://docs.unraid.net/unraid-os/manual/storage-management/#adding-disks-to-a-pool) but change BTRFS to ZFS?
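     For reference, at the raw ZFS level a single-disk pool is converted to a mirror with zpool attach. This is a minimal sketch only, since the Unraid GUI is the supported route; the pool name "cache" and the device names below are placeholders:

       # Attach a second device to the existing single-disk pool, turning it into a mirror
       zpool attach cache /dev/sdX /dev/sdY

       # Watch the resilver until both devices show ONLINE
       zpool status cache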
  3. I don't think that some of the docs have been fully updated for 6.12, especially since you can now use ZFS for cache pools and the docs don't reflect that.
  4. I removed the Unraid Connect plugin and that fixed this problem for me - none of these errors in 10 days.
  5. Now at 3.5 days with no out-of-shared-memory issues, so it looks like this was what was causing my issue.
  6. Thanks, I will try that. Is this something that just happens in 6.12? I never saw this under 6.11.5. I upgraded a few weeks ago and I keep running into new bugs/issues in 6.12: BTRFS cache corruption, call trace issues, and now this out-of-memory issue.
  7. And I think that this thread is about the same issue:
  8. I have tried closing all of the unRAID web UI sessions in browsers to see if this makes the problem go away. So far, so good, but that has only been a day and a half.
  9. You can now move things around on the dashboard more than you used to, but can you have 3 or more columns? I have lots of wasted space, but I have to scroll down to see some parts of the dashboard. I don't need some of these items to be a whole monitor screen wide. For example, the Motherboard segment is way too wide. In the Processor segment the load meters could be much narrower. The top-left server segment also has a ton of whitespace.
  10. Me too. I just saw these - my system was recently upgraded to 6.12.4. I don't remember ever seeing these under 6.11.5; if I did, they didn't cause issues, as my system could go for months without problems. Not sure it is causing any major problems in 6.12.4 - at least not yet, other than filling the logs.
  11. I woke up this morning to my logs full of these errors. The system seems to be working fine, and I only have four dockers running. Any idea what is causing these? I did a search of the forums and others have had similar issues, but it isn't clear if they fixed the problem. However, it looks like it might be related to having web browser SSH sessions open.

      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123245 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/paritymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /paritymonitor
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 237. Increase nchan_max_reserved_memory.
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123246 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/fsState?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /fsState
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: *2123247 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/mymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:16 Portrush nginx: 2023/09/16 07:58:16 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /mymonitor
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 6081. Increase nchan_max_reserved_memory.
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123249 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /devices
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123250 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/arraymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /arraymonitor
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 3687. Increase nchan_max_reserved_memory.
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123253 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/var?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /var
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [crit] 26468#26468: ngx_slab_alloc() failed: no memory
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: shpool alloc failed
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: nchan: Out of shared memory while allocating message of size 13833. Increase nchan_max_reserved_memory.
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: *2123254 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Sep 16 07:58:17 Portrush nginx: 2023/09/16 07:58:17 [error] 26468#26468: MEMSTORE:00: can't create shared message for channel /disks

      portrush-diagnostics-20230916-1142.zip
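     A quick way to check whether these nchan errors are still accumulating. A minimal sketch, assuming the messages land in /var/log/syslog, which is where the excerpt above comes from:

       # Count the out-of-shared-memory errors nchan has logged so far
       grep -c 'nchan: Out of shared memory' /var/log/syslog

       # Follow the log live to see whether new ones keep appearing
       tail -f /var/log/syslog | grep --line-buffered nchan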
  12. The docs say: Can you not also add a disk to a cache pool if the first disk is ZFS?
  13. I have Ubiquiti hardware, including a USG as my router, and I use other Ubiquiti UniFi gear. So I thought that I needed to do the second fix as well.
  14. Does this seem similar? A bunch of folks have been seeing these errors with BTRFS corruption on a cache drive. One fix is to use a different file system. I have switched my cache drive to XFS, but I may go to ZFS and mirror the cache drive.
  15. Here is the thread - I meant to put this in my earlier post.
  16. A bunch of us have had issues with cache drives using BTRFS after upgrading to 6.12.4. I have changed to XFS on my cache drive, but I will likely go to ZFS so that I can have mirrored cache drives. I reformatted, and the corruption on my BTRFS cache drive re-occurred.
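     For anyone trying to confirm the same corruption on their own pool, a minimal sketch of the standard BTRFS checks, assuming the pool is mounted at /mnt/cache (Unraid's default for a pool named "cache"):

       # Per-device error counters; non-zero corruption_errs is a bad sign
       btrfs device stats /mnt/cache

       # Re-read and verify checksums across the whole pool (-B stays in the foreground)
       btrfs scrub start -B /mnt/cache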
  17. I wasn't sure exactly what changes to make, so I hadn't made any yet. I have now changed to ipvlan, set Bridging to No, and kept Bonding at Yes. Dockers were already set to allow custom networks. My dockers are now using bond0 as the custom network interface; I do not have an option to use eth0. Is that OK? Is the reason I don't see the eth0 option that Enable Bonding is set to Yes? If I select No, will the bond0 option disappear and the eth0 option appear? Does it matter which I use for bonding, since the release notes say either works?
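     To see how bonding changes which interfaces are available, a minimal sketch using standard Linux tools; bond0 and eth0 are the interface names from the post above:

       # One line per interface; with bonding enabled, eth0 appears enslaved to bond0
       ip -br link show

       # Details of the bond, including which physical NICs are members
       cat /proc/net/bonding/bond0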
  18. Here is my diagnostics file if that helps portrush-diagnostics-20230913-1758.zip
  19. OK, thanks. Perhaps FCP is warning me about old errors that are no longer present since I upgraded to 6.12.4? But my system has only been up and running for 16 hours, and FCP gives me five occurrences of "macvlan call traces found", which would indicate errors. Yet I don't see anything in my log about macvlan call traces other than the Call Trace entries shown above or the entries generated by FCP. And they show up in red.
  20. This is from my log. Are these the macvlan call trace errors?

      Sep 13 07:57:06 Portrush kernel: CPU: 10 PID: 12285 Comm: rpcd_lsad Tainted: P B D W O 6.1.49-Unraid #1
      Sep 13 07:57:06 Portrush kernel: Hardware name: ASUS System Product Name/PRIME B560-PLUS, BIOS 0820 04/27/2021
      Sep 13 07:57:06 Portrush kernel: Call Trace:
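     One way to check whether any of these traces actually involve macvlan, rather than relying on FCP's count. A minimal sketch against /var/log/syslog, where the lines above came from:

       # Pull each call trace plus the lines that follow it, then filter for macvlan frames
       grep -A 15 'Call Trace:' /var/log/syslog | grep -i macvlan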
  21. Thank you. I get to have more fun with Mover!
  22. I have a 500GB cache drive. When I moved from 6.11.5 to 6.12.4 I kept getting BTRFS corruption errors on this cache drive, so I reformatted it to XFS. I now want to add a second cache drive. Can I mirror my cache with an XFS filesystem? If not, are there options other than BTRFS? Could I use ZFS?
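     For context on what a mirrored ZFS cache means at the command line, a minimal sketch; Unraid builds the pool through the GUI, and the pool name "cache" and the device names here are placeholders:

       # A two-device mirror: either disk can fail without losing the pool
       zpool create cache mirror /dev/sdX /dev/sdY

       # Confirm the mirror vdev and that both devices are ONLINE
       zpool status cache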
  23. What problems will these macvlan errors cause? I am getting them in my logs, but so far I don't know whether they have caused any functional problems with my system.
  24. I will do that later tonight when I get home, as I don't have access to my server right now. FCP was highlighting these issues in its report.
  25. Thanks. I have already upgraded to 6.12.4, so I just need to follow the other steps. I am getting the macvlan trace errors, but as far as I can determine they have not caused any problems - yet. I had problems with my cache disk getting corrupted under 6.12.4 but not under 6.11.5; I have since changed the cache disk from BTRFS to XFS, so hopefully that issue is now resolved.