wayner

Everything posted by wayner

  1. I think your first question is more of a generic Handbrake question and doesn't relate to the docker. There are probably other places that are WAY better to answer that question.
  2. Thanks. One of my main uses of unRAID is as a media server which will be accessing lots of metadata and Fanart files. Is there more documentation around pros and cons of various disk formats? Like what format should you use for your cache disk? BTRFS, ZFS, XFS? What are the pros and cons of each?
  3. No one has any thoughts on this? Is there any reason why you can't set the ZFS cache size in the unRAID GUI? Is it because it can only be resized on a reboot? Or is it a new feature that just hasn't been implemented yet?
  4. It seems like unRAID uses 1/8 of your RAM as the size of the cache for ZFS disks. You can change this, but not currently through the GUI - you add a config file. Has anyone done research into determining the optimal cache size? I recently increased my server RAM from 32GB to 64GB and my ZFS cache size went up to 8GB. But I rarely use all of my RAM. Should I increase the ZFS cache size? In what instances does it make sense to have a larger or smaller cache size? I did search but didn't see any documentation on this. There is a thread from about 1.5 years ago, but it is unclear to me whether it still applies, as it predates unRAID's official support for ZFS.
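For anyone wanting to experiment with this, the ZFS ARC limit is exposed as a kernel module parameter on Linux. A minimal sketch (the 16 GiB target is just an example value, and the config filename shown in the comment is an assumption - check how your unRAID version expects it):

```shell
# Compute the desired ARC ceiling in bytes (example: 16 GiB).
ARC_MAX_GIB=16
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))
echo "zfs_arc_max = ${ARC_MAX_BYTES} bytes"

# Apply at runtime (requires root; takes effect immediately, no reboot):
#   echo "${ARC_MAX_BYTES}" > /sys/module/zfs/parameters/zfs_arc_max

# To persist across reboots, the usual approach is a modprobe options line,
# e.g. in a zfs.conf file (location varies by setup - an assumption here):
#   options zfs zfs_arc_max=17179869184
```

The runtime sysfs write is handy for testing different sizes before committing one to a config file.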
  5. You and at least a few dozen others. I am one of them.
  6. At least a few dozen people have had issues with BTRFS corruption on their cache drive under 6.12. I solved my problem by reformatting the cache drive to ZFS.
  7. This appears to happen to many people especially if a browser window is kept open. See:
  8. I have this problem cropping up on a regular basis. It seems like it may occur when there are stale browsers open, but is that the expected behaviour? In other words, is this a bug or a feature? And if it is a bug, which it appears to be, is it being addressed?
  9. There is talk in this thread about the "official" Jellyfin docker. Where is that? When I look in apps in unRAID I see the binhex, linuxserver, and hotio dockers. But I don't see an official one.
  10. Is this docker supposed to stop itself? I am using it, and it seems to be working ok as I am using custom icons on a couple of VMs. But I noticed that the docker was stopped, so I restarted it. But then after a few minutes it was stopped again. Here is the log:
Cloning into 'unraid_vm_icons'...
Clear all icons not set......continuing.
created directory: '/config/icons'
I have created the icon store directory & now will start downloading selected icons
.
.
icons downloaded
icons synced
Clear all icons not set......continuing.
.
.
Icons downloaded previously.
icons synced
Clear all icons not set......continuing.
.
.
Icons downloaded previously.
icons synced
Clear all icons not set......continuing.
.
.
Icons downloaded previously.
icons synced
** Press ANY KEY to close this window **
  11. Thanks, I added the memory and all looks well. I didn't bother with a memtest for now.
  12. My server currently has 2x16GB. I am adding two more of the same memory sticks. I don't have to do anything in unRAID, do I? It's just a case of shutting down the system, installing the sticks, and starting back up, is it not? The reason that I am doing this is that I am running more VMs and that is starting to use up all of my RAM.
  13. Thanks, I have considered that but I find it slightly annoying that the ethernet ports on the UXG-Lite are 1Gig. That would be fine for me for now, but I would prefer faster to futureproof.
  14. Sorry, I was using the wrong terminology. The current setting is Cache->Array. I did that to clear off my cache disk to reformat it, and did the same with other shares like appdata, system, etc. But I guess I forgot to change the setting back to Array->Cache for the domains share.
  15. I have two 500GB NVME SSDs in a mirrored ZFS pool for my cache and several 4TB or larger spinning hard drives in my array. I would think that the NVME would be faster.
  16. I have two VMs - Ubuntu and Win11. A few weeks ago I had mover move the files off my cache to the array so I could reformat my cache drive to ZFS and add a second disk to the pool in a mirrored configuration. I just noticed that my domains share is mainly on the array - when I look at the config for the domains share, I see that I forgot to change the mover action back to Array->Cache; it is still at Cache->Array. But my VMs seem to be working ok and don't seem to be much slower, although I haven't used them a lot. I tried googling and don't see a clear answer - what is the reason to have domains on cache? Is it because a VM will run a lot faster on my cache, which is an NVME disk, compared to the array, which is regular hard drives?
  17. On a related note - Ubiquiti had a security "event" yesterday where users who logged into the cloud for their network controller or NVR were seeing video and/or able to configure the networks of other users. That certainly gives one more incentive to keep running the controller locally rather than through the cloud. https://arstechnica.com/security/2023/12/unifi-devices-broadcasted-private-video-to-other-users-accounts/
  18. What's the difference in building unRAID docker images vs images for other platforms? And what other platforms use dockers? Plain vanilla Linux OSes like Ubuntu? I know that you can now use Docker under Windows, but that seems to have a bunch of limitations.
  19. What is LSIO? I have only ever heard of them in the context of building unRAID dockers, and I had no idea that they did anything other than that.
  20. I didn't mean any offense, and I appreciate your help. But it just looks a bit painful compared to today where pretty much all you need to do is click on the docker that you want to install.
  21. I have been using unRAID for over five years and I don't mind getting my hands dirty, but when I look at bmartino1's post here it looks gross AF! Maybe it is not as bad as it looks, but it is going to scare a lot of people away.
  22. These errors have reappeared on my system after a few months without them. Yesterday I switched my cache disk from XFS to ZFS and added a second cache drive to the pool in mirrored mode, and I added a Win11 VM. Overnight I started getting these errors again - my log has hundreds or thousands of these entries. Any idea what is causing these? I saw a theory in the past that keeping a web browser tab open to your server is a potential issue. This is a snippet of my log:
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 309. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132809 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/parity?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /parity
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132810 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/paritymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /paritymonitor
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 237. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132811 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/fsState?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /fsState
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132812 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/mymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /mymonitor
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 3688. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132813 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/var?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /var
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 14281. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132814 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /disks
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 316. Increase nchan_max_reserved_memory.
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132816 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/wireguard?buffer_length=1 HTTP/1.1", host: "localhost"
Dec 3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /wireguard
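For anyone hitting the same thing, a quick way to see how fast these nchan errors accumulate, and which publisher channels they come from, is to grep the syslog. A sketch, assuming the usual unRAID log location (`/var/log/syslog`) - adjust the path if yours differs:

```shell
# Path to the system log (unRAID default; an assumption for other setups).
LOG=/var/log/syslog

# Total count of nchan shared-memory failures:
grep -c 'nchan: Out of shared memory' "$LOG"

# Break the failures down by channel to see which publisher is noisiest:
grep -o 'POST /pub/[a-zA-Z]*' "$LOG" | sort | uniq -c | sort -rn
```

If one channel dominates, that can help narrow down which page or plugin is holding the stale subscription.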
  23. To answer my own question here, in case anyone else comes across this: Stop the array. Go to unRAID Settings -> Network Settings and change Bridging to Yes. In your Win11 VM setup, change the network source to br0. When you restart you should be on your LAN.
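For anyone who edits the VM definition XML directly instead of using the GUI, the network interface stanza ends up looking roughly like this after the switch to br0. A sketch only - the MAC address is a placeholder, and the exact stanza unRAID generates may include extra attributes:

```xml
<!-- libvirt domain XML: interface bridged onto the host's br0,
     so the VM gets a LAN address instead of a NAT'd 192.168.122.x one -->
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>  <!-- placeholder MAC -->
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

The default `type='network'` with `<source network='default'/>` is what puts the VM behind libvirt's NAT on 192.168.122.0/24; bridging replaces that with direct LAN access.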
  24. I have a 192.168.1.X subnet on my LAN. I installed a Win11 VM on my unRAID server and it has an IP address of 192.168.122.43. This Win VM can see my LAN, as in I can ping 192.168.1.1, etc. But other devices on my LAN cannot see the Win VM at 192.168.122.43. How do I fix this, as I would like to use RDP to connect to the VM rather than VNC?
  25. Since no one gave me any advice, I used mover to move all files back to the array again. I erased the drive. Then I changed the RAID type to mirror, restarted the array, and used mover to move the system and appdata shares back to the cache. So I am now working properly with a mirrored two-disk ZFS cache pool.