
Posts posted by wayner

  1. @tivodoctor - You might want to look into getting an HDHomeRun tuner.  I have a couple of these and they work fine in both Plex and Jellyfin.  For the most part I use them with SageTV, which is kind of dead now but was, by far, the best software for recording TV and supported tons of tuners and tuning devices, like IR blasters, etc.

     

    You can get used devices pretty cheap on eBay.  Sometimes the wall warts die, so you may need to replace a power supply.

     

    edit - One more good thing about HDHomeRun devices is that they are network devices with Ethernet connections.  So you don't plug them into your PC, and you don't have to worry about drivers and passthrough in a docker/VM.  And you can share them with multiple PCs or other devices.
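    Since the tuner is just an HTTP device on the LAN, you can query it from any machine without drivers.  A minimal Python sketch (the /discover.json endpoint is the tuner's built-in API; the IP address in the usage comment is a placeholder for your own tuner's):

    ```python
    import json
    from urllib.request import urlopen

    DISCOVER_PATH = "/discover.json"  # the tuner's built-in HTTP API endpoint

    def fetch_discover(device_ip: str) -> dict:
        """Ask the tuner to describe itself over plain HTTP on the LAN
        (no drivers or USB passthrough involved)."""
        with urlopen(f"http://{device_ip}{DISCOVER_PATH}", timeout=5) as resp:
            return json.load(resp)

    def summarize(info: dict) -> str:
        """Boil a discover.json payload down to one line."""
        name = info.get("FriendlyName", "HDHomeRun")
        tuners = info.get("TunerCount", 0)
        return f"{name}: {tuners} tuner(s)"

    # e.g. print(summarize(fetch_discover("192.168.1.50")))  # placeholder IP
    ```

    This is also why the device can serve Plex, Jellyfin, and SageTV at the same time: every client just talks to the same LAN address.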

    • Like 1
  2. This docker seems to have updates available on a very regular basis, like every week or so.  But I believe this is a relatively small and simple docker: there is no UI, it just updates your dynamic IP and writes to the log.

     

    So why are there so many updates to this docker?

  3. Thanks.  One of my main uses of unRAID is as a media server, which will be accessing lots of metadata and fanart files.

     

    Is there more documentation on the pros and cons of the various disk formats?  Like, what format should you use for your cache disk: BTRFS, ZFS, or XFS?  What are the pros and cons of each?

  4. It seems like unRAID uses 1/8 of your RAM as the size of the cache (ARC) for ZFS disks.  You can change this, but not currently through the GUI - you have to add a config file.

     

    Has anyone done research into determining the optimal cache size?  I recently increased my server RAM from 32GB to 64GB and my ZFS cache size went up to 8GB.  But I rarely use all of my RAM.  Should I increase the ZFS cache size?  In what instances does it make sense to have a larger or smaller cache size?

     

    I did search but didn't find any documentation on this.  There is a thread from about 1.5 years ago, but it is unclear to me whether it still applies, as it predates unRAID's official support for ZFS.
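    For reference, the "config file" approach looks roughly like this.  The zfs_arc_max module parameter is standard OpenZFS; the /boot/config/modprobe.d location is where unRAID is commonly said to pick up module options at boot, so verify both against your unRAID version before relying on them:

    ```shell
    # Sketch: cap the ZFS ARC at 16 GiB (adjust to taste).
    arc_max_gib=16
    arc_max_bytes=$((arc_max_gib * 1024 * 1024 * 1024))

    # Apply immediately (lasts until reboot); uncomment on the server itself:
    # echo "$arc_max_bytes" > /sys/module/zfs/parameters/zfs_arc_max

    # Persist across reboots via a module-options file:
    # echo "options zfs zfs_arc_max=$arc_max_bytes" > /boot/config/modprobe.d/zfs.conf

    echo "zfs_arc_max target: $arc_max_bytes bytes (${arc_max_gib} GiB)"
    ```

    You can read the live value back from /sys/module/zfs/parameters/zfs_arc_max to confirm it took effect.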

  5. Is this docker supposed to stop itself?  I am using it, and it seems to be working ok as I am using custom icons on a couple of VMs.  But I noticed that the docker was stopped, so I restarted it.  But then after a few minutes it was stopped again.

     

    Here is the log:

    Cloning into 'unraid_vm_icons'...
      Clear all icons not set......continuing.
    created directory: '/config/icons'
    I have created the icon store directory & now will start downloading selected icons
    .
    .
    icons downloaded
    icons synced
      Clear all icons not set......continuing.
    .
    .
    Icons downloaded previously.
    icons synced
      Clear all icons not set......continuing.
    .
    .
    Icons downloaded previously.
    icons synced
      Clear all icons not set......continuing.
    .
    .
    Icons downloaded previously.
    icons synced
    
    ** Press ANY KEY to close this window ** 

     

  6. My server currently has 2x16GB.  I am adding two more of the same memory sticks.

     

    I don't have to do anything in unRAID, do I?  It's just a case of shutting down the system, installing the sticks, and starting back up, is it not?

     

    The reason that I am doing this is that I am running more VMs and that is starting to use up all of my RAM.
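    As far as I know, nothing is needed on the unRAID side for a RAM upgrade; after booting back up, a quick check that the OS sees all four sticks:

    ```shell
    # Total memory should now read ~64G:
    free -h

    # Per-slot detail (size, speed, channel), as root:
    # dmidecode -t memory
    ```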

  7. I have two VMs - Ubuntu and Win11.  A few weeks ago I had mover move the files off my cache to the pool to reformat my cache drive to ZFS and add a second disk to the pool in a mirrored configuration.

     

    I just noticed that my domains share is mainly in the pool - when I looked at the config for the domains share, I saw that I forgot to change the mover action back to Pool->Cache; it is still set to Cache->Pool.  But my VMs seem to be working OK and don't seem to be much slower, although I haven't used them a lot.

     

    I tried googling and don't see a clear answer - what is the reason to have domains on cache?  Is it because a VM will run a lot faster on my cache, which is an NVMe disk, compared to the pool, which is regular hard drives?
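    One way to check the speed difference directly is a quick throughput test.  A rough sketch (fio would give much better numbers; /mnt/cache and /mnt/user0 are the typical unRAID mount points for the cache and the array, so adjust to your setup):

    ```shell
    # On the server itself, compare sequential write speed on the NVMe
    # cache vs the spinning array (paths are typical unRAID mounts):
    #   dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=1024 conv=fdatasync
    #   dd if=/dev/zero of=/mnt/user0/ddtest bs=1M count=1024 conv=fdatasync
    #   rm -f /mnt/cache/ddtest /mnt/user0/ddtest

    # Harmless illustration of the same command against a temp file:
    testfile="/tmp/ddtest.$$"
    dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$testfile"
    ```

    For VM disk images, random 4K I/O matters even more than sequential speed, and the NVMe advantage there is typically much larger, which is the usual argument for keeping domains on cache.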

  8. On a related note - Ubiquiti had a security "event" yesterday where users who logged into the cloud for their network controller or NVR were seeing video from, and/or able to configure the networks of, other users.

     

    That certainly gives one more incentive to keep running the controller locally rather than through the cloud.

    https://arstechnica.com/security/2023/12/unifi-devices-broadcasted-private-video-to-other-users-accounts/

    • Like 1
  9. 53 minutes ago, wgstarks said:

    I did another one a little earlier in this thread that may be more to your liking but honestly, I don’t think LSIO really cares much about unRAID users anymore (JUST MY GUESS) and I recommend switching to unifi-controller-reborn. IMHO it’s much easier to install and setup.

    What is LSIO? I have never heard of them in any context other than building unRAID dockers, and I had no idea that they did anything else.

  10. These errors have reappeared on my system after a few months without them.

     

    Yesterday I switched my cache disk from XFS to ZFS, added a second cache drive to the pool in mirrored mode, and added a Win11 VM.  Overnight I started getting these errors again - my log has hundreds or thousands of these entries.  Any idea what is causing them?  I saw a theory in the past that keeping a web browser tab open to your server is a potential issue.  This is a snippet of my log:

     

    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 309. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132809 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/parity?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /parity
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132810 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/paritymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /paritymonitor
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 237. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132811 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/fsState?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /fsState
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 234. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132812 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/mymonitor?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /mymonitor
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 3688. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132813 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/var?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /var
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 14281. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132814 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /disks
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [crit] 3106#3106: ngx_slab_alloc() failed: no memory
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: shpool alloc failed
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: nchan: Out of shared memory while allocating message of size 316. Increase nchan_max_reserved_memory.
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: *3132816 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/wireguard?buffer_length=1 HTTP/1.1", host: "localhost"
    Dec  3 05:15:36 Portrush nginx: 2023/12/03 05:15:36 [error] 3106#3106: MEMSTORE:00: can't create shared message for channel /wireguard
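
    The log's own suggestion points at the nchan directive.  As a sketch only (unRAID regenerates its nginx config at boot, so the file location and whether an edit survives a reboot vary by version; closing stale webGui browser tabs, per the theory above, is worth trying first):

    ```nginx
    # Inside the http {} block of the webGui's nginx config:
    nchan_max_reserved_memory 1024k;   # raise from the default
    ```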