Posts posted by mosaati

  1. Hi

     

     Sorry to bring this up, but I just applied this plugin with the same config you demonstrated. It is working fine so far, but I keep receiving email notifications with the following:

    "could not find any snapshots to destroy; check snapshot names.
    could not remove SSD@autosnap_2021-04-19_13:15:01_frequently : 256 at /usr/local/sbin/sanoid line 343."

     

    Do you know what to do to fix this? 
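
     For reference, this is what I can run to compare the snapshots that actually exist on the pool (SSD in my case, taken from the error message) against the names sanoid says it cannot destroy:

     # list existing snapshots on the SSD pool with their creation times,
     # to compare against the names in the sanoid error
     zfs list -t snapshot -r -o name,creation SSD | grep autosnap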

  2. 13 minutes ago, fxhe said:

    HOW?

     Did you upgrade to 6.9.1 from a lower version or install a new one?

    A new installation. 

     I'm just guessing here, but if you are upgrading, maybe you should uninstall the old plugin, reboot, and then reinstall it?

  3. 14 minutes ago, fxhe said:

     My upgrade is already done.

     I upgraded to 6.9.1 and the ZFS plugin is not supported.

     What's your suggestion? Wait or downgrade?

    I'm not sure if I follow. I have just installed 6.9.1 on a new server and the ZFS plugin is working fine. 

  4. 7 minutes ago, BasWeg said:

    No, you need to execute the command for every dataset inside the pool.

     

    https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=917605

     

     I'm using the following script in UserScripts, set to be executed at "First Start Array only":

    
    #!/bin/bash
    #from testdasi  (https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=875342)
    #echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
    
    #just dump everything in
    for n in $(zfs list -H -o name)
    do
      echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
    done

     

     Just to make sure: I don't have to edit this script, just copy and paste it, right?
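
     If it helps, I could preview what the script would append without touching /etc/mtab first (the dataset names in the sample output below are just made up):

     # dry run: print the lines the script would add to /etc/mtab
     for n in $(zfs list -H -o name)
     do
       echo "$n /mnt/$n zfs rw,default 0 0"
     done

     # example output for a pool with one child dataset (hypothetical layout):
     # SSD /mnt/SSD zfs rw,default 0 0
     # SSD/appdata /mnt/SSD/appdata zfs rw,default 0 0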

  5. On 7/13/2020 at 11:10 AM, testdasi said:

    Figured it out. No need to mount through /etc/fstab.

     

     What's missing are entries in /etc/mtab, which are created if mounted from fstab.

     So a few echoes into /etc/mtab are the solution. Just need to do this at boot.

     Each filesystem that is accessible over SMB (even through symlinks) needs a line in mtab to stop the spurious warning spam.

    
    echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab

     

     

     Sorry to bring this back. I just noticed this in my log. I have SSD and HDD pools mounted to /mnt/SSD and /mnt/HDD.

     

     Just to be sure, my commands would look like this?

    echo "SSD /mnt/SSD/ zfs rw,default 0 0" >> /etc/mtab
    echo "HDD /mnt/HDD/ zfs rw,default 0 0" >> /etc/mtab

     

  6. 22 minutes ago, mattie112 said:

    I don't think this option exists in NPM but you can inject your own nginx config so perhaps you can do it manually?

    http://nginx.org/en/docs/http/load_balancing.html

    Thanks for the reply.

     

     Yeah, it's not available out of the box yet. I tried different combinations in the Advanced tab but nothing worked so far. I thought I'd ask in case someone has tried it and found the correct syntax.

  7. Hi everyone 

     

     I have been using NPM for a while and it's running great when I assign a domain to a specific IP:PORT.

     

     One of the features I was looking for but couldn't find (or don't know how to do in NPM yet) is load balancing when I have more than one server or IP.

     

     To explain: let's say 192.168.1.2, 192.168.1.3, and 192.168.1.4 form a Kubernetes cluster behind the domain kub.example.com, and any requests to kub.example.com or *.kub.example.com should be forwarded to the cluster.

     

    Does anyone know how? 

     

    Thanks in advance. 
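
     In case it helps anyone searching later, the direction I understand from the nginx load-balancing docs linked above is roughly the sketch below. It is only a sketch: the include path is my assumption about where NPM picks up custom config, so please verify it for your NPM version.

     #!/bin/bash
     # hypothetical sketch: define an http-level nginx upstream for the three
     # nodes so a proxy host can forward to it; the path below is an assumed
     # location for NPM custom config snippets and may differ in your setup
     {
       echo 'upstream kub_cluster {'
       echo '    server 192.168.1.2;'
       echo '    server 192.168.1.3;'
       echo '    server 192.168.1.4;'
       echo '}'
     } > /data/nginx/custom/http_top.conf
     # the proxy host for kub.example.com / *.kub.example.com would then need
     # "proxy_pass http://kub_cluster;" in its custom/advanced nginx configuration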

