Posts posted by mosaati
-
-
Hi
Sorry to bring this up, but I just applied this plugin with the same config you demonstrated. It is working fine so far, but I keep receiving email notifications with the following:
"could not find any snapshots to destroy; check snapshot names.
could not remove SSD@autosnap_2021-04-19_13:15:01_frequently : 256 at /usr/local/sbin/sanoid line 343."
Do you know what to do to fix this?
-
2 hours ago, fxhe said:
I tried and the output was: Unable to establish SSL connection.
Doesn't that mean you have a network connection problem?
Can you install any other plug-in?
-
13 minutes ago, fxhe said:
HOW?
Did you upgrade to 6.9.1 from a lower version or install a new one?
A new installation.
I'm just guessing here, but if you are upgrading, maybe you should uninstall the old plugin, reboot, and reinstall it?
-
14 minutes ago, fxhe said:
My upgrade is overdone
I upgraded to 6.9.1 and the ZFS plugin is not supported.
What's your suggestion? Wait or downgrade?
I'm not sure if I follow. I have just installed 6.9.1 on a new server and the ZFS plugin is working fine.
-
3 hours ago, BasWeg said:
yes
Thank you so much. The log stopped generating that warning after applying the script.
Really appreciate your help.
-
-
7 minutes ago, BasWeg said:
No, you need to execute the command for every dataset inside the pool.
https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=917605
I'm using following script in UserScripts to be executed at "First Start Array only"
#!/bin/bash
#from testdasi (https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=875342)
#echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
#just dump everything in
for n in $(zfs list -H -o name)
do
  echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
done
Just to make sure: I don't have to edit this script, just copy and paste it, right?
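To illustrate what that loop produces, here is a stand-alone sketch of the same per-dataset loop. The dataset names are hypothetical stand-ins for the real `zfs list -H -o name` output, and it prints the entries instead of appending to /etc/mtab, so it runs without ZFS installed:

```shell
#!/bin/sh
# Stand-in dataset list (the real script gets this from: zfs list -H -o name)
datasets="SSD SSD/appdata HDD"

lines=""
for n in $datasets; do
  # One /etc/mtab-style entry per dataset, not just per pool
  lines="${lines}${n} /mnt/${n} zfs rw,default 0 0
"
done

# The real script appends these lines to /etc/mtab; here we just print them
printf '%s' "$lines"
```

Run against the real dataset list, every dataset (including children like SSD/appdata) gets its own mtab line, which is the point about executing the command for every dataset inside the pool.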
-
On 7/13/2020 at 11:10 AM, testdasi said:
Figured it out. No need to mount through /etc/fstab.
What's missing are entries in /etc/mtab, which are created if mounted from fstab.
So a few echo into /etc/mtab is the solution. Just need to do this at boot.
Each filesystem that is accessible by smb (even through symlinks) needs a line in mtab to stop the spurious warning spam.
echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
Sorry to bring this back. I just noticed this in my log. I have SSD and HDD pools mounted to /mnt/SSD and /mnt/HDD.
Just to be sure, will my commands be like this?
echo "SSD /mnt/SSD/ zfs rw,default 0 0" >> /etc/mtab
echo "HDD /mnt/HDD/ zfs rw,default 0 0" >> /etc/mtab
-
Hi everyone
I'm all new to Unraid and just switched from TrueNAS. I'm using the ZFS plugin and it's working great.
Is the modified Unraid kernel with ZFS better? Or just an alternative?
-
22 minutes ago, mattie112 said:
I don't think this option exists in NPM but you can inject your own nginx config so perhaps you can do it manually?
Thanks for the reply.
Yeah, it's not available out of the box yet. I tried different combinations in the advanced tab, but nothing has worked so far. I thought I'd ask in case someone has tried it and found the correct syntax.
-
Hi everyone
I have been using NPM for a while and it's running great when I specifically assign a domain per IP:PORT.
One feature I was looking for but couldn't find, or don't know how to do in NPM yet, is balancing across more than one server or IP.
To explain: let's say 192.168.1.2, 192.168.1.3, and 192.168.1.4 are a Kubernetes cluster behind the kub.example.com domain, and any kub.example.com and *.kub.example.com requests should be forwarded to the cluster.
Does anyone know how?
Thanks in advance.
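For what it's worth, in plain nginx (outside NPM's UI) this kind of balancing is normally done with an upstream block, which could be injected as a custom config. A minimal sketch, assuming the three node IPs above and that the backends listen on port 80 (the upstream name and port are assumptions):

```nginx
# Hypothetical custom nginx config; the upstream name and port 80 are assumed.
upstream kub_cluster {
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}

server {
    listen 80;
    # Matches kub.example.com and any subdomain of it
    server_name kub.example.com *.kub.example.com;

    location / {
        proxy_pass http://kub_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

By default nginx round-robins requests across the servers in the upstream block; directives like `least_conn` or per-server `weight=` can change the balancing behavior.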
ZFS plugin for unRAID
in Plugin Support
Posted
Interesting.
I will do some tests on that in a test environment.