steini84

ZFS plugin for unRAID

169 posts in this topic

Hi. I ran into some trouble yesterday since there is no kernel headers package available for this version of the kernel (none that I could find). I'll try something today and hopefully figure it out quickly.

21 minutes ago, steini84 said:

Hi. I ran into some trouble yesterday since there is no kernel headers package available for this version of the kernel (none that I could find). I'll try something today and hopefully figure it out quickly.

Thanks very much! :D

2 hours ago, steini84 said:

Hi. I ran into some trouble yesterday since there is no kernel headers package available for this version of the kernel (none that I could find). I'll try something today and hopefully figure it out quickly.

 

Ping @CHBMB and ask where he got it from for the dvb plugin. 

1 hour ago, steini84 said:

Updated for 6.6.0

Seems like you've sorted it, but AFAIK you don't need the specific kernel headers for the kernel you're compiling against. Just use whatever version is in slackware64-current.
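For anyone hitting the same issue: when no matching headers package exists, out-of-tree modules like ZFS can usually be built against a kernel source tree prepared with `make modules_prepare`. A rough sketch, not from this thread; the kernel version, download URL, and config path are illustrative and depend on your system:

```shell
# Sketch: prepare a kernel source tree so out-of-tree modules (e.g. ZFS)
# can compile when no matching kernel-headers package is available.
KVER=$(uname -r)
SRC=${KVER%%-*}   # strip any local suffix, e.g. 4.18.20-unRAID -> 4.18.20
# URL assumes a 4.x kernel; adjust the vX.x directory for your series.
wget "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${SRC}.tar.xz"
tar xf "linux-${SRC}.tar.xz"
cd "linux-${SRC}"
zcat /proc/config.gz > .config   # reuse the running config if the kernel exposes it
make olddefconfig                # fill in defaults for any new options
make modules_prepare             # generates the headers/scripts module builds need
```

This only prepares the tree for module builds; it does not compile a full kernel.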


Seems like you've sorted it, but AFAIK you don't need the specific kernel headers for the kernel you're compiling against. Just use whatever version is in slackware64-current.

Yeah, a mistake on my part. It was a dependency problem that was not connected to the headers at all.


Forgot to post, but updated to 6.6.3.


How can I run "zfs status -x" to see the status of the pool(s)? I swear this worked at one point, but now it throws an error that it doesn't know what "status" is as a command:

 

root@NAS01:~# zfs status -x
unrecognized command 'status'
usage: zfs command args ...
where 'command' is one of the following:

        create [-p] [-o property=value] ... <filesystem>
        create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
        destroy [-fnpRrv] <filesystem|volume>
        destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
        destroy <filesystem|volume>#<bookmark>

        snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
        rollback [-rRf] <snapshot>
        clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
        promote <clone-filesystem>
        rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
        rename [-f] -p <filesystem|volume> <filesystem|volume>
        rename -r <snapshot> <snapshot>
        bookmark <snapshot> <bookmark>

        list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...
            [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...

        set <property=value> ... <filesystem|volume|snapshot> ...
        get [-rHp] [-d max] [-o "all" | field[,...]]
            [-t type[,...]] [-s source[,...]]
            <"all" | property[,...]> [filesystem|volume|snapshot|bookmark] ...
        inherit [-rS] <property> <filesystem|volume|snapshot> ...
        upgrade [-v]
        upgrade [-r] [-V version] <-a | filesystem ...>
        userspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>
        groupspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>

        mount
        mount [-vO] [-o opts] <-a | filesystem>
        unmount [-f] <-a | filesystem|mountpoint>
        share <-a [nfs|smb] | filesystem>
        unshare <-a [nfs|smb] | filesystem|mountpoint>

        send [-DnPpRvLec] [-[i|I] snapshot] <snapshot>
        send [-Lec] [-i snapshot|bookmark] <filesystem|volume|snapshot>
        send [-nvPe] -t <receive_resume_token>
        receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ...
            <filesystem|volume|snapshot>
        receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ...
            [-d | -e] <filesystem>
        receive -A <filesystem|volume>

        allow <filesystem|volume>
        allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
            <filesystem|volume>
        allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
        allow -c <perm|@setname>[,...] <filesystem|volume>
        allow -s @setname <perm|@setname>[,...] <filesystem|volume>

        unallow [-rldug] <"everyone"|user|group>[,...]
            [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

        hold [-r] <tag> <snapshot> ...
        holds [-r] <snapshot> ...
        release [-r] <tag> <snapshot> ...
        diff [-FHt] <snapshot> [snapshot|filesystem]

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow

 

Thanks!

How can I run "zfs status -x" to see the status of the pool(s)?


zpool status -x :)
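For reference: pool health lives under `zpool`, while `zfs` manages datasets, which is why `zfs status` fails above. A quick sketch of the pool-side commands (the pool name `tank` is just an example, not from this thread):

```shell
# Pool-level health checks use zpool, not zfs:
zpool status -x                # terse: only pools with problems, or "all pools are healthy"
zpool status tank              # full status for one pool; 'tank' is an example name
zpool list -H -o name,health   # scriptable output: one "name<TAB>health" line per pool
```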




Update for 6.7.0 soon? Doesn't look like there has been a ZoL release since 0.7.12 in November. :(

Update for 6.7.0 soon? Doesn't look like there has been a ZoL release since 0.7.12 in November. :(


I was going to wait for 6.7 final, but I can build for the RC if there is a need for that.

Also, 6.6.7 works fine since it was just an update for Docker that did not change the kernel.


Can confirm it's working fine under 6.6.7 :) 


Hi guys, I need some help.

I set up zfs-zed using this post.

ZED is working fine, but now my syslog is full of this:

 

Mar 6 07:41:54 unRaid zed[3601]: Invoking "all-syslog.sh" eid=82 pid=25649
Mar 6 07:41:54 unRaid zed: eid=82 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:41:54 unRaid zed[3601]: Finished "all-syslog.sh" eid=82 pid=25649 exit=0
Mar 6 07:46:57 unRaid zed[3601]: Invoking "all-syslog.sh" eid=83 pid=29036
Mar 6 07:46:57 unRaid zed: eid=83 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:46:57 unRaid zed[3601]: Finished "all-syslog.sh" eid=83 pid=29036 exit=0
Mar 6 07:52:00 unRaid zed[3601]: Invoking "all-syslog.sh" eid=84 pid=32653
Mar 6 07:52:00 unRaid zed: eid=84 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:52:00 unRaid zed[3601]: Finished "all-syslog.sh" eid=84 pid=32653 exit=0
Mar 6 07:57:03 unRaid zed[3601]: Invoking "all-syslog.sh" eid=85 pid=4478
Mar 6 07:57:03 unRaid zed: eid=85 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:57:03 unRaid zed[3601]: Finished "all-syslog.sh" eid=85 pid=4478 exit=0
Mar 6 08:02:05 unRaid zed[3601]: Invoking "all-syslog.sh" eid=86 pid=8041
Mar 6 08:02:05 unRaid zed: eid=86 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:02:05 unRaid zed[3601]: Finished "all-syslog.sh" eid=86 pid=8041 exit=0
Mar 6 08:07:08 unRaid zed[3601]: Invoking "all-syslog.sh" eid=87 pid=10773
Mar 6 08:07:08 unRaid zed: eid=87 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:07:08 unRaid zed[3601]: Finished "all-syslog.sh" eid=87 pid=10773 exit=0
Mar 6 08:12:11 unRaid zed[3601]: Invoking "all-syslog.sh" eid=88 pid=13372
Mar 6 08:12:11 unRaid zed: eid=88 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:12:11 unRaid zed[3601]: Finished "all-syslog.sh" eid=88 pid=13372 exit=0

 

Please help me stop ZED from spamming the syslog.
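Not an answer given in this thread, but a common way to quiet this on a stock ZFS-on-Linux setup: ZED only runs the "zedlet" scripts present in its zed.d directory, so disabling the all-syslog.sh zedlet stops the per-event syslog lines. The paths below assume the usual /etc/zfs/zed.d layout, which may differ on unRAID:

```shell
# Disable the all-syslog zedlet so ZED stops logging every event to syslog.
# /etc/zfs/zed.d is the usual zedlet directory; adjust for your install.
mv /etc/zfs/zed.d/all-syslog.sh /etc/zfs/zed.d/all-syslog.sh.disabled

# Restart zed so it re-scans its zedlet directory
# (adapt to however zed is launched on your system):
pkill zed
zed
```

Other zedlets (e.g. email notifications) keep working; only the catch-all syslog logging is turned off.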

