ZFS plugin for unRAID

Hi. I ran into some trouble yesterday since there isn't a kernel headers package available for this version of the kernel (that I could find). I'll try something today and hopefully figure it out quickly.


Sent from my iPhone using Tapatalk

21 minutes ago, steini84 said:

Hi. I ran into some trouble yesterday since there isn't a kernel headers package available for this version of the kernel (that I could find). I'll try something today and hopefully figure it out quickly.


Sent from my iPhone using Tapatalk

:D Thx very much

2 hours ago, steini84 said:

Hi. I ran into some trouble yesterday since there isn't a kernel headers package available for this version of the kernel (that I could find). I'll try something today and hopefully figure it out quickly.


Sent from my iPhone using Tapatalk

 

Ping @CHBMB and ask where he got it from for the DVB plugin.

1 hour ago, steini84 said:

Updated for 6.6.0

Seems like you've sorted it, but you don't need the specific kernel headers for the kernel you're compiling, AFAIK. Just use whatever version is in slackware64-current.
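
(For anyone who does want to grab the headers: a sketch of pulling the package from a mirror, assuming the usual slackware64-current tree layout; <version> is a placeholder, not a real version string:)

wget https://mirrors.slackware.com/slackware/slackware64-current/slackware64/d/kernel-headers-<version>-x86-1.txz
installpkg kernel-headers-<version>-x86-1.txz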

Seems like you've sorted it, but you don't need the specific kernel headers for the kernel you're compiling, AFAIK. Just use whatever version is in slackware64-current.

Yeah, a mistake on my part. It was a dependency problem that was not connected to headers at all.


Sent from my iPhone using Tapatalk


How can I run "zfs status -x" to see the status of the pool(s)? I swear this worked at one point, but now it throws an error that it doesn't know what "status" is as a command:

 

root@NAS01:~# zfs status -x
unrecognized command 'status'
usage: zfs command args ...
where 'command' is one of the following:

        create [-p] [-o property=value] ... <filesystem>
        create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
        destroy [-fnpRrv] <filesystem|volume>
        destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
        destroy <filesystem|volume>#<bookmark>

        snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
        rollback [-rRf] <snapshot>
        clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
        promote <clone-filesystem>
        rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
        rename [-f] -p <filesystem|volume> <filesystem|volume>
        rename -r <snapshot> <snapshot>
        bookmark <snapshot> <bookmark>

        list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...
            [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...

        set <property=value> ... <filesystem|volume|snapshot> ...
        get [-rHp] [-d max] [-o "all" | field[,...]]
            [-t type[,...]] [-s source[,...]]
            <"all" | property[,...]> [filesystem|volume|snapshot|bookmark] ...
        inherit [-rS] <property> <filesystem|volume|snapshot> ...
        upgrade [-v]
        upgrade [-r] [-V version] <-a | filesystem ...>
        userspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>
        groupspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>

        mount
        mount [-vO] [-o opts] <-a | filesystem>
        unmount [-f] <-a | filesystem|mountpoint>
        share <-a [nfs|smb] | filesystem>
        unshare <-a [nfs|smb] | filesystem|mountpoint>

        send [-DnPpRvLec] [-[i|I] snapshot] <snapshot>
        send [-Lec] [-i snapshot|bookmark] <filesystem|volume|snapshot>
        send [-nvPe] -t <receive_resume_token>
        receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ...
            <filesystem|volume|snapshot>
        receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ...
            [-d | -e] <filesystem>
        receive -A <filesystem|volume>

        allow <filesystem|volume>
        allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
            <filesystem|volume>
        allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
        allow -c <perm|@setname>[,...] <filesystem|volume>
        allow -s @setname <perm|@setname>[,...] <filesystem|volume>

        unallow [-rldug] <"everyone"|user|group>[,...]
            [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

        hold [-r] <tag> <snapshot> ...
        holds [-r] <snapshot> ...
        release [-r] <tag> <snapshot> ...
        diff [-FHt] <snapshot> [snapshot|filesystem]

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow

 

Thanks!

How can I run "zfs status -x" to see the status of the pool(s)? I swear this worked at one point, but now it throws an error that it doesn't know what "status" is as a command:
 
root@NAS01:~# zfs status -xunrecognized command 'status'usage: zfs command args ...where 'command' is one of the following:       create [-p] [-o property=value] ...        create [-ps] [-b blocksize] [-o property=value] ... -V        destroy [-fnpRrv]        destroy [-dnpRrv] @[%][,...]       destroy #       snapshot|snap [-r] [-o property=value] ... @ ...       rollback [-rRf]        clone [-p] [-o property=value] ...        promote        rename [-f]        rename [-f] -p        rename -r        bookmark        list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...           [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...       set  ...  ...       get [-rHp] [-d max] [-o "all" | field[,...]]           [-t type[,...]] [-s source[,...]][filesystem|volume|snapshot|bookmark] ...       inherit [-rS]  ...       upgrade [-v]       upgrade [-r] [-V version]        userspace [-Hinp] [-o field[,...]] [-s field] ...           [-S field] ... [-t type[,...]]        groupspace [-Hinp] [-o field[,...]] [-s field] ...           [-S field] ... [-t type[,...]]        mount       mount [-vO] [-o opts]        unmount [-f]        share        unshare        send [-DnPpRvLec] [-[i|I] snapshot]        send [-Lec] [-i snapshot|bookmark]        send [-nvPe] -t        receive [-vnsFu] [-o =] ... [-x ] ...       receive [-vnsFu] [-o =] ... [-x ] ...           [-d | -e]        receive -A        allow        allow [-ldug] [,...] [,...]       allow [-ld] -e [,...]        allow -c [,...]        allow -s @setname [,...]        unallow [-rldug] [,...]           [[,...]]        unallow [-rld] -e [[,...]]        unallow [-r] -c [[,...]]        unallow [-r] -s @setname [[,...]]        hold [-r]  ...       holds [-r]  ...       release [-r]  ...       diff [-FHt]  [snapshot|filesystem]Each dataset is of the form: pool/[dataset/]*dataset[@name]For the property list, run: zfs set|getFor the delegated permission list, run: zfs allow|unallow

 
Thanks!


zpool status -x :)


Sent from my iPhone using Tapatalk
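
(For reference, on a healthy system that command prints just one line, per the zpool man page:)

root@NAS01:~# zpool status -x
all pools are healthy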


Update for 6.7.0 soon? Doesn't look like there has been a ZoL update since 0.7.12 in November. :(

Update for 6.7.0 soon? Doesn't look like there has been a ZoL update since 0.7.12 in November.


I was going to wait for 6.7 final, but I can build for the RC if there is a need for that.

6.6.7 works fine since it was just an update for Docker that did not change the kernel.


Sent from my iPhone using Tapatalk


Hi guys, need some help.

I set up zfs-zed using this post.

Zed is working fine, but now my syslog is full of this:

 

Mar 6 07:41:54 unRaid zed[3601]: Invoking "all-syslog.sh" eid=82 pid=25649
Mar 6 07:41:54 unRaid zed: eid=82 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:41:54 unRaid zed[3601]: Finished "all-syslog.sh" eid=82 pid=25649 exit=0
Mar 6 07:46:57 unRaid zed[3601]: Invoking "all-syslog.sh" eid=83 pid=29036
Mar 6 07:46:57 unRaid zed: eid=83 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:46:57 unRaid zed[3601]: Finished "all-syslog.sh" eid=83 pid=29036 exit=0
Mar 6 07:52:00 unRaid zed[3601]: Invoking "all-syslog.sh" eid=84 pid=32653
Mar 6 07:52:00 unRaid zed: eid=84 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:52:00 unRaid zed[3601]: Finished "all-syslog.sh" eid=84 pid=32653 exit=0
Mar 6 07:57:03 unRaid zed[3601]: Invoking "all-syslog.sh" eid=85 pid=4478
Mar 6 07:57:03 unRaid zed: eid=85 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 07:57:03 unRaid zed[3601]: Finished "all-syslog.sh" eid=85 pid=4478 exit=0
Mar 6 08:02:05 unRaid zed[3601]: Invoking "all-syslog.sh" eid=86 pid=8041
Mar 6 08:02:05 unRaid zed: eid=86 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:02:05 unRaid zed[3601]: Finished "all-syslog.sh" eid=86 pid=8041 exit=0
Mar 6 08:07:08 unRaid zed[3601]: Invoking "all-syslog.sh" eid=87 pid=10773
Mar 6 08:07:08 unRaid zed: eid=87 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:07:08 unRaid zed[3601]: Finished "all-syslog.sh" eid=87 pid=10773 exit=0
Mar 6 08:12:11 unRaid zed[3601]: Invoking "all-syslog.sh" eid=88 pid=13372
Mar 6 08:12:11 unRaid zed: eid=88 class=config_sync pool_guid=0xD30AFB4571F3B450 
Mar 6 08:12:11 unRaid zed[3601]: Finished "all-syslog.sh" eid=88 pid=13372 exit=0

 

Please help me stop zed from spamming the syslog.
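
(A sketch of one way to quiet this, assuming the stock OpenZFS layout under /etc/zfs/zed.d, where the all-syslog.sh hook is what writes a line for every event; the path may differ on unRAID:)

rm /etc/zfs/zed.d/all-syslog.sh

Then restart zed however you start it on your setup. Newer ZFS releases also have a ZED_SYSLOG_SUBCLASS_EXCLUDE setting in zed.rc, but that may not exist in the 0.7.x build used here.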


Hi there!

 

I took the plunge and upgraded to 6.7.0 Stable today, and the plugin promptly let me know that the kernel was unsupported.

 

Do we have an update in the works soon?

Thank you so much for all you've done already!

Hi there!
 
I took the plunge and upgraded to 6.7.0 Stable today, and the plugin promptly let me know that the kernel was unsupported.
 
Do we have an update in the works soon?

Thank you so much for all you've done already!

Yeah, I will push an update in the next few days.


Sent from my iPhone using Tapatalk

10 hours ago, steini84 said:


Yeah, I will push an update in the next few days.


Sent from my iPhone using Tapatalk

Awesome, thanks much!

 

Last night I did start running through build.sh, updating it with new package pointers, and started to build it myself off a fork.

 

I stopped once I hit an error with Bison not being installed; it halted the make process midway, and I didn't have time to finish wrapping up.

 

I was trying to self-document the process as I went, but do you know of any other gotchas or requirements that might pop up as I go along in the future?


Finally updated for 6.7.0 - sorry for the delay, but I was traveling with little access to a computer.

 

@voltaic there is always something that comes up, but usually it's updating the links and then finding new dependencies that might be needed with the updates. I don't have a set way of doing it; I see it more like a puzzle that needs to be solved.


Updated for 6.7.1 

 

Also, the plugin now uses ZFS 0.8.1 (0.8.0 for unRAID 6.7.0), which has some great updates: ZFS-On-Linux-0.8-Released

 

To upgrade a pool, run the following, but be aware that you cannot go back:

zpool upgrade POOLNAME
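
(To first see which pools have new features available, run it with no pool name; per the zpool man page this lists pools that don't have all supported features enabled:)

zpool upgrade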

 



Has anyone played with zvols here?

 

I've been toying with them and using them as hard disks in virtual machines; however, on reboot of unRAID they don't populate /dev/zvol/<poolname> like they should. A zvol will show up properly when you first create it, but not after a reboot.

 

# zfs create -V <size M/G/T> -s <poolname>/<zvolname>

 

then run ls -lh /dev/zvol/<poolname>

 

It will show the name of your zvol there, which you can use when assigning it to a VM. However, on reboot that directory doesn't get created/populated.
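
(A guess at a workaround, assuming the /dev/zvol symlinks are created by udev via the stock 60-zvol.rules shipped with ZFS: re-trigger udev once the pool is imported at boot, e.g. from the go file:)

udevadm trigger --subsystem-match=block
udevadm settle

(If that rules file didn't make it into the plugin package, the zvols should still exist as /dev/zd* nodes and can be referenced directly.)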

