ZFS plugin for unRAID


steini84

Recommended Posts

So 6.8 rc1 is out. I wouldn't normally ask, but given I've just discovered ZFS on unraid and the whole 6.7 series has performance issues in the array, what are the chances of compiling ZFS for 6.8 rc1?
 

Updated to 6.8 rc1 just for you. Just PM me if I miss the next RC releases and I can compile.


Sent from my iPhone using Tapatalk
Link to comment

Wow, fantastic, thank you!  This is like Christmas!

 

I've just installed my first ZFS mirror.  I take it I can just upgrade to the new unraid version and the ZFS plugin handles that automatically?

 

I've just done my first 2x8TB HDDs:

# zpool create -f -m /mnt/data data mirror ata-ST8000NM0055-1RM112_ZA170J50 ata-ST8000NM0055-1RM112_ZA1740ET

# zfs set compression=lz4 data
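
(For anyone following along, a couple of read-only commands will confirm the result; these are standard ZFS tooling, run against the same pool name as above:

# zpool status data
# zfs get compression data

zpool status should show both by-id devices ONLINE under a single mirror vdev, and the compression property should read lz4 with SOURCE local.)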

 

Also, I discovered I had to stop the array to do the above, otherwise it said:

"the kernel failed to rescan the partition table: 16, cannot label 'sdd': try using parted(8) and then provide a specific slice: -1"

 

Will add my VM SSD later to do send/receive to these drives, plus snapshots, which will be very nice.
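
(The usual pattern for that, sketched with hypothetical dataset names - the VM pool here is called ssd purely for illustration - is snapshot first, then pipe the stream into the mirror:

# zfs snapshot ssd/vms@backup1
# zfs send ssd/vms@backup1 | zfs receive data/vms-backup

Subsequent runs can send only the delta with zfs send -i backup1 ssd/vms@backup2 | zfs receive data/vms-backup.)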

 

Does this look OK?

 

Thanks.

Edited by Marshalleq
Link to comment
Wow, fantastic, thank you!  This is like Christmas!
 
I've just installed my first ZFS mirror.  I take it I can just upgrade to the new unraid version and the ZFS plugin handles that automatically?
 
I've just done my first 2x8TB HDDs:
# zpool create -f -m /mnt/data data mirror ata-ST8000NM0055-1RM112_ZA170J50 ata-ST8000NM0055-1RM112_ZA1740ET
# zfs set compression=lz4 data
 
Also, I discovered I had to stop the array to do the above, otherwise it said:
"the kernel failed to rescan the partition table: 16, cannot label 'sdd': try using parted(8) and then provide a specific slice: -1"
 
Will add my VM SSD later to do send/receive to these drives, plus snapshots, which will be very nice.
 
Does this look OK?
 
Thanks.

I try not to give any general zfs advice or support since there are much better forums for that, but yeah that looks fine to me.

Just update the plugin before you upgrade to the RC and then you are good to go.





Sent from my iPhone using Tapatalk
Link to comment
7 minutes ago, Marshalleq said:

OK so leaving the ZFS specific questions out - Does anyone do anything special around how it shows up in unassigned devices?  E.g. stub it or something?  Thanks.

Nope, unassigned devices is not involved in ZFS; like any other non-array drive, it just shows the drives there. Drives involved in ZFS pools do show up as zfs_member in unassigned devices, but don't touch them there or use the mount button. It's all command line, baby ✌️
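
(You can see that membership from the shell too; blkid is standard and read-only, using sdd from the earlier error message purely as an example:

# blkid /dev/sdd1

It prints TYPE="zfs_member" for a partition that belongs to a pool, which is exactly what unassigned devices is reporting.)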

  • Like 1
Link to comment

PS: no need to stop the array for any ZFS stuff. 

Typically I wipe the drives first with a dummy unassigned devices format if they protest when trying to create a pool. 

Could be some remnants of earlier use on them that get you that message.

But stopping the array is not needed, as these drives are not part of the array.

Be very careful, however, to select the right drive IDs so as not to accidentally destroy the unraid array drives.
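
If a dummy format via unassigned devices is inconvenient, clearing the leftover signatures from the command line usually works too. A minimal sketch - the by-id path is a placeholder, and wipefs is destructive, so triple-check the ID first:

# wipefs -a /dev/disk/by-id/ata-EXAMPLE-DISK

wipefs -a removes old filesystem and partition-table signatures, after which zpool create should stop complaining about the drive.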

Link to comment
3 minutes ago, glennv said:

very nice gesture bro ✌️✌️✌️✌️✌️✌️✌️

Yes very.

5 minutes ago, glennv said:

PS: no need to stop the array for any ZFS stuff. 

Typically I wipe the drives first with a dummy unassigned devices format if they protest when trying to create a pool. 

Could be some remnants of earlier use on them that get you that message.

But stopping the array is not needed, as these drives are not part of the array.

Be very careful, however, to select the right drive IDs so as not to accidentally destroy the unraid array drives.

Yeah, I did some googling - the consensus was that the drives had remnants of the unraid array, and that caused the error. By fluke I discovered stopping the array resolved it, and when I tried again the same thing happened - so I'll probably have to zero the drives (fdisk didn't work). They were originally encrypted too, so that could be a factor. And I was reluctantly stuck on 6.6.7 until the performance issues were resolved, so it could be that. But Christmas has definitely come early: 6.8 RC1 and ZFS. I had tried to find a way to do two arrays before - I just didn't think the second one would be the awesome ZFS!

 

Edit: I take it the FreeNAS rule of 8GB RAM + 1GB per TB does not apply to this plugin, as the 8GB would be mostly covering system stuff in FreeNAS?  So just 1GB per TB?

Edited by Marshalleq
Link to comment
1 hour ago, Marshalleq said:

Edit: I take it the FreeNAS rule of 8GB RAM + 1GB per TB does not apply to this plugin, as the 8GB would be mostly covering system stuff in FreeNAS?  So just 1GB per TB?

That's an often-quoted rule that hasn't really applied for some time. They now mention 8GB is fine for pools up to around 24TB, unless you use deduplication, and part of that RAM is required by FreeNAS itself, not ZFS. The ZFS plugin should run fine with 4GB for the same pool size. Of course, ZFS uses RAM for read cache, so the more RAM you can throw at it, the faster it will be for frequently used data.
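
If you do want to keep that read cache (the ARC) from eating into RAM reserved for VMs and Docker, it can be capped. A hedged sketch, assuming the standard ZFS-on-Linux module parameter at its usual sysfs path:

# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

That limits the ARC to 4 GiB immediately, but the setting does not survive a reboot, so on unraid it would have to be reapplied from a startup script such as the go file.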

  • Like 1
Link to comment

I had some problems with autotrim (high CPU load), so I personally have it disabled for now.

 

But the property is at least written to the pool:

root@Tower:~# zpool set autotrim=off test
root@Tower:~# zpool get all test | grep "trim\|NAME"
NAME  PROPERTY                       VALUE                          SOURCE
test  autotrim                       off                            default
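
With autotrim off, a periodic manual trim is the usual substitute; both commands below are standard ZFS 0.8+ tooling, run against the same test pool:

# zpool trim test
# zpool status -t test

zpool trim kicks off a one-time TRIM of the pool's devices, and status -t shows per-device trim progress, so it can simply be run from cron instead of leaving autotrim enabled.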

 

Link to comment

Hello again all, I've been trying to get autosnapshot working as per page 4 of this post, but have had no end of issues.  It may be the new version of unraid; I'm not sure.

 

For example, the update_cron command doesn't seem to actually update cron, i.e. /etc/cron.anything or crontab -e. I didn't know unraid could support custom cron; I thought that's why the user scripts plugin was made. Manually running the cron script says that the file it's referencing doesn't exist, even though I can see quite clearly that it does exist. So I suppose that's due to permissions, which are -rw-------  1 root root 16937 Oct 19 12:17 zfs-auto-snapshot.sh. However, I can't actually change these permissions as root; not even chmod 777 works. Is this normal? I've not noticed this issue before, but to be honest I haven't played around in the scripts directory before.
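
(A hedged aside on update_cron, based on how the dynamix plugin system normally works: it rebuilds /etc/cron.d/root from *.cron fragments found under /boot/config/plugins/, rather than editing crontab -e, so custom entries usually land via a fragment file. The path below is hypothetical:

# hypothetical file: /boot/config/plugins/zfs-auto-snapshot/auto-snapshot.cron
0 * * * * ID=zfs-auto-snapshot-hourly bash /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=01 --keep=24

After creating it, running update_cron and then checking /etc/cron.d/root shows whether the entry was picked up.)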

 

On the command line I assume the following is acceptable: ID=zfs-auto-snapshot-weekly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8, but I get permission denied, which aligns with the above.
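
(Likely culprit for the permission errors: /boot is a FAT32 flash device, and Linux cannot change file permissions on FAT32, which is why chmod as root typically has no effect. The usual workaround is to invoke the interpreter directly so the execute bit is never consulted:

# ID=zfs-auto-snapshot-weekly bash /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8

or to copy the script off the flash drive at boot, e.g. to /usr/local/sbin, and chmod it there.)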

 

I'm assuming you're going to tell me I need to fix the permissions issue - but thought I'd check in case I'm missing something obvious.

 

I see 6.8 RC3 is already out, which was fast!

 

Thanks.

Link to comment
Hello again all, I've been trying to get autosnapshot working as per page 4 of this post, but have had no end of issues.  It may be the new version of unraid; I'm not sure.
 
For example, the update_cron command doesn't seem to actually update cron, i.e. /etc/cron.anything or crontab -e. I didn't know unraid could support custom cron; I thought that's why the user scripts plugin was made. Manually running the cron script says that the file it's referencing doesn't exist, even though I can see quite clearly that it does exist. So I suppose that's due to permissions, which are -rw-------  1 root root 16937 Oct 19 12:17 zfs-auto-snapshot.sh. However, I can't actually change these permissions as root; not even chmod 777 works. Is this normal? I've not noticed this issue before, but to be honest I haven't played around in the scripts directory before.
 
On the command line I assume the following is acceptable: ID=zfs-auto-snapshot-weekly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8, but I get permission denied, which aligns with the above.
 
I'm assuming you're going to tell me I need to fix the permissions issue - but thought I'd check in case I'm missing something obvious.
 
I see 6.8 RC3 is already out, which was fast!
 
Thanks.

I can build for rc3 tomorrow if there is a new kernel. Also, I have made a plug-in for ZnapZend, which is awesome! I can make that available too :)
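
For the curious, ZnapZend stores its backup plans in ZFS properties and is configured with znapzendzetup. A sketch based on the upstream README, with hypothetical dataset names and retention plans:

# znapzendzetup create SRC '7d=>1h,30d=>4h,90d=>1d' data/vms DST:a '90d=>1d,1y=>1w' backup/vms

That keeps hourly snapshots for 7 days, 4-hourly for 30 days and daily for 90 days on data/vms, and replicates daily-for-90-days plus weekly-for-a-year copies to backup/vms.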


Sent from my iPhone using Tapatalk
  • Like 1
Link to comment
2 hours ago, Marshalleq said:
So does that mean I can upgrade unraid to rc3 and ZFS will keep working, or does the same kernel need to be assigned to the RC3 version of zfs so that it activates post upgrade?


Just update, it's the same kernel

Edited by steini84
Link to comment
