ZFS plugin for unRAID


steini84

Recommended Posts

39 minutes ago, jortan said:

The DDT needs to be stored within your pool and constantly updated for every block of data that you write to the pool.  Every write involves more writes to the DDT (either new hashes or references to existing hashes)

Yes and that's one reason (I think) why my "real world" write performance to my test pool (after the ARC is full) degrades the bigger the DDT gets.

Do you know if the DDT will be moved automatically if I add a special dedup vdev now to my existing pool? And back to the pool if I remove this dedup vdev later?

Link to comment
7 hours ago, subivoodoo said:

Yes and that's one reason (I think) why my "real world" write performance to my test pool (after the ARC is full) degrades the bigger the DDT gets.

 

More deduped data also means more compute resources to compare each write to hashes of all the existing data (to see if it can be deduplicated)

 

7 hours ago, subivoodoo said:

Do you know if the DDT will be moved automatically if I add a special dedup vdev now to my existing pool?

 

I don't know for sure, but I suspect not - in the same way that adding a normal vdev does not cause ZFS to redistribute your data or metadata.

 

7 hours ago, subivoodoo said:

And back to the pool if I remove this dedup vdev later?

 

It should by design, but it also might just break (situation may have improved in the last 11 months?)

Link to comment
10 hours ago, subivoodoo said:

Do you know if the DDT will be moved automatically if I add a special dedup vdev now to my existing pool? And back to the pool if I remove this dedup vdev later?

Hey, I went through a few of these tests.  You can add the special vdev at any time, but you do have to recopy the data.  A quick way I ended up figuring out to do that was to rename a dataset to a new name, then send/receive it back to the original name, then delete the renamed set.  There's also a rebalance script I could dig up, but there are caveats to that, so I ended up just doing the rename.
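In case it helps, the rename and send/receive dance looks roughly like this for a single dataset - the pool/dataset names are just placeholders, so double-check everything (especially the destroy) before running it:

zfs rename tank/data tank/data_old
zfs snapshot tank/data_old@move
zfs send tank/data_old@move | zfs receive tank/data
zfs destroy -r tank/data_old

After the receive, the new copy of the data is written with the special vdev in place, which is the whole point of the exercise.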

 

I've read that you can take a special vdev out again in certain circumstances, but to be honest it was not very clear and sounded scary - most people in that discussion concluded it wasn't for them.  Remember the special vdev holds all the information about where the files are stored (I guess it's like ZFS's FAT), so if it dies, the array is effectively dead because it doesn't know how to find the files.  Though again, only the metadata for files written or modified since the special vdev was added lives on it, so I'm not sure if the whole pool would die or not.

 

EDIT: From that other thread: "Supposedly, special vdevs can be removed from a pool, IF the special and regular vdevs use the same ashift, and if everything is mirrors. So it won't work if you have raidz vdevs or mixed ashift."
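For reference, the removal itself would be attempted with something like the command below, where mirror-1 stands for whatever name zpool status shows for your special vdev - whether it actually succeeds depends on the ashift/mirror constraints quoted above:

zpool remove YOURPOOLNAME mirror-1
zpool status YOURPOOLNAME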

 

Honestly, the best thread I found on it is below, with a fantastic run of comments and questions at the bottom - worth reading if you're considering doing it.

 

The opening paragraph from the article:

Introduction

ZFS Allocation Classes: It isn’t storage tiers or caching, but gosh darn it, you can really REALLY speed up your zfs pool.

 

From the manual:

Special Allocation Class
The allocations in the special class are dedicated to specific block types. By default this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks.

 

Link

https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954

 

Happy reading!

Edited by Marshalleq
Link to comment

Randomly, I came across an openzfs man page which lists a device type specifically for storing dedup tables.  I was not aware this device type was available.  So this would, I guess, let you split the dedup table out from the other metadata and small file blocks that come with a special vdev, for those that want to do that - though I suspect that's pretty niche, as the special device type should offer additional performance improvements in most cases.
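For anyone curious, adding either class to an existing pool looks roughly like this (device names are placeholders, and you really want these mirrored, because losing a special vdev effectively loses the pool):

zpool add YOURPOOLNAME special mirror /dev/sdX /dev/sdY
zpool add YOURPOOLNAME dedup mirror /dev/sdX /dev/sdY

The first form takes metadata, small blocks (if configured) and the DDT; the second is only for the dedup tables.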

 

ZFS is awesome. :)

 

 

Link to comment


And now my speed comparison: ZFS/dedup/iSCSI on TrueNAS (as a VM on Unraid) vs. ZFS/dedup/iSCSI on native Unraid... I mean on the awesome community plugins on Unraid 😇 !!! Thanks a lot at this point for all your work.

 

As I hoped, the performance is (mostly) better. It could be because no virtio layer is needed (I don't know how well the FreeBSD/TrueNAS virtio driver performs). The "real world" 10GB movie file copy doesn't drop as much and runs noticeably faster on "native" Unraid.

 

Conclusion:

Will I use iSCSI => yes, the performance for games over a 10G network is great; load times are normally just a few seconds longer than local NVMe, which is not noticeable if you have to watch intros...

Will I use ZFS => yes!

Will I use dedup => it depends... the dedup ratio could be better for this, and the drawback of slower performance as more data piles up in the future is a big point (a quick way to check the achieved ratio is sketched below). I have another idea to test...
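If you want to see what dedup is actually buying you on a pool, the achieved ratio is exposed as a pool property (pool name is a placeholder):

zpool get dedupratio YOURPOOLNAME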

 

Next idea:

Prepare one game lib as a zvol with really everything installed, then take a snapshot/clone of it for every client. The clones should only use disk space for the changed data, and no additional RAM is needed for a dedup table. With such a setup I only need to update the initial/primary "game lib zvol", and resetting/redoing the clones via a user-script should be possible - roughly as sketched below.
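Roughly, what I have in mind (pool/zvol names are placeholders):

zfs snapshot YOURPOOLNAME/gamelib@base
zfs clone -p YOURPOOLNAME/gamelib@base YOURPOOLNAME/gamelib-client1
zfs clone -p YOURPOOLNAME/gamelib@base YOURPOOLNAME/gamelib-client2

After a game update I would take a fresh snapshot and redo the clones from it.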
 

20220116-ZFS-Dedup-iSCSI-toUnraid-SpeedTests.png

20220116-ZFS-Dedup-iSCSI-toUnraid-BigFileCopy.png

Edited by subivoodoo
typos and bad english
  • Thanks 1
Link to comment
7 minutes ago, ich777 said:

Today I released, in collaboration with @steini84, an update of the ZFS plugin (v2.0.0) to modernize the plugin, switch from unRAID version detection to kernel version detection and give the plugin a general overhaul.

 

When you update the plugin from v1.2.2 to v2.0.0, the plugin will delete the "old" ZFS package and pull down the new ZFS package (about 45MB).

Please wait until the download is finished and the "DONE" button is displayed; please don't click the red "X" button!

After it finishes you can use your server and ZFS as usual; you don't need to take any further steps like rebooting.

 

The new version of the plugin also includes the Plugin Update Helper, which downloads plugin packages before you reboot when you are upgrading your unRAID version and notifies you when it's safe to reboot:

grafik.png.cb5d8b6b7189de6aad7305bd2d6ec769.png

 

 

The new version of the plugin will also check on each boot whether a newer ZFS version is available and, if so, download and install it (the update check is enabled by default).

If you want to disable this feature simply run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have already disabled this feature and want to enable it again, run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature needs an active internet connection on boot.

If you run, for example, AdGuard/PiHole/pfSense/... on unRAID, it is very likely that you have no active internet connection at boot, so the update check will fail and the plugin will fall back to installing the locally available ZFS package.
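If you are not sure which way the setting is currently set, you can simply look into the settings file (same path as in the commands above):

grep check_for_updates /boot/config/plugins/unRAID6-ZFS/settings.cfg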

 

 

It is now also possible to install unstable ZFS packages if unstable packages are available (this is turned off by default).

If you want to enable this feature simply run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have already enabled this feature and want to disable it again, run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature also needs an active internet connection on boot, just like the update check (if no unstable package is found, the plugin automatically sets this option back to false so that it no longer pulls unstable packages - unstable packages are generally not recommended).

 

 

Please also keep in mind that ZFS has to be compiled for every new unRAID version.

I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved.

 

Currently the process is fully automated for all plugins that need packages for each individual kernel version.

 

The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or an error occurred during compilation.

If you get an error from the Plugin Update Helper, I would recommend creating a post here and not rebooting yet.

You have truly taken this plugin to the next level, and with the automatic builds it's as good as it gets until we get native ZFS on Unraid!

  • Like 6
  • Thanks 1
Link to comment

Hi

 

Is it possible that you guys introduced some nasty bugs with that update?
My system is not responding anymore, and every time I force it to reboot, loop2 starts to hang at 100% CPU usage; docker.img is on my ZFS pool.

 

This started after I updated ZFS for unRAID to 2.0.0.

Edited by PyCoder
Link to comment
Hi
 
Is it possible that you guys introduced some nasty bugs with that update?
My system is not responding anymore, and every time I force it to reboot, loop2 starts to hang at 100% CPU usage; docker.img is on my ZFS pool.
 
This started after I updated ZFS for unRAID to 2.0.0.

You are probably storing docker.img on the zfs pool and running the latest RC of unraid:
32bf98e753ab053fdedf0902b0b3c880.jpg
You can also try using a folder instead of docker.img


Sent from my iPhone using Tapatalk
  • Like 1
  • Thanks 1
Link to comment

If there is any interest in the results of my "have fun with ZFS/iSCSI on Unraid" experiment for a shared game library... I've finished my tests and I will NOT use dedup. The real-world performance of copying hundreds of GB to my dedup-enabled test zvols via iSCSI (10G network) is horrible and took hours... compared to dedup-off, which took just 12 minutes for 520GB. The synthetic benchmarks are also better without dedup (see attached screenshots).

 

My second idea of "set up all games once and clone it" works great... now I need even less storage in my pool than with dedup enabled, because there is just one fully installed game library present; all the others are clones with just a few MB of differing files. The performance is better and I don't need extra RAM for the DDT. For the update process I have a little script that removes the iSCSI mapping/backstore, creates a new snapshot/clone and re-creates the iSCSI mapping/backstore... so a game install or update is as simple as:

- do it on the main gaming rig

- run a user-script

- bam, a few seconds later all my kids get new games 😁

 

=> My next project based on this setup is testing GPU-P and cloning another 2-3 game libraries... which is now done in seconds and does not need any additional storage!!!

 

If someone needs the commands to create iSCSI backstores for zvols or something... I can write a little tutorial.

 

Benchmarks with dedup ON vs. OFF over iSCSI (better write performance, maxing out my 10G network; pool with 2 cheap SATA consumer SSDs striped, 24GB for ZFS ARC):

20220124_compare_zfs_zvol_over_iscsi_dedup-ON-OFF.png

Edited by subivoodoo
  • Like 2
Link to comment

So I ended up using the 'hybrid mode', as @SpaceInvaderOne calls it, to create my pool, datasets and zvols and import/export them to Unraid:

 

Another help for ZFS+iSCSI is this post here:

https://forum.level1techs.com/t/guide-iscsi-target-server-on-linux-with-zfs-for-windows-initiator-clients/174803

 

But creating a zvol (which is more or less a 'disk within a file stored on your ZFS pool') is as easy as:

zfs create -s -V 100G -o volblocksize=4096 -o compression=lz4 YOURPOOLNAME/testzvol

-s = sparse/thin provisioning, so only the space actually used within the zvol is allocated

-V 100G = 100GB size
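To double-check the zvol after creation (pool name is the same placeholder as above):

zfs list -t volume
ls -l /dev/zvol/YOURPOOLNAME/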

 

All created zvols are listed under /dev/zvol/YOURPOOLNAME/*** (the example above is therefore /dev/zvol/YOURPOOLNAME/testzvol) and show up there again after a reboot. The zvols are also shown by zfs list, and it's possible to create zvols within other datasets if you need to.  Such a zvol can be used as a VM disk by just adding a manual entry like this:

grafik.png.2e097c8df71b73d97b9f68a5178e749b.png

 

or in my case use it together with the iSCSI target plugin by @SimonF and @ich777:

 

 

You just need to create the backstore manually (at the moment, I think 😉) with the following commands:

 

targetcli

/backstores/block create name=testzvol dev=/dev/zvol/YOURPOOLNAME/testzvol
cd /backstores/block/testzvol/
set attribute block_size=4096
set attribute emulate_tpu=1
set attribute is_nonrot=1
cd /
exit

 

The rest can be configured within the iSCSI plugin; you can just pick the manually created backstore there:

 

grafik.thumb.png.b502a36670e5398ccff5174c2fcd45c4.png

 

If you don't need it any longer, remove the initiator mapping and delete the backstore entry (note that the zvol still exists):

 

targetcli

cd /backstores/block/
delete testzvol
cd /
exit

 

And last but not least, how to clone an existing zvol and/or delete it:

 

zfs snapshot YOURPOOLNAME/testzvol@yoursnapshotname
zfs clone -p YOURPOOLNAME/testzvol@yoursnapshotname YOURPOOLNAME/testzvol.myclone

zfs destroy YOURPOOLNAME/testzvol.myclone
zfs destroy YOURPOOLNAME/testzvol@yoursnapshotname
zfs destroy YOURPOOLNAME/testzvol
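One caveat: ZFS won't let you destroy a snapshot while a clone still depends on it. If you ever want to keep a clone but get rid of the original zvol and its snapshot, zfs promote turns the dependency around first (same placeholder names as above):

zfs promote YOURPOOLNAME/testzvol.myclone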

 

Edited by subivoodoo
  • Like 1
  • Thanks 1
Link to comment

I'm trying to mount an existing ZFS drive that I pulled from a FreeNAS setup (not RAID). I set up a ZFS pool pointing at the drive, thinking it would mount the contents of the drive under the pool that I just created. It looks more like it replaced the existing pool with a new one that thinks the drive is empty. Any advice on recovering and mounting the drive?

Link to comment
1 hour ago, colbert said:

I'm trying to mount an existing ZFS drive that I pulled from a FreeNAS setup (not RAID). I set up a ZFS pool pointing at the drive, thinking it would mount the contents of the drive under the pool that I just created. It looks more like it replaced the existing pool with a new one that thinks the drive is empty. Any advice on recovering and mounting the drive?

 

zpool import is what you wanted here, not zpool create

 

I suggest that before you do anything else, you export the pool (zpool export) or just disconnect the drive to prevent any further writes, and then consider your options (but I'm not sure if there are any).
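Purely as a sketch of what you could try (pool names here are placeholders, and there is no guarantee the old labels survived the new pool creation):

zpool export NEWPOOL
zpool import
zpool import -D
zpool import -D -f OLDPOOL

The plain import lists any pools ZFS can still find on attached devices, -D also lists pools marked as destroyed, and the last line attempts the import if the old FreeNAS pool shows up at all.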

Edited by jortan
Link to comment

Updated the plugin to v2.1.0: a scrub of every ZFS pool is now executed after an unclean shutdown of the system, and you will be notified when the check for each individual pool has finished and whether the pool is healthy or degraded. You will also be notified if you have set up individual scrubs for your pools.

 

If you want to disable this feature execute this from the command line:

sed -i '/unclean_shutdown_scrub=/c\unclean_shutdown_scrub=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

and if you want to enable it if you have already disabled it:

sed -i '/unclean_shutdown_scrub=/c\unclean_shutdown_scrub=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

This feature is turned on by default.

 

The notifications will look something like this:

grafik.png.55e82053756528a32fa4460df1f37baf.png.6fa9145accc1cde559b0a2c0d3582f72.png

 

Please ignore the message if you get one after updating the plugin saying that a scrub has finished for your pools.
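For reference, you can of course also start or check a scrub manually at any time from the terminal:

zpool scrub YOURPOOLNAME
zpool status YOURPOOLNAME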

  • Like 4
  • Thanks 1
Link to comment
22 hours ago, ich777 said:

Updated the plugin to v2.1.0: a scrub of every ZFS pool is now executed after an unclean shutdown of the system, and you will be notified when the check for each individual pool has finished and whether the pool is healthy or degraded. You will also be notified if you have set up individual scrubs for your pools.

 

If you want to disable this feature execute this from the command line:

sed -i '/unclean_shutdown_scrub=/c\unclean_shutdown_scrub=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

and if you want to enable it if you have already disabled it:

sed -i '/unclean_shutdown_scrub=/c\unclean_shutdown_scrub=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

This feature is turned on by default.

 

The notifications will look something like this:

grafik.png.55e82053756528a32fa4460df1f37baf.png.6fa9145accc1cde559b0a2c0d3582f72.png

 

Please ignore the message if you get one after updating the plugin saying that a scrub has finished for your pools.

Why can't I display the dataset information in the webui after I updated to this version?

There is no response when I click the button, and no useful information in the webui logs.

image.thumb.png.f1741bb9417b9ab715bb85892c8e7eae.png

image.png.9dfd36b654f1e1cbd83d7988b4601df1.png

But the zpool actually contains not only datasets but also snapshots.

Edited by diannao
Link to comment
20 minutes ago, diannao said:

Why can't I display the dataset information in the webui after I updated to this version?

I don't think that this update has anything to do with that since nothing changed that would prevent this.

 

20 minutes ago, diannao said:

There is no response when I click the button, and no useful information in the webui logs.

Is this the plugin from @Iker? Maybe he can help with that.

Link to comment
2 hours ago, diannao said:

Why can't I display the dataset information in the webui after I updated to this version?

There is no response when I click the button, and no useful information in the webui logs.

image.thumb.png.f1741bb9417b9ab715bb85892c8e7eae.png

image.png.9dfd36b654f1e1cbd83d7988b4601df1.png

But the zpool actually contains not only datasets but also snapshots.

 

That's weird, I already updated ZFS to the latest version and never saw that problem; could you please share your ZFS & ZFS Master plugin versions?

Link to comment
52 minutes ago, diannao said:

I've certainly tried restarting the server, but it still does not work

This is really strange since, as already said above, nothing was changed that would prevent this. The kernel module and the applications are completely the same; there were only a few lines added, and ZED now runs in the background, but that should not harm the functionality of @Iker's plugin.

Link to comment
49 minutes ago, ich777 said:

This is really strange since, as already said above, nothing was changed that would prevent this. The kernel module and the applications are completely the same; there were only a few lines added, and ZED now runs in the background, but that should not harm the functionality of @Iker's plugin.

Works OK on my test system and shows datasets and the correct snapshot count.

Edited by SimonF
  • Like 1
Link to comment
