ZFS plugin for unRAID


steini84

Recommended Posts

2 hours ago, nathan47 said:

I've tried resilvering a drive in my pool twice now, and this is the second time I've run into this error:

 

Any ideas? I'd love to finish the resilver, but after this it seems that all I/O to the pool hangs, which ultimately leads to services and the server itself hanging.

 

Jan 10 20:53:59 TaylorPlex kernel: BUG: unable to handle page fault for address: 0000000000003da8
Jan 10 20:53:59 TaylorPlex kernel: #PF: supervisor write access in kernel mode
Jan 10 20:53:59 TaylorPlex kernel: #PF: error_code(0x0002) - not-present page

 

 

Potentially bad memory:

 

 

Link to comment
  • 2 weeks later...

Hello there, 

 

What do you commonly set the appdata and docker folder paths to?

 

I'm using an array with a single USB stick. 

I created two datasets 

 

zfs/System/appdata

zfs/System/docker

 

EDIT: I tried it this way and it leads to a crash/unresponsiveness in the GUI. What am I missing here?

 

Is this the normal way to go? 

 

Thanks in advance! 

Edited by orhaN_utanG
Link to comment
On 1/22/2023 at 2:17 PM, orhaN_utanG said:

Hello there, 

 

What do you commonly set the appdata and docker folder paths to?

 

I'm using an array with a single USB stick. 

I created two datasets 

 

zfs/System/appdata

zfs/System/docker

 

EDIT: I tried it this way and it leads to a crash/unresponsiveness in the GUI. What am I missing here?

 

Is this the normal way to go? 

 

Thanks in advance! 

 

 

Hi

 

I use any old HDD as disk1, format it (xfs) and start the array (without a drive you cannot start the array).

Now I mount my ZFS pool over disk1 with mount -R /zfspool /mnt/disk1, so it shows 16TB in the pool (instead of the old 5TB) AND you can create shares in the GUI, including using the cache!

Unraid never knows about the underlying ZFS; it works great.

You have to install the User Scripts plugin and run mount -R /zfspool /mnt/disk1 every time the array has started.
The old drive can go into standby after 1h - it does not need to spin up.
I think you can use a USB stick as the dummy disk.

 

edit: mount -R /zfspool /mnt/disk1

 

-R is for all submounts/datasets!
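If you use the User Scripts plugin for this, a minimal script set to run "At Startup of Array" could look like the sketch below (the pool name zfspool and disk1 are just my example names, adjust to yours):

#!/bin/bash
# Overlay the ZFS pool on top of disk1 so Unraid shares see the ZFS data.
# Assumes the pool is already imported and mounted at /zfspool.
if mountpoint -q /mnt/disk1; then
    # -R (rbind) carries all child datasets/submounts along with the bind mount
    mount -R /zfspool /mnt/disk1
fi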

Edited by igi
Link to comment
I have been trying to build a custom version of this module from my git fork but have been unsuccessful. I need to do this because I want to import a zpool that was built on TrueNAS Scale with a newer version of ZFS (with the head_errlog feature flag). Is there a build script for this that I can adapt for this purpose? Thank you in advance!
Link to comment

There used to be a beta track for this plugin, if I recall. There was also the automatic builder community kernel script where you could just point it at the ZFS code. Not sure if that's still around. That thing was awesome, but I'm guessing the Unraid folks didn't like it much due to the perception that it's too hard to support.

Link to comment
21 minutes ago, Marshalleq said:

There used to be a beta track for this plugin, if I recall. There was also the automatic builder community kernel script where you could just point it at the ZFS code. Not sure if that's still around. That thing was awesome, but I'm guessing the Unraid folks didn't like it much due to the perception that it's too hard to support.

Sadly, the unraid-kernel-helper plugin seems to have been retired. I checked on GitHub and did not see any beta branches. To be fair, the plugin is up to date: most distros are shipping 2.1.7, and I think 2.1.9 just dropped a couple of days ago. The feature flag I need unfortunately requires the latest git, since it hasn't made it into the releases yet. I do not use that feature, but unfortunately my zpool is too big to destroy, and there is no way (that I know of) to downgrade an existing zpool to a previous version.

Link to comment
4 hours ago, theimmortal said:

version of ZFS

@Marshalleq & @BVD I haven't built any recent versions because ZFS will be integrated directly into Unraid.

 

However I can build it today.

 

@theimmortal I will let you know when the builds are uploaded. The only thing you have to do afterwards is reboot and make sure that you have an active internet connection on boot, because the plugin checks on boot and updates ZFS if a newer version is found.

  • Like 2
Link to comment

@theimmortal I've now compiled 2.1.8 and 2.1.9 for Unraid 6.11.5 and uploaded them.

Please reboot your server to install the update, but as said, make sure that you have an active internet connection on boot; a second reboot is also necessary if you do it as described above.

 

The easier way would be to uninstall the ZFS plugin, reinstall it from the CA App and reboot afterwards.

  • Like 2
Link to comment

@ich777 You are incredible! Not to seem totally ungrateful, but could you show me how you are building the modules? I unfortunately created this pool from git master last year, and it includes a feature flag that still has not made it into the official releases. I want to build a one-off, kept separate from what gets automatically pushed to the rest of the community, so I can import this pool, since the build would include "experimental" features that could have unknown adverse effects. I am also just interested in learning more about building packages for Unraid, and this provides a great opportunity for that.

Link to comment
5 hours ago, theimmortal said:

Not to seem totally ungrateful, but could you show me how you are building the modules?

Sure, look at the Git repository from @steini84, the build script should be there. Keep in mind that I build in a highly customized Docker container, but you can also build it on Unraid directly.

Also keep in mind that ZFS will be integrated into Unraid 6.12.0 anyway, and you will definitely run into issues when using a custom package on 6.12.0.

 

5 hours ago, theimmortal said:

I am also just interested in learning more about building packages for unraid and this provides a great opportunity for that.

May I ask which packages you are interested in? You have to compile the kernel for most packages first, and after that you can build modules or whatever you want. Also keep in mind that you have to build a plugin for the package too.

 

If you need anything compiled, as long as there is a need for it, I'm here to help. I can compile almost anything you want, since I have a completely automated build script that runs whenever a new Unraid version is released, regardless of whether it's stable, next or test.
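For reference, building OpenZFS from source follows the standard autotools flow from the upstream docs; this is only a rough sketch (the checkout target and kernel source path are placeholders, not my actual build container setup):

git clone https://github.com/openzfs/zfs.git
cd zfs
git checkout master                        # or a release tag like zfs-2.1.9
sh autogen.sh
./configure --with-linux=/usr/src/linux    # point this at the kernel sources matching your Unraid kernel
make -j$(nproc)
make install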

Link to comment
3 hours ago, ich777 said:

Sure, look at the Git repository from @steini84, the build script should be there. Keep in mind that I build in a highly customized Docker container, but you can also build it on Unraid directly. Also keep in mind that ZFS will be integrated into Unraid 6.12.0 anyway, and you will definitely run into issues when using a custom package on 6.12.0.

Once I get this pool imported, I'll be transferring the files to a new pool. It's almost 100TB, so it is going to take a bit of time to move everything off, destroy the old pool, recreate it as a second vdev in the new pool and then rebalance. My goal is to have all of that done before upgrading to 6.12.0. The new pool was created using ZFS 2.1.7, so hopefully it will work fine in 6.12.0.

 

Quote

May I ask in what packages are you interested? You have to compile the Kernel for most packages first and after that you can build modules or whatever you want. Keep also in mind that you have to build a plugin for the package too.

Nothing super critical, just something I wanted to understand how to do, so that if/when I run into a situation where I need a tool I cannot get in Community Apps, I have multiple avenues. One that I can think of is a plugin to change the default shell to fish and manage the config files.

Link to comment
7 hours ago, theimmortal said:

The new pool was created using ZFS 2.1.7, so hopefully it will work fine in 6.12.0.

If it was bare-bones ZFS it will work just fine; most users over here who are using the plugin don't even notice when a new version of ZFS is released, because the plugin updates everything on its own.

 

7 hours ago, theimmortal said:

Nothing super critical, just something I wanted to understand how to do it so that if/when I run into a situation where I need a tool that I cannot get in the Community Apps I have multiple avenues.

I would do a request post for such software somewhere on the forums since maintaining such things can be pretty tedious...

 

7 hours ago, theimmortal said:

One that I can think of is a plugin to change the default shell to fish

I think there is a plugin package of fish for Slackware out there somewhere, but anyway, I have a self-maintaining repository for un-get where you can install Slackware packages too, and I can add fish there if it is of interest.

 

But please keep in mind that Unraid is not a general-purpose server, it's an application server, and everything should be run in a Docker, VM or LXC container...

 

7 hours ago, theimmortal said:

manage the config files.

You can already do that with the Dynamix File Manager, which is available in the CA App.


Link to comment
On 1/30/2023 at 4:11 AM, BVD said:

Pretty sure it's @ich777 that maintains the build automation for it, unless I misread at some point?

You are absolutely correct. He took my manual build process and automated it so well that I have not had to think about it at all any more! He really took this plugin to another level, and now we just wait for the next Unraid release so we can deprecate it :)

  • Like 1
  • Thanks 2
Link to comment

Hi all, wondering what I may have done wrong here.

 

I've set up a pool and moved data from the array to the pool, but noticed most of the folders are empty.

 

I ran rsync -avh /source /destination and it took about 36 hours to move 15TB

 

Once the transfer had completed, I took a snapshot before renaming the source folder from data to data_old with "mv /mnt/user/data /mnt/user/data_old"

 

I then edited the mountpoint for the pool with "zfs set mountpoint=/mnt/chimera chimera" and symlinked /mnt/user/data to /mnt/chimera/data

 

I saw the free space for the share in the unRAID GUI reflected the available space but, after checking the share via SMB, most folders were empty. Confirmed this was the case in the CLI as well.

 

I don't think I can roll back either, as the "refer" is on the pool root, not the data dataset. When copying another folder, it seemed to write everything back and not just restore or refer from the snapshot.

 

Don't really want to transfer everything all over again, is there anything I can do?

 

zfs list
NAME           USED  AVAIL     REFER  MOUNTPOINT
chimera       15.2T  28.2T     14.9T  /mnt/chimera
chimera/data   312G  28.2T      312G  /mnt/chimera/data

zfs list -t snapshot
NAME                  USED  AVAIL     REFER  MOUNTPOINT
chimera@manual          0B      -     14.9T  -
chimera/data@manual   719K      -      164G  -

 

Link to comment

Ooh, I found it.

 

Was reading up more on where snapshots are stored and was able to navigate to /mnt/chimera/.zfs/snapshot/manual/data and everything's there.

 

It's read-only though so a standard move is taking just as long as a copy from the array. Anything else I can try?

 

I suspect a zfs send and zfs recv will suffer from the same bottleneck.
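For reference, the local send/receive I have in mind would be something like this (data_restored is just a placeholder name); it would still rewrite every block, so it is unlikely to be any faster:

zfs send chimera@manual | zfs receive chimera/data_restored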

 

EDIT: Never mind. It's not an elegant solution since I copied to the pool root and not a child dataset. Not sure how I managed that, but I'll just copy everything again.

 

Edited by Akshunhiro
Link to comment
  • 2 weeks later...

So I rebuilt my ZFS array and learned a bit more to become more savvy in ZFS terms.

 

Now I rock a 7x2TB el cheapo SATA SSD array; its purpose is to serve every VM and every PC in my household its own shadow game library. Thanks to @subivoodoo & @ich777 I got the puzzle together and it works like a charm. The next step is understanding how to reverse engineer his script :D

 

What I did:

 

1) Make a ZFS pool however you like; mine is named zfs.

2) Go to the terminal and create a dataset (zvol) with this command:

 

3)zfs create -s -V SizeYouWantG -o volblocksize=blocksizeYouWant zfs/nameOfYourDataset

example: zfs create -s -V 8192G -o volblocksize=4096 zfs/games

 

4) Then you create a ZFS block device; still in the terminal, enter:

 

5)targetcli

 

6)   /backstores/block create name=NameOfYourBlockdevice   dev=/dev/zvol/NameOfZFSPool/NameOfYourBlockdevice

example: /backstores/block create name=games dev=/dev/zvol/zfs/games

7)  cd /backstores/block/NameOfYourBlockdevice/

8) set attribute block_size=4096
9) set attribute emulate_tpu=1
10) set attribute is_nonrot=1

 11) cd /

 12) exit

 

 

13) Now you can close the terminal, go to the iSCSI plugin and create an iSCSI connection to your rig.

 14) Install all your Games on that drive

 

 

15) When all games are installed, we make a snapshot in the terminal with the following commands:

 

16) zfs snapshot NameOfYourZFSPool/NameOfYourBlockdevice@HowYouWantToNameYourSnapshot

example: zfs snapshot zfs/games@allgames

 

Then we want to clone that:

 

zfs clone -p zfs/games@allgames zfs/games.myclone

 

Then again into:

 

targetcli

/backstores/block create name=games.myclone dev=/dev/zvol/zfs/games.myclone
cd /backstores/block/games.myclone/
set attribute block_size=4096
set attribute emulate_tpu=1
set attribute is_nonrot=1
cd /
exit

 

 

And now you can mount that cloned block device via iSCSI. Don't forget to create an individual target per VM/bare-metal rig :D

 

 

When you have new games installed, just go to the terminal:

 

targetcli

cd /backstores/block/

delete NameOfYourBlockdevice.myclone

cd /

exit

 

zfs destroy YOURPOOLNAME/NameOfYourBlockdevice.myclone

zfs destroy YOURPOOLNAME/NameOfYourBlockdevice@HowYouWantToNameYourSnapshot

 

Then start over at 16) - rinse and repeat (a consolidated sketch of this cycle follows below).
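If it helps, here is the whole refresh cycle rolled into one rough script (pool zfs, zvol games, snapshot allgames and the .myclone suffix are just my example names; targetcli is used in its one-command-per-invocation mode):

#!/bin/bash
POOL=zfs
ZVOL=games
SNAP=allgames
CLONE=${ZVOL}.myclone

# drop the iSCSI backstore that points at the old clone
targetcli /backstores/block delete ${CLONE}

# destroy the old clone and the old snapshot
zfs destroy ${POOL}/${CLONE}
zfs destroy ${POOL}/${ZVOL}@${SNAP}

# take a fresh snapshot of the master zvol and clone it again
zfs snapshot ${POOL}/${ZVOL}@${SNAP}
zfs clone -p ${POOL}/${ZVOL}@${SNAP} ${POOL}/${CLONE}

# recreate the backstore on the new clone with the same attributes as before
targetcli /backstores/block create name=${CLONE} dev=/dev/zvol/${POOL}/${CLONE}
targetcli /backstores/block/${CLONE} set attribute block_size=4096 emulate_tpu=1 is_nonrot=1

# you may need to re-attach the backstore to your iSCSI target/LUN afterwards (the plugin GUI can do that)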

 

 

Next scenario: you bought another 2TB of SSD storage. Just add it to your array as usual, then enter:

 

zfs set volsize=newSizeofPoolG NameOfYourPool/NameOfYourDataset

example: zfs set volsize=13000G zfs/games

 

Now you can add the new space to your iSCSI storage in Windows :D

 

thx @subivoodoo @ich777

 


 

 

datasets.png

iscsi.png

Edited by domrockt
  • Like 2
Link to comment

Hey guys, how do I fix this? It happened after the server went down due to a sudden power loss.

root@UnRAID:~# zpool status
  pool: citadel
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 05:25:00 with 0 errors on Fri Feb 17 21:23:57 2023
config:

        NAME                                          STATE     READ WRITE CKSUM
        citadel                                       ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY98G8E           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZM40MTRB           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZM40VDNZ           ONLINE       0     0    77
            ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1XK022N  ONLINE       0     0     0

 

Link to comment
1 minute ago, Xxharry said:

Hey guys, how do I fix this? It happened after the server went down due to a sudden power loss.

root@UnRAID:~# zpool status
  pool: citadel
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 05:25:00 with 0 errors on Fri Feb 17 21:23:57 2023
config:

        NAME                                          STATE     READ WRITE CKSUM
        citadel                                       ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY98G8E           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZM40MTRB           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZM40VDNZ           ONLINE       0     0    77
            ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1XK022N  ONLINE       0     0     0

 

Check the SMART info for the disk. If everything is all right, run "zpool clear" and another scrub on the "citadel" pool.

 

If no more error occurs, continue to use your pool safely.
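In practice that would be:

zpool clear citadel       # reset the error counters
zpool scrub citadel       # start a fresh scrub
zpool status -v citadel   # watch progress and check whether new errors show up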

Link to comment
4 minutes ago, gyto6 said:

Check the SMART info relative to the disk. If everything is alright, run "zpool clear" and another scrub on the "citadel" pool.

 

If no more error occurs, continue to use your pool safely.

This is from the SMART test. Does that yellow line mean the disk is buggered? [SMART report screenshot]

 

Link to comment
5 hours ago, Xxharry said:

This is from the SMART test. Does that yellow line mean the disk is buggered?

 

 

Depends on your risk profile. Your disk may fail soon; on the other hand, I've had disks continue to operate for tens of thousands of hours with a handful of "reported uncorrect" errors.
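If you want to keep an eye on it from the command line, something like this works (replace /dev/sdX with your actual device):

smartctl -A /dev/sdX | grep -iE 'reported|pending|realloc'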

Link to comment
