ZFS plugin for unRAID


steini84


Hello, I'm new here. I'm trying to create a raidz pool but have been unsuccessful. The following is what I executed:

 

root@Tower:/mnt/disk2# lsblk 
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  11.5M  1 loop /lib/modules
loop1    7:1    0  19.8M  1 loop /lib/firmware
loop2    7:2    0    20G  0 loop /var/lib/docker
loop3    7:3    0     1G  0 loop /etc/libvirt
sda      8:0    1  14.3G  0 disk 
└─sda1   8:1    1  14.3G  0 part /boot
sdb      8:16   0 931.5G  0 disk 
└─sdb1   8:17   0 931.5G  0 part 
sdc      8:32   0   2.7T  0 disk 
└─sdc1   8:33   0   2.7T  0 part 
sdd      8:48   0   2.7T  0 disk 
└─sdd1   8:49   0   2.7T  0 part 
sde      8:64   0   2.7T  0 disk 
└─sde1   8:65   0   2.7T  0 part 
sdg      8:96   0   3.6T  0 disk 
└─sdg1   8:97   0   3.6T  0 part 
sdj      8:144  0   7.3T  0 disk 
└─sdj1   8:145  0   7.3T  0 part 
sdk      8:160  0   7.3T  0 disk 
└─sdk1   8:161  0   7.3T  0 part 
sdl      8:176  0   7.3T  0 disk 
└─sdl1   8:177  0   7.3T  0 part 
sdm      8:192  0   2.7T  0 disk 
└─sdm1   8:193  0   2.7T  0 part 
sdn      8:208  0 931.5G  0 disk 
└─sdn1   8:209  0 931.5G  0 part 
sdo      8:224  0   2.7T  0 disk 
└─sdo1   8:225  0   2.7T  0 part 
sdp      8:240  0   2.7T  0 disk 
└─sdp1   8:241  0   2.7T  0 part 
md1      9:1    0   7.3T  0 md   /mnt/disk1
md2      9:2    0   7.3T  0 md   /mnt/disk2
md3      9:3    0   3.6T  0 md   /mnt/disk3
md4      9:4    0   3.6T  0 md   /mnt/disk4
md5      9:5    0   3.6T  0 md   /mnt/disk5
md6      9:6    0   3.6T  0 md   /mnt/disk6
md7      9:7    0   2.7T  0 md   /mnt/disk7
md8      9:8    0   2.7T  0 md   /mnt/disk8
md9      9:9    0   2.7T  0 md   /mnt/disk9
md10     9:10   0   2.7T  0 md   /mnt/disk10
md11     9:11   0   2.7T  0 md   /mnt/disk11
md12     9:12   0   2.7T  0 md   /mnt/disk12
md13     9:13   0   2.7T  0 md   /mnt/disk13
md14     9:14   0   2.7T  0 md   /mnt/disk14
md15     9:15   0 931.5G  0 md   /mnt/disk15
md16     9:16   0 931.5G  0 md   /mnt/disk16
sr0     11:0    1    17M  0 rom  
sdq     65:0    0   2.7T  0 disk 
└─sdq1  65:1    0   2.7T  0 part 
sdr     65:16   0   2.7T  0 disk 
└─sdr1  65:17   0   2.7T  0 part 
sds     65:32   0 931.5G  0 disk 
└─sds1  65:33   0 931.5G  0 part 
sdt     65:48   0   3.6T  0 disk 
└─sdt1  65:49   0   3.6T  0 part 
sdu     65:64   0   3.6T  0 disk 
└─sdu1  65:65   0   3.6T  0 part 
sdv     65:80   0   3.6T  0 disk 
└─sdv1  65:81   0   3.6T  0 part 
root@Tower:/mnt/disk2# zpool create -m / mnt / 4TBPOOL sdg sdt sdu sdv
cannot use '/': must be a block device or regular file
root@Tower:/mnt/disk2# 

Link to comment

  

8 hours ago, tr0910 said:

I have a 2-disk ZFS pool being used for VMs on one server. These are older 3 TB Seagates, and one is showing 178 pending and 178 uncorrectable sectors. An unRAID parity check usually finds these errors are spurious and resets everything to zero. Is there anything similar to do with ZFS?

 

Unless I'm misunderstanding something, I have a disk with Reported Uncorrect sectors in my main unRAID array, and unRAID does not appear to reset these during parity checks?

 

This information can be useful to determine whether this was a one-off event or whether the disk is continuing to degrade.
 

 9 Power_On_Hours          -O--CK   056   056   000    -    38599

Error 144 [3] occurred at disk power-on lifetime: 20538 hours (855 days + 18 hours)
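For anyone who wants to pull the same details themselves, something like this should work from the console (the device name is just a placeholder):

# SMART attribute table (includes Power_On_Hours and the Reported_Uncorrect counter)
smartctl -A /dev/sdX

# Drive error log, with the power-on hour each error was recorded at
smartctl -l error /dev/sdX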

 

In my case, for bulk storage of data I don't care too much about, this is fine.

Link to comment
17 minutes ago, tf0083 said:


root@Tower:/mnt/disk2# zpool create -m / mnt / 4TBPOOL sdg sdt sdu sdv
cannot use '/': must be a block device or regular file
root@Tower:/mnt/disk2# 

 

Are you trying to create a RAIDZ1?

 

I'd recommend lower case to make your life easier. There were some spaces where there shouldn't have been, and you forgot to include the name of the pool (in addition to the mount point). I think this is what you want:

 

zpool create -o ashift=12 -m /mnt/4tbpool 4tbpool raidz /dev/sdg /dev/sdt /dev/sdu /dev/sdv
zfs set atime=off 4tbpool
zfs set xattr=sa 4tbpool
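
As a quick sanity check afterwards, something like this should confirm the layout and mount point. (Many guides also suggest /dev/disk/by-id/... paths instead of sdX names so the pool survives device reordering, but the command above will work as-is.)

# Confirm the raidz vdev layout and that the pool is ONLINE
zpool status 4tbpool

# Confirm the dataset is mounted at /mnt/4tbpool
zfs list -o name,used,available,mountpoint 4tbpool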

 

edit: raidz not raidz1

Edited by ConnectivIT
Link to comment
22 minutes ago, ConnectivIT said:

Unless I'm misunderstanding something, I have a disk with Reported Uncorrect sectors in my main unRAID array, and unRAID does not appear to reset these during parity checks?

 

This information can be useful to determine whether this was a one-off event or whether the disk is continuing to degrade.

If I understand you right, you are suggesting that I just monitor the error and don't worry about it. As long as it doesn't deteriorate, it's no problem. Yes, this is one approach. However, if these errors are spurious and not real, resetting them to zero is also OK. I take it there is no unRaid parity check equivalent for ZFS? (In my case, the disk with these problems is generating phantom errors. The parity check just confirms that there are no errors.)

Link to comment
41 minutes ago, tr0910 said:

If I understand you right, you are suggesting that I just monitor the error and don't worry about it. As long as it doesn't deteriorate, it's no problem. Yes, this is one approach. However, if these errors are spurious and not real, resetting them to zero is also OK.

 

They are definitely not spurious.  But the disk will have already remapped any problem sectors to working parts of the drive.  "reported uncorrect" is a serious error though, and I suspect it means you may potentially already have corrupted data.  That's independent of anything resetting counters to zero.

 

41 minutes ago, tr0910 said:

I take it there is no unRaid parity check equivalent for ZFS? (In my case, the disk with these problems is generating phantom errors. The parity check just confirms that there are no errors.)

 

Not entirely equivalent. A zfs scrub does not check the health of the entire disk, but it will check the integrity of all of the data. I recommend adding this to "user scripts" and running it once per month or so:

 

#!/bin/bash
zpool scrub yourpoolname

 

Then run this to get the status/results of the scrub:

 

zpool status
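
If you want the scrub and a report in a single user script, a minimal sketch could look like this (pool name is a placeholder, and the -w wait flag needs a reasonably recent OpenZFS):

#!/bin/bash
POOL=yourpoolname

# Start the scrub and block until it has finished
zpool scrub -w "$POOL"

# "zpool status -x" reports the pool as healthy when nothing is wrong
if zpool status -x "$POOL" | grep -q "is healthy"; then
    echo "Scrub of $POOL finished with no errors"
else
    # Print the full status (including any damaged files) to the script log
    zpool status -v "$POOL"
fi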

 

ZFS is far better than other high-availability implementations (like RAID, or unRAID arrays) with regard to data integrity. If your disks disagree about what the correct data is, ZFS is able to use checksums to determine which data is correct and which drive is "telling lies".

 

edit: Even better, you could have read errors/corruption on both drives in a ZFS mirror and still keep your array functional by replacing them with working disks (without removing the existing ones) - as long as ZFS can read each record on one disk or the other. With RAID, a failing disk is going to be kicked out of the array entirely, and if the other disk in the mirror is also failing, you're going to have a bad time.
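
For reference, the in-place replacement described above is just a zpool replace with both the old and new device named, roughly like this (pool and device names are placeholders):

# Resilver onto the new disk while the old one stays attached;
# ZFS detaches the old disk automatically once the resilver completes
zpool replace yourpoolname /dev/sdb /dev/sdx

# Watch resilver progress
zpool status yourpoolname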

 

In the case of an unRAID array, every time you recheck your parity you are potentially overwriting the parity drives with incorrect information if any one of your drives is damaged.

 

This is a good ZFS primer that covers this and more:

 

 

 

Edited by ConnectivIT
Link to comment
59 minutes ago, ConnectivIT said:

I have a disk with Reported Uncorrect sectors in my main unRAID array and unRAID does not appear to reset these during parity checks?

SMART attributes are recorded by the drive firmware and can't be reset. You can acknowledge the current value by clicking on the SMART warning on the Dashboard page, and it will warn again if the value increases.

  • Like 1
Link to comment
1 hour ago, Marshalleq said:

I'll never use BTRFS again unless it comes out that they've admitted and fixed whatever it is that keeps making it fail.

I never had a single problem with BTRFS in the default RAID1 configuration, but please keep in mind that everything beyond RAID5 is experimental in BTRFS terms.

Link to comment
@ich777 @steini84 @Joly0 Both my systems seem to still be running well on the latest Unraid and the latest ZFS. Both docker.img files are set as xfs and reside on a ZFS SSD mirror. I am nervous as to why or what changed, and I'm wondering what's different between mine and Joly0's setup; nevertheless, it seems like we have a couple of different scenarios we should be able to work it out from.

Why haven't you moved yet to hosting Docker in a folder on ZFS instead of hosting a docker image on top of ZFS?
I moved to it last week and it works fine, and I don't have to worry about a docker.img file anymore.
It's also more transparent, as you can just browse the content of all the images, etc.
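
For anyone curious, the folder approach is roughly: create a dataset for Docker, then point Unraid's Docker settings at that directory instead of a docker.img. A rough sketch, with pool/dataset names as placeholders and assuming your Unraid version exposes the directory option under Settings > Docker:

# Dedicated dataset so Docker's data gets its own snapshots/properties
zfs create -o mountpoint=/mnt/ssdpool/docker ssdpool/docker

# Then, with the Docker service stopped, point the Docker data location in
# Settings > Docker at /mnt/ssdpool/docker and re-enable the service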
  • Like 1
Link to comment
I never had a single problem with BTRFS in the default RAID1 configuration, but please keep in mind that everything beyond RAID5 is experimental in BTRFS terms.

I had soooooooo many problems with hosting VM images on the btrfs pool. Several corruptions, often unrepairable. And with a VM image a corruption is pretty bad, as you have to rebuild the whole VM if you are unlucky.
Being smart and using snapshots did not help me, as eventually one of my main VMs was dead (when I had to rebuild the whole pool as unrecoverably broken).
Moved to ZFS and not a single issue since. The host can crash like hell, and often did (sometimes more than once per day in the time I was fighting AMD GPU reset bugs), but the ZFS pools laugh it off and continue as if nothing happened (just telling me not to worry, "I repaired everything for you").
So I'm not going back, ever. I only use btrfs for the cache at the moment, as I need a proper raided cache which requires btrfs, but the day it doesn't, I am gone from btrfs.
Stung once, twice, thrice - bye bye.
Link to comment
13 minutes ago, glennv said:

Why haven't you moved yet to hosting Docker in a folder on ZFS instead of hosting a docker image on top of ZFS?
I moved to it last week and it works fine, and I don't have to worry about a docker.img file anymore.
It's also more transparent, as you can just browse the content of all the images, etc.

 

I tried this not long ago and had issues installing a specific docker, josh5/lancache-bundle.

 

This was separate from the issues this docker has with storing its proxied data on ZFS, though possibly caused by the same problem (lack of sendfile support on ZFS).

Link to comment
46 minutes ago, glennv said:

I had soooooooo many problems with hosting VM images on the btrfs pool.

As I said, I never had problems, and I can say the same about BTRFS as you said about ZFS... ;)

 

Back then I also hosted about 4 VMs on the BTRFS pool and they were all running fine, but now I have switched completely to Docker containers and only have one VM running, and this VM has a whole NVMe drive passed through for building my Docker containers.

Link to comment

I think a large factor in how your experiences with VMs are shaped is the usage of the VMs. If you have 24x7 heavily active production VMs running VFX renders, code compiles, etc. and your btrfs/zfs system crashes, then you will see more of the "recovery power" of the filesystem and also more easily find its flaws.

My issues were repeatable and, unfortunately or fortunately, happened in a time when I had lots of total system crashes due to GPU issues. So this tested the skills of both filesystems to the limits.

Under these same circumstances, hosted on the same SSDs on the same OS version, with everything else the same, btrfs failed more than once; zfs passed all tests.

That is just my experience, but of course every system / setup is different and can lead to different results. Even the version of btrfs/zfs/unraid etc. can have a large effect on the results.

In the end we all stick with what we trust.

 

 

  • Like 2
Link to comment
47 minutes ago, OneMeanRabbit said:

Read through this topic many times, and THANK you for your amazing work!  QQ - how did you deploy this?  Docker?

Awesome, great to hear

 

Here are the relevant parts from the docker setup, but take note that I have not updated to check_mk 2.0 - a good weekend project for me :)


checkmk/check-mk-raw:1.6.0-latest
https://hub.docker.com/r/checkmk/check-mk-raw
https://checkmk.com/application/files/2715/9834/3872/checkmk_icon_neg_v2.png
http://[IP]:[PORT:5000]/cmk/check_mk/
--ulimit nofile=1024 --tmpfs /opt/omd/sites/cmk/tmp:uid=1000,gid=1000
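
Outside the Unraid template, a roughly equivalent docker run might look like the following; the 5000:5000 port mapping and the container name are my assumptions, not taken from the template above:

docker run -d --name checkmk \
  -p 5000:5000 \
  --ulimit nofile=1024 \
  --tmpfs /opt/omd/sites/cmk/tmp:uid=1000,gid=1000 \
  checkmk/check-mk-raw:1.6.0-latest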

 

[Screenshot attached: Screenshot 2021-03-26 at 17.36.32.png]

Edited by steini84
  • Like 1
Link to comment
14 hours ago, ConnectivIT said:

 

Are you trying to create a RAIDZ1?

 

I'd recommend lower case to make your life easier. There were some spaces where there shouldn't have been, and you forgot to include the name of the pool (in addition to the mount point). I think this is what you want?

 

edit: raidz not raidz1

Successfully created the pool, thanks!

Link to comment
11 hours ago, glennv said:


Why haven't you moved yet to hosting Docker in a folder on ZFS instead of hosting a docker image on top of ZFS?
I moved to it last week and it works fine, and I don't have to worry about a docker.img file anymore.
It's also more transparent, as you can just browse the content of all the images, etc.

I heard a rumour about that. Last time I tried, it didn't work because Unraid needed some kind of ZFS driver compiled into Docker or something. I'm not really sure why everything has started working, so I'm nervous to change more. Perhaps the Unraid guys included basic ZFS libraries in the new version of Unraid or something.
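
If you want to check which storage driver your Docker service is actually using (which is usually what decides how well docker-in-a-directory behaves on ZFS), a one-liner is enough:

# Prints the active storage driver, e.g. overlay2, zfs or btrfs
docker info --format '{{.Driver}}'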

Link to comment
10 hours ago, ich777 said:

As I said, I never had problems, and I can say the same about BTRFS as you said about ZFS... ;)

 

Back then I also hosted about 4 VMs on the BTRFS pool and they were all running fine, but now I have switched completely to Docker containers and only have one VM running, and this VM has a whole NVMe drive passed through for building my Docker containers.

So no redundancy for your dockers then?

Link to comment
10 hours ago, glennv said:

I think a large factor in how your experiences with VMs are shaped is the usage of the VMs. If you have 24x7 heavily active production VMs running VFX renders, code compiles, etc. and your btrfs/zfs system crashes, then you will see more of the "recovery power" of the filesystem and also more easily find its flaws.

My issues were repeatable and, unfortunately or fortunately, happened in a time when I had lots of total system crashes due to GPU issues. So this tested the skills of both filesystems to the limits.

Under these same circumstances, hosted on the same SSDs on the same OS version, with everything else the same, btrfs failed more than once; zfs passed all tests.

That is just my experience, but of course every system / setup is different and can lead to different results. Even the version of btrfs/zfs/unraid etc. can have a large effect on the results.

In the end we all stick with what we trust.

 

 

It's pretty hard to beat ZFS when it has had over a billion dollars spent on it and has been in development for around two decades. Compare that to BTRFS being open source and a little over 10 years old. But it's the denial of any issue on BTRFS that concerns me: you're never going to get a good file system if the project denies there's anything wrong all the time, or worse, asks why you would even want these things fixed!

 

Oh, and thanks for the podcast @ConnectivIT - good to have actual examples explained for everyone. It's been a year since then, so hopefully one of the devs heard it and did something about it. Personally I'd like to see BTRFS removed from Unraid and replaced with ZFS. Or at least the option!

Link to comment
23 minutes ago, Marshalleq said:

All you gotta do is search these forums and you shall be rewarded with many examples :)

As I said, I never had a single problem... And I will stick to BTRFS since I never had a single problem with it. ;)

 

12 minutes ago, Marshalleq said:

So no redundancy for your dockers then?

No, this is a VM that only builds the containers and uploads them to Docker Hub, so I can build very quickly and build many containers at the same time... :)

  • Like 1
Link to comment
