Soon™️ 6.12 Series


starbetrayer

Recommended Posts

I get frequent read accesses on all of my array disks, which prevent the disks from spinning down :-(
I've used File Activity and Open Files to investigate, but no accesses to the array have been listed.
Strangely, even though I can see reads in KB at a constant rate, toggling the display to the read count still shows 0.

Does anybody have an idea how to identify what is causing this?

Bildschirmfoto 2023-03-20 um 19.41.05.png

rechenknecht-diagnostics-20230320-1929.zip

Link to comment
On 3/18/2023 at 1:26 AM, JorgeB said:

Do you still see the green screen with the driver blacklisted? If so, it could be a compatibility issue with the new kernel; try rc2 when available, but the kernel will be very similar, or even the same.

No more green screen & it's now bootable. I don't know why, but RC2 works!!! Thanks! 🥰

Screen Shot 2023-03-20 at 4.11.43 PM.png

Edited by winglam
  • Like 1
Link to comment

Couple of questions about ZFS. I see that it can be used in a standard Unraid array mixed with xfs or btrfs drives. Does ZFS still use checksums if used in this manner? If so, is your data better protected against corruption? Has Unraid stated if raidz will support expansion in the future? Going forward, once the bugs are worked out and a stable release has come out, will ZFS be the preferred filesystem in standard Unraid arrays?

Link to comment
50 minutes ago, Gragorg said:

Does ZFS still use checksums if used in this manner? If so, is your data better protected against corruption?

Same as btrfs: it checksums the data, but since array devices are single-device filesystems, it can detect corruption but cannot fix it. You'd need to restore from a backup, but at least you'd know there is corruption and which files need to be restored.
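
For anyone curious what that detection looks like in practice, here's a minimal sketch using plain zfs commands; the pool name "disk1" is hypothetical (each ZFS array disk is its own single-device pool, so yours may differ):

    # Sketch only: scrub a single-device ZFS filesystem and list the damage.
    # "disk1" is a hypothetical pool name.
    zpool scrub disk1        # re-read and verify every checksummed block
    zpool status -v disk1    # with no redundancy the errors are unrepairable,
                             # but the affected file names are listed under
                             # "errors:" so you know what to restore from backup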

 

52 minutes ago, Gragorg said:

Has Unraid stated if raidz will support expansion in the future?

Once it's supported by OpenZFS it will also be supported by Unraid (this is for pools only, not the array).
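
For reference, a sketch of what single-disk raidz expansion is expected to look like once OpenZFS ships it; the command form and all names here are assumptions based on the in-development feature and may change:

    # Assumed future syntax for OpenZFS raidz expansion (not yet available);
    # "tank" and the device name are hypothetical.
    zpool attach tank raidz1-0 /dev/sdX   # grow the existing raidz1 vdev by one disk
    zpool status tank                     # shows expansion progress while data is rewritten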

 

53 minutes ago, Gragorg said:

Going forward, once the bugs are worked out and a stable release has come out, will ZFS be the preferred filesystem in standard Unraid arrays?

That's basically up to the user. Most will be fine with xfs; if you care about any of the zfs/btrfs features like checksums, snapshots, or send/receive, use it.
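
If it helps weigh that choice, here's a minimal sketch of the snapshot and send/receive features mentioned above; the dataset and host names (tank/media, backup-host, backuppool) are hypothetical:

    # Sketch of the zfs features mentioned above; names are hypothetical.
    zfs snapshot tank/media@2023-03-20    # instant point-in-time snapshot
    zfs list -t snapshot tank/media       # list existing snapshots
    zfs send tank/media@2023-03-20 | ssh backup-host zfs receive backuppool/media
                                          # replicate the snapshot to another pool/host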

  • Thanks 1
  • Upvote 1
Link to comment
52 minutes ago, Gragorg said:

Couple of questions about ZFS. I see that it can be used in a standard Unraid array mixed with xfs or btrfs drives. Does ZFS still use checksums if used in this manner? If so, is your data better protected against corruption? Has Unraid stated if raidz will support expansion in the future? Going forward, once the bugs are worked out and a stable release has come out, will ZFS be the preferred filesystem in standard Unraid arrays?

You wouldn't mix ZFS and btrfs drives in the same array. You'd have separate arrays using one or the other. And, yes, ZFS is supposed to do a better job against corruption. As far as future ZFS expansion goes, that's been worked on by the ZFS team for some time now. Will it be the preferred filesystem? I'm guessing no. The fundamental attraction of unRAID is the mixing and matching of drives along with easy expansion. ZFS doesn't provide that.

Link to comment
2 hours ago, ririzarry said:

You wouldn't mix ZFS and btrfs drives in the same array.

In the context of Unraid, the parity array can indeed have zfs, btrfs, and xfs all as single devices in the array; each drive in the array has an independent filesystem.

Each pool in Unraid only has a single filesystem type, but you could have a zfs pool, a btrfs pool, and a single-device xfs pool.

  • Like 3
  • Upvote 2
Link to comment

TWO QUESTIONS:

 

1. If I convert my XFS array to ZFS, does it make sense to still use the File Integrity Plugin? Or is it overkill since checksums are already verified with ZFS?

 

2. To use the RAID features of ZFS, do I need to create a separate array with a separate parity drive? Or can it be utilized under the parity drive of an already existing XFS array?

Link to comment
5 minutes ago, sunbear said:

1. If I convert my XFS array to ZFS, does it make sense to still use the File Integrity Plugin? Or is it overkill since checksums are already verified with ZFS?

IMHO overkill.

 

5 minutes ago, sunbear said:

2. To use the RAID features of ZFS, do I need to create a separate array with a separate parity drive? Or can it be utilized under the parity drive of an already existing XFS array?

It requires you to create a separate pool (not array) with multiple members; it can be a mirror, raidz1, raidz2, etc.
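
For illustration, the underlying zpool layouts those options map to look roughly like this (Unraid builds the pool for you from the GUI, so these are sketches only; pool and device names are hypothetical):

    # Sketch only; Unraid normally creates these from the GUI.
    zpool create tank mirror /dev/sdb /dev/sdc                    # 2-way mirror
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd           # single parity
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde  # double parity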

  • Like 1
Link to comment
8 minutes ago, JorgeB said:

IMHO overkill.

 

It requires you to create a separate pool (not array) with multiple members; it can be a mirror, raidz1, raidz2, etc.

Awesome, thank you.

 

Would you say there is any difference in the feature set between ZFS protection and File Integrity's protection (blake3)? Or do they both just provide notification of corruption and that's it?

 

Lately, the File Integrity plugin has been very processor-intensive when running checks, so I'm wondering if ZFS may be better. Am I correct in assuming ZFS has no "scanning" process and the detection is done automatically? Or is it like the btrfs check, which is quite quick?

 

Last thing: if I have a mirrored or raidz2 pool, is it possible to ALSO have it protected under the array parity drive, or is it just like another cache pool?

Link to comment
2 minutes ago, sunbear said:

Would you say there is any difference in the feature set between ZFS protection and File Integrity's protection (blake3)? Or do they both just provide notification of corruption and that's it?

zfs does it at the block level while the plugin does it at the file level, but both provide similar functionality. Performance-wise zfs is much better, since data is hashed automatically during write; there's no hash creation after the files are transferred like the plugin needs to do, which also requires reading back all the new files.
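
As a rough illustration of the file-level approach (not the plugin's actual implementation), that extra read pass looks something like this with the blake3 CLI; all paths are hypothetical:

    # Sketch of file-level hashing, similar in spirit to the plugin;
    # paths are hypothetical and this is not the plugin's actual code.
    b3sum /mnt/disk1/media/*.mkv > /boot/hashes/disk1.b3   # builds hashes by re-reading every file
    b3sum --check /boot/hashes/disk1.b3                    # later verification pass reads everything again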

 

5 minutes ago, sunbear said:

Am I correct in assuming ZFS has no "scanning" process and the detection is done automatically? Or is it like the btrfs check, which is quite quick?

 

Both btrfs and zfs check the data every time a file is read, and on error they will generate an I/O error, so the user knows there's a problem and is not unknowingly fed corrupt data. You can also run a scrub at any time to check the complete filesystem for data corruption.
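
A quick sketch of what an on-demand scrub looks like for both filesystems; the pool name and mount point are hypothetical:

    # Sketch only; "cache" and /mnt/cache are hypothetical names.
    zpool scrub cache                 # zfs: verify every block in the background
    zpool status -v cache             # progress, plus any files with errors
    btrfs scrub start -B /mnt/cache   # btrfs equivalent; -B waits for completion
    btrfs scrub status /mnt/cache     # summary, including checksum errors found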

 

10 minutes ago, sunbear said:

Last thing: if I have a mirrored or raidz2 pool, is it possible to ALSO have it protected under the array parity drive, or is it just like another cache pool?

For that it always needs to be a new pool; for the array, all assigned data devices must be single-device filesystems.

  • Like 1
Link to comment
43 minutes ago, JorgeB said:

zfs does it at the block level while the plugin does it at the file level, but both provide similar functionality. Performance-wise zfs is much better, since data is hashed automatically during write; there's no hash creation after the files are transferred like the plugin needs to do, which also requires reading back all the new files.

 

 

Both btrfs and zfs check the data every time a file is read, and on error they will generate an I/O error, so the user knows there's a problem and is not unknowingly fed corrupt data. You can also run a scrub at any time to check the complete filesystem for data corruption.

 

For that it always needs to be a new pool; for the array, all assigned data devices must be single-device filesystems.

So if a user is adding multiple identical drives, can I assume it will generally make more sense to add a RAID-protected pool rather than adding individual drives to the parity-protected array?

 

Thanks so much for the responses, btw. These are super helpful!

Link to comment
On 3/24/2023 at 7:12 PM, sunbear said:

So if a user is adding multiple identical drives, can I assume it will generally make more sense to add a RAID-protected pool rather than adding individual drives to the parity-protected array?

There are advantages and disadvantages for both:

 

array good:

  • even if you lose more disks than parity can emulate, the data on the remaining good disks can still be read
  • any filesystem corruption will only affect that particular disk
  • you can fully utilize disks of different capacities, and if you upgrade just one disk its capacity can be fully used
  • you can add/remove a single disk or more at any time
  • reads spin up only the disk being read; writes spin up only parity and the disk being written to (when using the default write mode)

 

array bad:

  • performance
  • no zfs self healing

 

Pools are basically the opposite.

 

pool good:

  • performance
  • zfs self healing

 

pool bad:

  • if you lose more disks than the pool redundancy can recover from, the complete pool is gone
  • if the pool filesystem gets corrupted you can lose the whole pool, and while this is rare with zfs it can happen
  • you can use disks of different capacity in raidz, but it will only use the capacity of the smallest one; the pool can only be expanded once all pool disks are upgraded to the same larger size
  • you cannot expand a raidz pool with a single disk (at least not for now); you can add another vdev of the same width, e.g. if you have a 4-disk raidz1 pool you can add another 4 disks as a second raidz1 vdev to expand it (see the sketch after this list)
  • you cannot remove one or more disks (including a complete vdev) from a raidz pool; mirrors are more flexible, but I would guess most are interested in using raidz with zfs
  • any read/write to the pool will spin up all disks

 

 

I'm sure I forgot some, but these should be the main ones.
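
A sketch of the vdev-width expansion mentioned in the raidz bullet above: a 4-disk raidz1 pool grown by adding a second 4-disk raidz1 vdev. Pool and device names are hypothetical, and Unraid would normally handle this via the GUI:

    # Sketch only; names are hypothetical.
    zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh
    zpool status tank    # now shows two raidz1 vdevs striped together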

  • Like 2
  • Thanks 6
  • Upvote 2
Link to comment
  • 3 weeks later...
On 3/24/2023 at 8:31 PM, JorgeB said:

pool bad:

  • if you lose more disks than the pool redundancy can recover from, the complete pool is gone
  • if the pool filesystem gets corrupted you can lose the whole pool, and while this is rare with zfs it can happen
  • you can use disks of different capacity in raidz, but it will only use the capacity of the smallest one; the pool can only be expanded once all pool disks are upgraded to the same larger size
  • you cannot expand a raidz pool with a single disk (at least not for now); you can add another vdev of the same width, e.g. if you have a 4-disk raidz1 pool you can add another 4 disks as a second raidz1 vdev to expand it
  • you cannot remove one or more disks (including a complete vdev) from a raidz pool; mirrors are more flexible, but I would guess most are interested in using raidz with zfs

 

Aren't those true for btrfs RAID in a pool?

Link to comment
44 minutes ago, jammsen said:

Aren't those true for btrfs RAID in a pool?

Only some of them:

  • If a pool contains different size disks, BTRFS will try to use all the available space (RAID level permitting).
  • You can add new drives to the pool at any time and their space will be utilised (RAID level permitting).
  • You can remove drives from the pool as long as the pool stays redundant.

One thing that I do not think ZFS supports (could be wrong about this) but BTRFS does: you can dynamically switch between RAID levels.
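
For illustration, those btrfs operations look roughly like this; the mount point and device names are hypothetical:

    # Sketch only; /mnt/pool and device names are hypothetical.
    btrfs device add /dev/sdf /mnt/pool       # grow the pool at any time
    btrfs device remove /dev/sdc /mnt/pool    # shrink, as long as redundancy allows
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
                                              # convert data+metadata to RAID1 on the fly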

Link to comment
