ZFS plugin for unRAID


steini84


1 hour ago, BVD said:

 

Whole bunch to go through there, too much to type right now, but a few things to consider:

* Your test is for 256k IO using the random read/write algorithm, with sync enabled.

* The default ZFS dataset has a 128k recordsize (half the test block size), so two write operations for every one issued by fio. With sync enabled, each write has to be physically committed and validated on disk before the next can proceed - not an ideal workload for HDDs anyway.

* On top of that, we've got an IO depth of 64 (which is essentially "how long can my queue be") that's effectively halved by the default dataset recordsize - sort of "cancelling it out", down to 32 in effect (see the example fio invocation below)
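
To make that concrete, here's a rough sketch of what a recordsize-aligned rerun might look like - the dataset path, size, and runtime here are placeholder values, not anything from the original command:

```
# Hypothetical recordsize-aligned rerun - /mnt/tank/testds, size, and
# runtime are placeholders. --bs=128k matches the default recordsize so
# each fio IO maps to exactly one ZFS record, and --size should exceed
# your ARC (and any L2ARC) so reads actually hit the disks.
fio --name=zfs-aligned-test \
    --directory=/mnt/tank/testds \
    --rw=randrw \
    --bs=128k \
    --iodepth=64 \
    --ioengine=libaio \
    --size=10G \
    --runtime=60 \
    --time_based \
    --group_reporting
```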

 

The most important part though is this - in order to properly test your storage, the test needs to be representative of the workload. I pretty strongly doubt you'll primarily be doing synchronous random r/w 256k IO across some ~20TB of space, but in the event you do have at least some workload like that, you'd just make sure that one dataset on the pool is optimally configured to handle it, ensuring your results are "the best this hardware can provide".
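
As a sketch of that kind of per-workload dataset tuning (the pool/dataset names and values here are hypothetical, purely illustrative):

```
# Hypothetical dataset tuned for large random IO - "tank/bigio" is a
# placeholder name, and 256k is only right if that's your real IO size.
zfs create tank/bigio
zfs set recordsize=256k tank/bigio    # match recordsize to the workload's IO size
zfs set atime=off tank/bigio          # skip access-time writes on every read
zfs get recordsize,atime,sync tank/bigio   # verify what you ended up with
```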

 

Also, would be happy to set aside some time with you still of course! As an FYI (just given the time of your response here), I'm in GMT-5, so I'm assuming we're on basically opposite hours of each other, but I'm certain we could make some time that'd work for us both. You just let me know if/when you'd like to do so.

 

I'm actually working on some ZFS performance documentation geared towards unRAID on GitHub currently (going over different containers with recommendations on how to configure their datasets as well as test+tune, general "databases on ZFS" stuff, tunable options from the unRAID/hypervisor side and when/how to use them, and so on), and the above post has been enough to kick me in the rear and get back to it. It's been an off-and-on thing for (months? Hell, idk), but I'll try to share it out as soon as at least *some* portion of it is fully "done". Maybe it'll help someone else down the line 👍

 

Thank you for your reply! Time zones are just a social construct, I'm sure we would manage somehow if needed.

 

I also had a feeling the test might be weird, but my thinking at least was: he gets this or that performance with those settings, so I should be around there as well.

 

Wendell has bigger disks (but the same number), and who knows about the rest of the system, but he claims ~160MB/s with the same test command while I was around 20MB/s - with the same ZFS settings, of course.

 

Not sure how to test the performance better and how to compare it.

1 hour ago, stuoningur said:

 


 

Honestly too much to type - I'll hang loose for a bit while you're scoping this out further and just wait to hear back on whether you'd like a second set of eyes on it with ya. Best of luck, and enjoy the journey!


@stuoningur Finally had some time to sit down and type - 

As I was doing some quick napkin math today thinking about your situation, some points on the test setup/config:

  • 4 disk raidz1, so 3 'disks worth of IOPS'
  • rough rule of thumb is ~100 IOPS per HDD in a raidz config (it varies a lot more than that, of course)
  • the default 128k recordsize means 'each IOP is 128k'
  • 128KB * 3 disks * 100 IOPS ≈ 38MB/s
  • your test used a 256k block size, halving the effective IOPS - which lands right at your ~20MB/s (the arithmetic is sketched below)
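
Spelling that arithmetic out (the 100 IOPS/disk figure is the rule-of-thumb assumption above, not a measurement):

```
# Napkin math for the 4-disk raidz1 - 100 IOPS/HDD is an assumption.
data_disks=3        # 4-disk raidz1 leaves ~3 data disks' worth of IOPS
iops=100            # rule-of-thumb random IOPS per HDD
recordsize_kb=128

echo "$(( data_disks * iops * recordsize_kb / 1024 )) MB/s"        # ~38MB/s
# A 256k test block on a 128k recordsize doubles the records per IO,
# halving effective IOPS:
echo "$(( data_disks * iops / 2 * recordsize_kb / 1024 )) MB/s"    # ~19MB/s
```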

Outside of the above:

  • Confirm your zpool has ashift set to 12. Nearly all newer non-enterprise drives (the IronWolf line being aimed at the SMB/NAS market instead) use 4k sectors (w/ 512b emulation). The overhead from a mismatched ashift can be huge depending on the implementation, and there's really no downside to setting it, so it's win/win. Lots of good background information on this out there for further reading if interested
  • Check your zfs dataset's configuration to ensure it's a one-to-one match for what you're comparing against - Wendell did his tests without case sensitivity, no compression, etc.
  • Validate your disk health via SMART, ensuring no UDMA CRC errors, reallocated sectors, etc. are being encountered (any of which could easily contribute to hugely reduced performance) - example commands for these first three checks follow the list
  • Ensure the system is otherwise completely idle at the time of the test
  • And finally, validate your hardware against the comparison point - Wendell's system had a 32GB l2arc, so the point about ensuring the file tested is bigger than the l2arc miiiiiiight've been one of those 'do as I say, not as I forgot to do' kind of things (he's a wicked busy dude, small misses happen to us all! However, I don't think that's the case here, as ~45-60MB/s per drive for a 4 disk z1 is actually pretty average / not exactly unheard-of performance)
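
For reference, here's roughly how those first few checks might look from the shell - "tank" and /dev/sdb are placeholders for your pool name and one of its member disks:

```
POOL=tank           # placeholder pool name
DISK=/dev/sdb       # placeholder member disk

# ashift - want 12 (2^12 = 4096 byte sectors) on modern drives
zpool get ashift "$POOL"
zdb -C "$POOL" | grep ashift    # per-vdev view from the pool config

# dataset properties to match one-to-one against the comparison system
zfs get recordsize,compression,casesensitivity,atime,sync "$POOL"

# SMART health - reallocated/pending sectors and UDMA CRC errors
smartctl -a "$DISK" | grep -Ei 'reallocated|pending|udma_crc'
```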

Assuming the config 100% matches (or at least 'within reason'), the rest is unfortunately just a matter of going through the steps above, ruling things out one by one until the culprit's determined.


Cross-posting here, as I finally made some progress on documenting some ZFS-on-unRAID performance-related stuff.

 

I'm using my vacation week to work on some of this as a passion project - hope some folks find it helpful, and just note, it'll continue to grow/evolve as time allows 👍

On 8/1/2022 at 3:43 AM, gyto6 said:

What do you mean by saying "that's happened in a few places already"? That other functionalities have been announced but not released, or at least pushed to another upgrade?

Hey, just alluding to the various places where Limetech has already hinted that ZFS is coming - e.g. the video interview I saw quite a while back now, the poll for the next feature in which ZFS won the highest vote, etc...

