Posts posted by stuoningur

  1. On 9/19/2022 at 12:27 AM, BVD said:

    One other thing immediately comes to mind - you're mounting your zpool directly in /mnt, right?

     

    If not, do that and re-test - putting zfs inside the virtual directory unraid uses to merge the disparate filesystems of multiple disks introduces a massive number of unaccounted-for variables, and even unraid itself doesn't mount physical filesystems directly on top of virtual ones.

     

    Yeah, I mounted the zpool directly to /mnt, like in the first post of this thread.
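    For reference, moving an existing pool there is a one-liner (a sketch - "tank" is a placeholder, not my actual pool name):

    # point the pool's mountpoint directly under /mnt instead of /mnt/user
    zfs set mountpoint=/mnt/tank tank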

  2. On 8/2/2022 at 3:18 AM, BVD said:

    @stuoningur Finally had some time to sit down and type - 

    I was doing some quick napkin math today while thinking about your situation, so some points on the test setup/config:

    • 4 disk raidz1, so 3 'disks worth of IOPs'
    • rough rule of thumb is 100 IOPs per HDD in a raidz config (varies a lot more than that, of course)
    • the default block size of 128k means 'each IOP is 128k'
    • 128KB * 3 disks * 100 IOPs equals ~38MB/s
    • your test was for 256k block size, halving the effective IO - which results in our ~20MB/s (quick check below)
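    As a quick shell check of that napkin math (assuming the ~100 IOPs/HDD rule of thumb above):

    # 128k records * 3 data disks * 100 IOPs, in MB/s
    echo "$((128 * 3 * 100 / 1000)) MB/s"       # ~38 MB/s ceiling at 128k
    echo "$((128 * 3 * 100 / 2 / 1000)) MB/s"   # 256k test IO = 2 records each, so ~19 MB/s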

    Outside of the above:

    • Confirm your zpool has ashift set to 12 (see the commands after this list). Nearly all newer non-enterprise drives (the IronWolf being aimed at the SMB/NAS market) are 4k-sector (w/ 512b emulation). Huge potential overhead depending on the implementation, and really no downsides to this, so it's win/win. Lots of good background information on this out there for further reading if interested
    • Check your zfs dataset's configuration to ensure it's a one-to-one match with what you're comparing against - Wendell did his tests with case sensitivity disabled, no compression, etc.
    • Validate your disk health via SMART, ensuring no UDMA/CRC errors, reallocated sectors, etc. are being encountered (which could easily contribute to hugely reduced performance)
    • Ensure the system is completely idle otherwise at the time of the test
    • And finally, validate your hardware against the comparison point - Wendell's system had a 32GB l2arc, so the point about ensuring the tested file is bigger than the l2arc might've been one of those 'do as I say, not as I forgot to do' kind of things (he's a wicked busy dude, small misses happen to us all!). However, I don't think that's the case here, as ~45-60MB/s per drive for a 4 disk z1 is actually pretty average / not exactly unheard-of performance
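    Rough commands for those checks (sketches - "tank" and /dev/sdX are placeholders for your pool and drives, and zpool get ashift needs a reasonably recent OpenZFS):

    zpool get ashift tank                                     # want 12 for 4k-sector drives
    zfs get recordsize,compression,casesensitivity,sync tank  # match these to the comparison setup
    smartctl -A /dev/sdX                                      # look at UDMA_CRC_Error_Count, Reallocated_Sector_Ct
    arc_summary | head -n 40                                  # ARC/L2ARC sizing on the box under test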

    Assuming the config 100% matches (or at least comes 'within reason'), the rest is unfortunately just going to be working through those steps mentioned earlier, ruling things out one by one until the culprit's determined.

    Thank you for taking the time to reply! I know it has been a few days... I stopped caring about ZFS for a while, but gave it another shot today with different configs, trying to incorporate your tips, and also did some more research. I also stopped caring about that benchmark; I only used it because it was more or less comparable to my setup. Having said that, I of course still "benchmarked" it, just by transferring files over samba, running VMs on it, and so on. Samba maxed out at about 90MB/s (roughly 750Mbit/s), which is still a bit slower than I hoped for. Realistically it's fast enough, but still noticeably slower than a normal unraid+cache setup.

     

    Getting samba to work has also been quite annoying - I tried the symlink approach, but I would often get permission errors.
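    In case it helps anyone else, what I tried looked roughly like this (paths are examples; I suspect ownership mismatches against unraid's nobody:users samba user were behind at least some of my permission errors):

    # expose the dataset through an existing unraid share via symlink
    ln -s /mnt/tank/share /mnt/user/share/zfs
    # match unraid's samba user/group on the dataset side
    chown -R nobody:users /mnt/tank/share
    # samba may also need "follow symlinks = yes" (and possibly "wide links = yes") in its config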

     

    Maybe I'll try TrueNAS after my vacation, which should be more guided in its setup, and see if I still did something wrong somewhere.

  3. 1 hour ago, BVD said:

     

    Whole bunch to go through there, too much to type right now, but a few things to consider:

    * Your test is for 256k IO using the random read/write algorithm, with sync enabled.

    * The default zfs dataset has a 128k block size (half the test block size), so two write actions for each one from fio. With sync, you have to physically finish and validate the write to disk before continuing - not an ideal workload for HDDs anyway.

    * On top of that, we've got an IO depth of 64 (essentially "how long can my queue be"), which is effectively halved by the default dataset blocksize - sort of cancelling it out, down to 32 in effect
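    If you did want a dataset that matches that exact test, something like this would line things up (a sketch - the dataset name is a placeholder, and sync=disabled is for benchmarking only):

    zfs create -o recordsize=256k tank/fiotest   # one 256k record per 256k test IO
    zfs set sync=disabled tank/fiotest           # benchmark-only; don't leave this off for real data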

     

    The most important part though is this - in order to properly test your storage, the test needs to be representative of the workload. I pretty strongly doubt you'll primarily be doing synchronous random r/w 256k IO across some ~20TB of space, but in the event you do have at least some workload like that, you'd just ensure that one dataset on the pool is optimally configured to handle it, so that your results are "the best this hardware can provide".
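    As a sketch, a test closer to a bulk samba copy would look more like this (the path is a placeholder):

    fio --name=seqtest --filename=/mnt/tank/test/seq.tmp --size=32G --bs=1M --rw=write --ioengine=libaio --iodepth=8 --direct=1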

     

    Also, I'd be happy to set aside some time with you still, of course! As an FYI (just given the time of your response here), I'm in GMT-5, so I assume we're on basically opposite hours of each other, but I'm certain we could make some time that'd work for us both. You just let me know if/when you'd like to do so.

     

    I'm actually working on some zfs performance documentation geared towards unraid on github at the moment (going over different containers with recommendations on how to configure their datasets as well as how to test and tune them, general "databases on zfs" stuff, tunable options on the unraid/hypervisor side and when/how to use them, and so on), and the above post has been enough to kick me in the rear and get back to it. It's been an off-and-on thing for (months? Hell, idk), but I'll try to share it out as soon as at least *some* portion of it is fully "done". Maybe it'll help someone else down the line 👍

     

    Thank you for your reply! Time zones are just a social construct; I'm sure we would manage somehow if needed.

     

    I also had a feeling the test might be weird, but the idea for me at least was: he gets this-and-that performance with those settings, so I should be around there as well.

     

    Wendell has bigger disks (but the same number of them), and who knows about the rest of the system, but he claims ~160MB/s with the same test command, while I was around 20 - of course with the same ZFS settings.

     

    Not sure how to test the performance better or how to compare it.

  4. On 7/30/2022 at 4:32 PM, BVD said:

     

    If you've got 30-40 minutes free today, we can do a quick remote session and take a look, if you'd like? We can probably sort out the cause in 10-15, but a buffer never hurts.

     

    Shoot me a DM if you'd like, I'll be around off and on throughout the day 👍

     

    Sorry I didn't see the message, but I really appreciate the offer. In the meantime I've made some progress already, and I think at least one issue is my CPU performance, as it doesn't boost as high as it could. Performance increased a lot with the cpu governor set to performance, but it's still slower than I would expect. I will experiment a bit more and may come back to your offer.
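    For reference, this is how I checked and switched the governor (standard sysfs paths):

    # check the current governor, then set all cores to performance
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor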

     

    Another commenter had the valid complaint that I didn't provide much information, so here it goes:

    It's a Ryzen 7 5700G system with 32GB DDR4

    4x 4TB Seagate IronWolf drives on an LSI Broadcom 9201-8i HBA

     

    I tried RAID-Z1 and 2x 2-drive mirrors.

     

    In general, I basically followed the guide on the level1techs forum for the zpool, the dataset, and the test command.
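    The pool creation was along these lines (a sketch from memory - device names are placeholders, and in practice /dev/disk/by-id paths are the safer choice):

    # the two layouts I tried (one or the other, not both):
    zpool create -o ashift=12 -m /mnt/tank tank raidz1 sdb sdc sdd sde
    zpool create -o ashift=12 -m /mnt/tank tank mirror sdb sdc mirror sdd sde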

  5. I finally tried this plugin and set everything up: 3x 4TB HDDs in a raidz1. I ran a command from the level1techs forum to test the speed, and the performance is abysmal - around 20MB/s for reads and writes, and I'm not sure how to troubleshoot this. In the example given in the forum, he got around 150MB/s with 4x 8TB HDDs.

     

    The command is:

    fio --direct=1 --name=test --bs=256k --filename=/dumpster/test/whatever.tmp --size=32G --iodepth=64 --readwrite=randrw

     
