Sharing ZFS on unraid: getting better performance


Solved by Vr2Io


Hullo.. I'm about one year in with unraid on a system I built to run VMs and Docker apps (AMD 3950X, MSI X570, 128 GB RAM, 4 TB SSD array, 2x RTX 2070 SUPERs, GeForce GT 710).  Really enjoying unraid and CA.

 

I needed more storage, so I bought 4x Seagate X18 18TB drives, figuring I'd stripe them with ZFS or RAID.

 

I tested the new drives a bit first, formatting them ext4 from the unraid CLI (~280 MB/sec with 1M writes to a single drive, about the same for reads; both via dd with iflag/oflag=direct, clearing unraid's filesystem cache before the read test).  Then I tested from an Ubuntu VM, passing the device in as a secondary drive (raw, VirtIO), and got 245 MB/sec writes and about the same for reads.
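For anyone who wants to repeat the single-drive test, it was roughly the following; the mount point is just an example, not my exact path:

# write test, bypassing the page cache
dd if=/dev/zero of=/mnt/x18test/test.dat bs=1M count=40000 oflag=direct

# clear unraid's filesystem cache so the read isn't served from RAM
sync; echo 3 > /proc/sys/vm/drop_caches

# read test
dd if=/mnt/x18test/test.dat of=/dev/null bs=1M iflag=direct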

 

I thought, "Okay, that's close enough to spec. Maybe I could run them in a raidz2 pool and find a performant way to pass a ZFS dataset through to the VMs."  So I created a raidz2 pool from the four drives. With caching disabled,

dd if=/dev/zero of=./test.dat bs=1M count=40000 oflag=direct

which writes at ~404 MB/s and reads at ~496 MB/s.  Not bad!
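In case anyone wants to reproduce the pool setup, it was along these lines; the pool name and device letters are placeholders rather than my exact commands:

# four-drive raidz2 pool, aligned for 4K sectors
zpool create -o ashift=12 mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# cache metadata only (no file data) so benchmark reads actually hit the disks
zfs set primarycache=metadata mypool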

 

In the same pool I created a 1 TB zvol block device, then created an Ubuntu 20 VM using the new block device, /dev/zd0, as its primary disk over VirtIO.

 

Performance was abysmal: writes ~35 MB/s, reads ~37 MB/s (bs=1M count=40000 oflag=direct).  I tried accessing another dataset on the same zpool via SMB, from another Ubuntu VM on the unraid host, and got 20 MB/s writes and 18 MB/s reads (again bs=1M, etc.).
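For reference, the SMB test from the second VM was just a CIFS mount plus the same dd run, roughly like this; the server, share, and user names are illustrative:

# mount the unraid share inside the Ubuntu VM (needs cifs-utils)
sudo mount -t cifs //tower/zfsshare /mnt/zfsshare -o username=smbuser,vers=3.0

# same direct-I/O write test as before
dd if=/dev/zero of=/mnt/zfsshare/test.dat bs=1M count=40000 oflag=direct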

 

Is there a better way?  (What causes such slowness??)  I know about passing in the /dev/sd* drives to let a guest VM create its own zpool, but I wanted to share the darn thing across VMs and not keep it in just the one VM.

 

I've seen the idea of running TrueNAS in a VM. I imagine it'd be quick for that VM, since the devices would be passed straight through, but wouldn't peer VMs suffer the same lag as my previous attempts?  Is it worth going down that path?

 

What's your best method and result for sharing ZFS across VMs?

3 hours ago, Vr2Io said:

Or access by a passthrough NIC instead of VirtIO/bridge.

 

Pass through a physical NIC from the unraid host to a VM?  And that would give faster transfers from a ZFS dataset on unraid to the VM?

 

Unobvious [to newb me], so why not, I'll try it.
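Notes to self for the passthrough attempt; the PCI address below is an example, and my understanding is that on recent unraid releases you bind the device to VFIO under Tools > System Devices and then add it to the VM template:

# find the NIC's PCI address and vendor/device IDs
lspci -nn | grep -i ethernet

# check which IOMMU group it sits in, e.g. for a NIC at 05:00.0
find /sys/kernel/iommu_groups/ -type l | grep '05:00'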


I ran a few more tests at bigger block sizes and got reasonable performance.  Bigger block size = better performance, right up to about what my tiny ZFS array is capable of.  The app for this particular VM is going to be batch-writing big chunks of rows to MariaDB tables, so now I think it'll be fine.  (I know the bigger MB/s and GB/s figures are partly due to write caching; in longer dd tests (500 GB) it averaged out to near the native write performance.)

 

I'll continue benchmarking just for the heck of it.

 

P.S. I did allocate a 2.5 Gbps Ethernet adapter to this VM, but haven't set it up yet.  I take it this would be done using iSCSI, targetcli, etc.? (I've sketched my understanding below the numbers.)  I wonder if performance would be any better.

 

$ dd if=/dev/zero of=./test99_bs8K_c500K.dat bs=8K count=500K oflag=direct status=progress
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 86.478 s, 48.5 MB/s

$ dd if=/dev/zero of=./test99_bs16K_c250K.dat bs=16K count=250K oflag=direct status=progress
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 49.7929 s, 84.2 MB/s

$ dd if=/dev/zero of=./test99_bs32K_c125K.dat bs=32K count=125K oflag=direct status=progress
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 32.96 s, 127 MB/s

$ dd if=/dev/zero of=./test99_bs64K_c62.5K.dat bs=64K count=62500 oflag=direct status=progress
4096000000 bytes (4.1 GB, 3.8 GiB) copied, 22.2345 s, 199 MB/s

$ dd if=/dev/zero of=./test99_bs128K_c31.25K.dat bs=128K count=31250 oflag=direct status=progress
4096000000 bytes (4.1 GB, 3.8 GiB) copied, 12.7734 s, 321 MB/s

$ dd if=/dev/zero of=./test99_bs256K_c15625.dat bs=256K count=15625 oflag=direct status=progress
4096000000 bytes (4.1 GB, 3.8 GiB) copied, 10.2312 s, 400 MB/s

$ dd if=/dev/zero of=./test99_bs512K_c7813.dat bs=512K count=7813 oflag=direct status=progress
4096262144 bytes (4.1 GB, 3.8 GiB) copied, 4.02221 s, 1.0 GB/s

$ dd if=/dev/zero of=./test99_bs1M_c3906.dat bs=1M count=3906 oflag=direct status=progress
4095737856 bytes (4.1 GB, 3.8 GiB) copied, 3.28692 s, 1.2 GB/s
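Re the iSCSI idea in the P.S., here's my rough understanding of the targetcli route in case I try it later; the IQNs, names, zvol path, and IP are made up for illustration:

# on the unraid host: export a zvol as an iSCSI LUN
targetcli /backstores/block create name=vmvol dev=/dev/zvol/mypool/vmvol
targetcli /iscsi create iqn.2023-01.local.tower:vmvol
targetcli /iscsi/iqn.2023-01.local.tower:vmvol/tpg1/luns create /backstores/block/vmvol
targetcli /iscsi/iqn.2023-01.local.tower:vmvol/tpg1/acls create iqn.2023-01.local.ubuntu:client
targetcli saveconfig

# in the VM: discover and log in with open-iscsi (192.168.1.10 = example host IP on the 2.5 Gbps NIC)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10
sudo iscsiadm -m node --login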

 

  • Solution
7 hours ago, tourist said:

I take it this would be done using iSCSI, targetcli, etc.?

 

I've never tried iSCSI or ZFS on Unraid; I've just noticed performance issues when accessing shares through the virtual network bridge, whether it's an array pool or a RAID pool. Passing through a NIC gives much better performance.


The coolest thing about ZFS is that it can create virtual block devices. What I ended up doing was using the second form of `zfs create`, which creates a ZFS volume (zvol) exposed as a block device under /dev:

zfs create -s -V 1T -o volblocksize=4096 -o compression=lz4 mypool/vmvol

 

This created `/dev/zd0`.  Then I created a new VM, Ubuntu Server 20, using that /dev/zd0 over SATA.
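One tip, in case the zd numbering ever shifts after a reboot: ZFS also exposes the volume at a stable path under /dev/zvol, which is the path I'd point the VM template at (pool/volume names as in the create command above):

# stable device path for the zvol; it's a symlink to the current /dev/zdN
ls -l /dev/zvol/mypool/vmvol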

It performs well at larger block sizes.  For database writes of batches of large rowsets (20 MB, 100K rows) it's quite fast: ~450 MB/sec sustained, even as UNRAID's write cache becomes exhausted.

 

When I need to expand the ZFS pool, I can add new drives, once raidz drive expansion hits production.
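My understanding is that, once it lands, adding a disk to the existing vdev should look something like this; the vdev label and device are placeholders:

# attach one more disk to the existing raidz2 vdev (requires an OpenZFS release with raidz expansion)
zpool attach mypool raidz2-0 /dev/sdf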

 

UNRAID has been good to me.  If I'd built separate machines, I'd have spent 3x as much.  If I'd virtualized in AWS, it would have run about $3k per month (AWS estimate).  This build paid for itself in two months.

