
Not getting desired speeds over ZFS 10g connection - possible I/O bottleneck



Hey all,

 

I'm fairly new to the world of Unraid, so apologies if my terminology is a bit off.

I've set up an Unraid server to act as an 'ingest' before transferring large amounts of file data to LTO tapes.

 

The current setup is as follows:

Data on the Unraid server -> 10g network switch -> mac mini (with 10g support) -> LTO tape drive

 

When I transfer a single 50 GB file to or from the Unraid server, I get a solid 800 MB/s. However, when I transfer a folder containing lots of files, it crawls along at 80-150 MB/s.

 

I currently have a raid0-style (striped) ZFS pool with 3 drives, as I read that this was the only configuration that stripes reads and writes across multiple drives for full combined speed (otherwise you'd be capped at the speed of a single drive). I also managed to add a 1 TB NVMe cache to the ZFS pool, which has helped quite a bit (gone from 50-70 MB/s to around 150).

 

I've been reading up on I/O and how smaller files can drastically reduce transfer speed, but I'm wondering if there is anything I can do to speed up transfers of many small files. Would adding more drives to the pool, or an additional cache, help? Is there anything in the network settings?

 


Small files are always a killer, whatever you do or add.

This is because a "transaction" (a file) needs to be completely written before the receiving end acknowledges it and accepts the next file.

And what people tend to forget is that "completely" includes updates to directories (which usually generate head movements).

These actions take the same amount of time for every file, but on a large file they are, of course, less visible and less annoying.

All of these lists need to be in sync and flushed before the transfer can continue.
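A rough back-of-the-envelope model (my own sketch, not something from this thread; all numbers are illustrative assumptions, not measurements) shows why that per-file cost dominates: total time is roughly files × fixed overhead + bytes ÷ streaming bandwidth.

```python
# Hypothetical model: every file pays a fixed per-file cost (seeks,
# directory updates, flush, acknowledgement) plus payload time at the
# link's streaming bandwidth. The 0.02 s overhead and 800 MB/s figures
# are assumptions chosen only to illustrate the effect.

def transfer_seconds(n_files, total_bytes, per_file_overhead_s, bandwidth_bytes_s):
    """Estimated wall-clock time to send n_files totalling total_bytes."""
    return n_files * per_file_overhead_s + total_bytes / bandwidth_bytes_s

GB = 1_000_000_000
bandwidth = 800 * 1_000_000  # ~800 MB/s streaming rate

one_big = transfer_seconds(1, 50 * GB, 0.02, bandwidth)
many_small = transfer_seconds(50_000, 50 * GB, 0.02, bandwidth)  # 50k x 1 MB files

print(f"one 50 GB file : {one_big:.1f} s (~{50 * GB / one_big / 1e6:.0f} MB/s)")
print(f"50k small files: {many_small:.1f} s (~{50 * GB / many_small / 1e6:.0f} MB/s)")
```

With these assumed numbers the same 50 GB takes about 62 s as one file but over 1000 s as 50,000 small files, i.e. the effective rate collapses from ~800 MB/s to under 50 MB/s even though the payload is identical.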

 


Thanks for the reply! Is there anything that can be done to speed this up? I.e. would adding more drives to the stripe help with the I/O - say, 40 drives instead of 4? Or using an SSD pool instead? Or is it more a limitation of the software and network?


More striped drives would likely make things worse. More spinners improve peak transfer rate, but nothing can improve seek times, and seeks are typically the bottleneck with small files. With more drives you then have to wait for all of them to finish seeking, which can take longer than waiting for a smaller number of them.
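The "wait for all of them to seek" point can be sketched with a toy probability model (my assumption, not a benchmark: each drive's seek time is independent and uniform between 0 and some worst case). A full-stripe I/O only completes when the slowest drive finishes, and the expected maximum of n uniform seek times is n/(n+1) of the worst case, so the expected wait per I/O rises as drives are added.

```python
# Toy model (an assumption for illustration, not a measurement):
# seek time of each drive is uniform on [0, worst_case_ms]. A striped
# read waits for the slowest drive; for n independent uniforms,
# E[max] = n / (n + 1) * worst_case, which grows with n.

def expected_stripe_wait_ms(n_drives, worst_case_ms=15.0):
    """Expected per-I/O seek wait for an n-drive stripe under the toy model."""
    return n_drives / (n_drives + 1) * worst_case_ms

for n in (1, 3, 4, 40):
    print(f"{n:>2} drives: expected seek wait ~{expected_stripe_wait_ms(n):.1f} ms")
```

Under this model a single drive averages half the worst-case seek, while a 40-drive stripe sits near the full worst case on every small I/O - more spindles raise, not lower, the per-file latency floor.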

 

SSDs should be the solution. There will still be some overhead from the network/protocols, but that should be minor in comparison.

Edited by Kilrah
