
BTRFS cache pool using 4 SSD drives over 10G LAN



Hello community! Please share your wisdom, again! :)

I really need some advice on building the optimal cache pool. I have a 10G LAN between my Unraid server and my Mac workstation (huge thanks to SpaceinvaderOne for the amazing videos!). Now I would like to capitalize on my new 10G connection speed.

 

I have 4 SSD SATA drives available:

  • 2 x Samsung 870 QVO, 2 TB
  • 2 x Samsung 850 EVO, 1 TB

 

I placed all 4 into a cache pool (btrfs, obviously) as RAID0. Then I created a share exclusively for that cache pool (with "Use cache pool" set to "Only") and mounted that share on my Mac.
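For anyone reproducing this, the pool's data profile can be double-checked from the Unraid terminal. The mount point below is an assumption (adjust it to your pool's name):

```shell
# Confirm the pool really is striping as RAID0 across all members.
# The path /mnt/cache is an assumption -- use your pool's mount point.
btrfs filesystem df /mnt/cache      # look for "Data, RAID0" here
btrfs filesystem show /mnt/cache    # lists every member device and its size
```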

Now, when I'm writing to or reading from that share on my Mac via 10G, I'm only getting around 350-400 MB/s, which is, to my mind, single-drive performance, while I have 4 of them in RAID0! I'm not expecting 4x speed since the drives are of different sizes, but 2x is something I was hoping to see.
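(For anyone debugging the same thing: a quick way to tell whether the bottleneck is the pool or the network/SMB path is a direct sequential write on the server itself. The mount point below is an assumption; use your own pool's path.)

```shell
# Sequential write test straight to the pool, bypassing SMB entirely.
# oflag=direct skips the page cache so RAM doesn't inflate the number.
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=4096 oflag=direct
rm /mnt/cache/ddtest.bin
```

If this number is far above what SMB delivers, the pool itself is not the limit.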

It is also worth noting that when connecting to the "domains" share (which, in my case, is a cache pool of 2 NVMe drives) I'm getting read/write speeds of about 750-800 MB/s, which is fine with me.

 

 

After this long introduction, my question is about the btrfs RAID0 implementation: should I put all 4 drives into a single RAID0, or is this in fact counterproductive? If so, how should I set up my 4 available SSDs to make sure my Mac can use them at full 10G speed?

 

Many thanks!

 

19 hours ago, JorgeB said:

The 870 QVO is QLC, so writes get very slow once its small SLC cache is used up; you'd likely get much better results with 4 EVO drives.

Thanks so much! You are right, that might be the problem. I'll try to replace the QVOs and see if it makes any difference.

  • 2 weeks later...

OK, a quick update. I replaced the QVOs with EVOs, so I now have a cache pool of 4 x Samsung 850 EVO 1 TB in RAID0. Clearly overkill for Unraid.

With my 10G network, the best file transfer speeds I'm getting are 600-650 MB/s on writes and up to 700 MB/s on reads for large single files (like ProRes videos), and around 350-400 MB/s for package libraries (which are literally folders with a few thousand small files inside).

 

Based on the post below, I assume this is the maximum I could get without setting up direct access to the cache drive. Bypassing the share is rather involved and perhaps isn't worth the risk in my case. Besides, as my main client is an M1 Mac running Big Sur (well known for very weird SMB performance), I assume I should be happy with my latest results.

 

50 minutes ago, mgutt said:

Execute this and repeat an upload test:

sysctl vm.dirty_ratio=50

 

Does it boost your upload speed for several seconds?

 

Thanks! I had already changed it to 40 before, and have now changed it to 70 (I have 64 GB of RAM on the server with just a couple of Docker containers running). Copying a single file from Unraid to the client is now a solid 1-1.1 GB/s; writing the same file back to Unraid is still around 550-600 MB/s.
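For anyone following along, this is just a runtime sysctl; it reverts on reboot unless made persistent (e.g. via the go file). The value 70 here mirrors what I used and is not a recommendation:

```shell
# Show the current write-back cache settings.
sysctl vm.dirty_ratio vm.dirty_background_ratio
# Allow up to 70% of RAM to hold dirty (not-yet-written) pages.
# Runtime-only change; it resets on reboot.
sysctl vm.dirty_ratio=70
```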

 

However, with some bundle files (such as fcpbundle packages), both writes and reads are quite puzzling. I made a quick screen capture to illustrate the case. Please note that the write and read are sequential; nothing is running in the background, and there is no network or server load.

In short: copying to the client starts fast but stalls at around 95%, while writing back to Unraid starts very slow but later speeds up.

Is it an anomaly or something to be expected?

  • 2 weeks later...
On 3/26/2021 at 2:37 AM, temkins said:

Is it an anomaly

It is. I've never seen such a huge drop.

 

Maybe the "fcpbundle" files are sitting in your RAM, and finally hitting the drive causes the huge drop?!

 

Please reduce the RAM cache to 100 MB (do not disable it completely) and empty the cache:

sysctl vm.dirty_bytes=100000000
sync; echo 1 > /proc/sys/vm/drop_caches

 

If you repeat your test now, RAM is no longer used. Are the upload and download permanently slow now?

 

And these files are located on your NVMe pool? So you only see transfer activity on the NVMe drives (in the disk overview)?
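(Once the test is done, the temporary settings can be rolled back; setting vm.dirty_bytes back to 0 re-enables the ratio-based limit. The ratio value below is a common kernel default, but check your own first:)

```shell
# Undo the test configuration.
sysctl vm.dirty_bytes=0     # 0 switches control back to vm.dirty_ratio
sysctl vm.dirty_ratio=20    # 20 is a typical default; verify yours first
```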

