
Very slow cache write



Previously I had 2 SATA SSDs in BTRFS RAID 1 as my cache and performance was very slow, so I decided to add 2 more SSDs and switch to BTRFS RAID 5. However, write speeds still seem to be abysmal.

 

To confirm that I didn't just have unrealistic expectations, I tried writing an identical folder first to my Unraid cache and then to a Windows 10 PC with a single budget NVMe SSD.

 

It took 65 seconds to write the folder to the Unraid RAID 5 cache and 37 seconds to write it to the Windows 10 PC with the lone budget NVMe SSD.

 

Looking at the transfers themselves, Windows reports that during the part of the transfer dealing with thousands of smaller files the Unraid cache slows to a grinding halt, with transfer speeds of just a few hundred kilobytes per second, while the cheap NVMe SSD manages to stay above at least 1 MB/s at all times.

[Screenshots: writing to the BTRFS RAID 5 cache; writing to the cheap NVMe SSD on Windows 10]

 

I know the Windows transfer window is just a guesstimate, but given the total transfer times it seems to be accurate.
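
For what it's worth, a rough way to take the Windows dialog out of the picture is to time the small-file writes directly on the server, e.g. with a quick Python script against a path on the cache pool (the path and file counts below are just placeholders to adjust):

import os, time

# Placeholder test location on the cache pool -- adjust to your setup.
TARGET = "/mnt/cache/speedtest"
NUM_FILES = 2000           # thousands of small files, like the real transfer
FILE_SIZE = 16 * 1024      # 16 KiB each

os.makedirs(TARGET, exist_ok=True)
payload = os.urandom(FILE_SIZE)

start = time.monotonic()
for i in range(NUM_FILES):
    with open(os.path.join(TARGET, f"file_{i:05d}.bin"), "wb") as f:
        f.write(payload)
os.sync()  # flush to disk so we measure the SSDs, not the page cache
elapsed = time.monotonic() - start

total_mb = NUM_FILES * FILE_SIZE / 1024 / 1024
print(f"{NUM_FILES} files ({total_mb:.1f} MiB) in {elapsed:.1f}s "
      f"= {total_mb / elapsed:.2f} MiB/s")

Running it locally also separates the pool itself from SMB, since network transfers add per-file round trips that can dominate with thousands of small files.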

 

Surely 4 SATA SSDs working together in RAID 5 can't be H A L F the speed of a cheap, low-capacity NVMe SSD? Surely...?

vault-diagnostics-20200725-2338.zip

Edited by Redspeed93

Small files are not ideal for testing speed; is speed normal with large files? You can also try the new beta: it aligns SSDs on the 1MiB boundary (this requires reformatting the pool), which usually results in better performance.

 

Example of a transfer of 3 large files totaling about 15GB; the destination is a RAID 5 pool with 5 cheap 120GB SSDs:

 

[Screenshot: transfer of 3 large files to a 5-SSD RAID 5 pool]
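
For reference, you can check whether an existing pool member already starts on the 1MiB boundary by reading the partition's start sector from sysfs, e.g. with a couple of lines of Python (the device/partition names sdb/sdb1 are placeholders, substitute your own):

# Check whether a partition starts on a 1MiB boundary.
# Device/partition names (sdb/sdb1) are placeholders -- substitute your own.
SECTOR_SIZE = 512  # sysfs reports the start in 512-byte sectors

with open("/sys/block/sdb/sdb1/start") as f:
    start_sector = int(f.read())

offset_bytes = start_sector * SECTOR_SIZE
aligned = offset_bytes % (1024 * 1024) == 0
print(f"partition starts at byte {offset_bytes} "
      f"({'1MiB-aligned' if aligned else 'not 1MiB-aligned'})")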

 

 

4 hours ago, johnnie.black said:

Small files are not ideal for testing speed; is speed normal with large files? You can also try the new beta: it aligns SSDs on the 1MiB boundary (this requires reformatting the pool), which usually results in better performance.

Example of a transfer of 3 large files totaling about 15GB; the destination is a RAID 5 pool with 5 cheap 120GB SSDs:

[Screenshot: transfer of 3 large files to a 5-SSD RAID 5 pool]

I would disagree completely. If the point of the cache were simply to receive large files, I might as well not have a cache at all, or just use HDDs as the cache. The entire point of having a cache and using SSDs, at least for me, is to avoid the slow write speeds I would otherwise encounter with HDDs when writing many smaller files.

 

While I don't have any hard numbers, write speeds now, after adding 2 more SSDs to the cache, feel basically identical to what they were previously with only 2 SSDs in RAID 1, which adds to my belief that there is either a configuration error or a bug.
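
If it would help, I can gather that data with a simple sequential-write timing run directly against the pool, something like this sketch (the path and sizes are placeholders):

import os, time

PATH = "/mnt/cache/bigfile.bin"   # placeholder path on the cache pool
CHUNK = 4 * 1024 * 1024           # 4 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024    # 4 GiB file

buf = os.urandom(CHUNK)
start = time.monotonic()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())  # force the data onto the SSDs before stopping the clock
elapsed = time.monotonic() - start
print(f"{TOTAL / 1024**2:.0f} MiB in {elapsed:.1f}s "
      f"= {TOTAL / 1024**2 / elapsed:.1f} MiB/s")
os.remove(PATH)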

