Cache pool best practices



The goal: create a cache pool with the best possible read/write speeds, so it can serve as a game drive or for other high-demand functions.

What has been done so far: I first started off with a cache pool of four 1TB SSDs. I put them into a RAID10 with btrfs and have not made any other changes to the default settings. I then added two more 4TB SSDs, for a total of six drives. It currently shows that I have the 6TB I am expecting. I created a share to reside on the cache drive. Over the last couple of days I have been able to move about a TB of data to it, and it's been decent.

The first observation of an issue: Earlier today I started to copy about another 3TB to the Unraid server. This is data that will reside on the spinning drives when done. With that, I would expect the cache drive to get to about 4TB full, still giving me about 2TB of space. The thing is, it got about 1.5TB into the copy and then said the drive was full, while Unraid showed I still had more than 2TB of space available. I started the mover to clear out the cache, and I can move files to it again.

The second observation of an issue: I have a 10Gb network between my systems. I am currently copying another TB of data over, but I am hovering in the low 70s MB/s of transfer speed coming from a Windows 10 machine.

Questions:

1.) Is 70MB/s a good transfer rate on a 10Gb network going to a RAID10 btrfs cache pool?

2.) Are there performance gains from adding more drives to the RAID10 on Unraid, like in a "normal" RAID10?

3.) Is there an issue with uneven drive sizes in a RAID10 when it comes to reported space?

4.) Are there any other settings I should tune in order to get the best speed?

 

Server: PowerEdge R620

Unraid 6.8.2

 

Please share your thoughts.

Thanks

 

 

 

 


You may need to balance your btrfs pool. Read the FAQ johnnie mentioned for how to do it. That's one cause of spurious out-of-space errors.
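For reference, a balance can also be kicked off from the command line. This is only a sketch: it assumes Unraid's default cache mount point of `/mnt/cache` and a 75% usage filter, both of which you should adjust for your own system:

```shell
# Rewrite only chunks that are less than 75% full, compacting them.
# This reclaims partially-filled chunks, a common fix for spurious
# out-of-space errors on btrfs pools.
btrfs balance start -dusage=75 -musage=75 /mnt/cache

# Check progress from another terminal while it runs
btrfs balance status /mnt/cache
```

A filtered balance like this is much faster than a full one, since it skips chunks that are already well packed.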

 

Another possibility is that a RAID10 of 4x1TB and 2x4TB giving 6TB of available space is just not a viable option. Doing RAID with uneven drive sizes is generally not a good idea. Traditional RAID doesn't let you do that.

 

I think your pursuit of max speed with (presumably SATA) SSDs is heavily misguided.

What "high demand function" are you doing?

A game drive does not at all offer any perceivable improvement going from SATA to NVMe (or as I have tested, even SATA to Optane 905p!)

9 hours ago, FantomDew said:

What has been done so far: I first started off with a cache pool of four 1TB SSDs. I put them into a RAID10 with btrfs and have not made any other changes to the default settings. I then added two more 4TB SSDs, for a total of six drives. It currently shows that I have the 6TB I am expecting.

In raid10 only 3TB will be usable with that pool; see here:

https://carfax.org.uk/btrfs-usage/
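To see where that 3TB figure comes from, here is a rough sketch of the greedy chunk-allocation model that calculator uses, applied to this thread's mix (the sizes in GiB are assumptions: two ~4000 GiB and four ~1000 GiB devices). Real btrfs allocates to the devices with the most free space; with this mix, letting every non-empty device participate each round gives the same answer:

```shell
#!/usr/bin/env bash
# Simplified model of btrfs raid10 allocation with mixed drive sizes.
# Hypothetical pool matching the thread: two 4TB + four 1TB SSDs (GiB).
free=(4000 4000 1000 1000 1000 1000)
usable=0
while :; do
  # indices of devices that still have unallocated space
  avail=()
  for i in "${!free[@]}"; do (( free[i] > 0 )) && avail+=("$i"); done
  n=${#avail[@]}
  (( n < 4 )) && break          # raid10 needs at least 4 devices
  (( n % 2 )) && (( n-- ))      # and an even stripe width
  # take a 1 GiB slice from each participating device; half the raw
  # space holds data, the other half the mirror copy
  for (( k = 0; k < n; k++ )); do (( free[avail[k]]-- )); done
  (( usable += n / 2 ))
done
echo "usable data space: ${usable} GiB"
```

Once the four 1TB drives fill up, only two devices remain with free space, and raid10 can no longer allocate, stranding the rest of the 4TB drives. For this mix the script prints 3000 GiB, matching the 3TB above.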


OP, your story doesn't make sense. There's no way to have a 6TB RAID10 cache pool with 4x1TB and 2x4TB drives. The most you would get is 3TB usable, with 6TB of wasted/unusable space. I suspect what happened here is that when you added the two new 4TB SSDs, the cache pool reverted back to RAID1.

 

As per your questions:

1 - No. 70MB/s is 560Mbps. That speed doesn't even saturate a single 1Gbps link.

2 - Generally yes. Scaling out RAID10 means the data can be striped and mirrored across more drives. However, RAID10 has a very high capacity overhead; beyond about 8 drives you might be better off using RAID5 or 6, unless money and chassis space are no object.
 

3 - With RAID10 the drive sizes should be equal and the drive count always even. If you mix drive sizes, btrfs will only use an amount of space on each drive equal to the smallest drive; anything beyond that is unusable. So mixing 4TB drives with 1TB drives means you're losing 3TB on each 4TB drive, because btrfs will only use 1TB of it.

 

4 - Not really. Consider the other aspects of your entire network and find the next bottleneck.

 

 

On 3/3/2020 at 1:56 AM, testdasi said:

Doing RAID with uneven drive sizes is generally not a good idea. Traditional RAID doesn't let you do that.

^ Btrfs is designed to operate with uneven disk sizes. There is no harm besides not being able to use the full capacity of the drives in some scenarios.

 

On 3/3/2020 at 1:56 AM, testdasi said:

A game drive does not at all offer any perceivable improvement going from SATA to NVMe (or as I have tested, even SATA to Optane 905p!)

^ Umm... that's a bit of a misguided statement. There is a major difference in bandwidth: PCIe 3.0 x4 NVMe is 3.94 GB/s vs SATA's 600MB/s. Just because the game or application can't use all of that bandwidth doesn't mean there will be no difference in performance, especially when transferring between drives. Playing games, you generally won't notice a huge improvement besides faster loading. Most of the time the game's assets are just loaded from disk into VRAM on your video card, so the disk isn't a major factor anyway. However, with game sizes only getting larger, the benefits of NVMe will be realized soon enough.


Thanks everyone. The drive is meant to be fast at reads and writes, but obviously probably not as fast as NVMe. I was looking for a centralized place to run my games, and maybe some VMs, off of in the future. With so many parts involved (client network, main network, client drive, Unraid drive system), I am trying to find some good troubleshooting steps and what the outcomes should be. For example, if I use iperf, what should I be getting? Is there a chart that says, for this kind of hardware, here's the ballpark you should be in?
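One common first step is to test the network in isolation from the disks with iperf3. A sketch, assuming iperf3 is installed on both machines and using a placeholder address (`192.168.1.10` stands in for your Unraid server's IP):

```shell
# On the Unraid server: listen for test connections
iperf3 -s

# On the Windows 10 client (iperf3.exe in cmd/PowerShell):
# 4 parallel streams for 30 seconds against the server
iperf3 -c 192.168.1.10 -P 4 -t 30
```

On a healthy 10Gb link, the aggregate should land somewhere around 9 Gbit/s. If iperf3 is already far below that, the bottleneck is the network path (NICs, cabling, switch, tuning), not the cache pool; if iperf3 is fast but file copies are slow, look at the source drive and SMB instead.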

 

As for the drive size, I figured that since Unraid can use all the space of all the drives in its main pool, it had figured out a way to do that in its cache pool too. If not, perhaps someone should tell Unraid? Because it's showing that the cache drive is 6TB.

 

Thanks for the help

Cached drive.PNG

21 minutes ago, FantomDew said:

As for the drive size, I figured that since Unraid can use all the space of all the drives in its main pool, it had figured out a way to do that in its cache pool too. If not, perhaps someone should tell Unraid? Because it's showing that the cache drive is 6TB.

Cached drive.PNG

Might want to double-check that the cache pool is at the RAID level you want. I suspect it reverted to RAID1 when you added the new cache drives. Selecting the first cache device in the GUI will show the btrfs RAID level in the "btrfs filesystem df" output.
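The same check works from a terminal. A sketch, again assuming the default `/mnt/cache` mount point (the output values below are purely illustrative, not from this pool):

```shell
# Show the allocation profile for data, metadata, and system chunks
btrfs filesystem df /mnt/cache

# Illustrative output for a pool that really is raid10:
#   Data, RAID10: total=1.00TiB, used=900.00GiB
#   Metadata, RAID10: total=2.00GiB, used=1.20GiB
#   System, RAID10: total=64.00MiB, used=128.00KiB
```

If any of those lines says RAID1 (or a mix of profiles), a balance with `-dconvert`/`-mconvert` would be needed to finish the conversion.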

 

10 hours ago, Eased said:

I suspect it reverted to RAID1 when you added the new cache drives.

For some time now Unraid has kept the current profile when adding new devices; it won't revert back to raid1.

 

The pool is correctly configured for raid10, but as mentioned, the usable space reported in the GUI will be wrong; this is a known btrfs issue when using different-size devices.
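When the GUI total is misleading like this, btrfs itself can give a raid-aware estimate. A sketch, assuming the default `/mnt/cache` mount point:

```shell
# Per-device allocation plus an overall summary; the
# "Free (estimated)" line accounts for the RAID profile and the
# uneven device sizes, so it is more trustworthy than the GUI total
btrfs filesystem usage /mnt/cache
```

The per-device section also makes the stranded space visible: once the smaller drives are fully allocated, the remaining unallocated space on the larger drives will never be usable under raid10.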

