
Cache Pooling...Is there no remedy for slow speeds?


JimPhreak


I know there have been noted issues with the write speed of the BTRFS file system used for cache pools on unRAID.  I experienced these issues first hand and then decided to try another route: a 2-to-1 SSD RAID enclosure that presents a RAID1 mirror to unRAID as a single device.  I figured if I could do that and format it with XFS, I'd get better speeds.

 

Well...no dice.  I still have the same speed issues (rarely if ever eclipsing 50MB/s).  It's making me seriously question why I even have a cache drive/pool.

 

Is there no way to get close to 1Gbps cache pool writing in unRAID 6 yet?


This concerns me.

 

I plan to move from an ESXi environment using an Adaptec 6805e RAID controller. The controller is configured with two RAID groups (one JBOD and the other RAID 0), all SSDs. In VMFS/Windows 10 I get around 200-300MB/s write speeds, which is not brilliant but not bad nonetheless.

 

Can someone please outline the optimum configuration in UNRAID based on the following?

 

Adaptec 6805e RAID controller and 4 x 1TB SSD (Samsung EVO)

 

Should I ditch the Adaptec controller, connect the SSDs directly to the Intel SATA ports on the motherboard and add the drives to UNRAID? What would be the best approach?

 

Cheers!

 

 


I just helped someone with a 10Gbps setup, and with 8TB HDDs in the cache we were able to see speeds > 900MB/s.

 

SSDs in 6.1.5 have been configured to use a more optimal IO scheduler, so I encourage you to upgrade to 6.1.5 to see if you get a performance gain.
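If you want to check which scheduler a given SSD ends up on after the upgrade, you can look at it through sysfs (sdX below is just a placeholder for your device):

  # the active scheduler is shown in brackets
  cat /sys/block/sdX/queue/scheduler
  # switch it by hand if you want to experiment (not persistent across reboots)
  echo noop > /sys/block/sdX/queue/scheduler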

 

Also, the more pairs of drives you have in your btrfs pool, the faster it will go.  Four drives have twice the IO potential of two.  That's how btrfs RAID1 works.
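You can see how your pool is currently laid out with the btrfs tools (assuming the usual /mnt/cache mount point; adjust if yours differs):

  # lists the devices that make up the pool
  btrfs filesystem show /mnt/cache
  # shows the data/metadata profiles (e.g. RAID1) and space usage
  btrfs filesystem df /mnt/cache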


I should also note those speeds were sustained throughout a 60TB copy operation.


 

Interesting. What is the optimum configuration for UNRAID when using the following?

I would like to know whether there is any benefit in using the cache and/or array when using all SSDs.

 

1.) 4 x 1TB Samsung EVO SSDs connected to the Intel SATA 6Gbps ports on the motherboard

2.) 1 x Samsung NVMe M.2 connected to the M.2 slot on the motherboard


SSDs belong in a btrfs cache pool, IMHO.  NVMe devices are not yet supported, as they require more changes to our code. They aren't like traditional SSDs at all.

 

Try that configuration in 6.1.5.  If it's still performing slowly, report back.

 

If all SSDs are in the cache pool, do I need to assign some drives to an array as well?

I also thought NVMe drives would still be accessible and assignable in UNRAID? I just want to put some VMs on one. Is this not possible?


UnRAID requires at least one device assigned to the array in order to start it.  If you have no other devices, then you will have to wait for 6.2, when we add support for USB devices in the array; at that point you could just add a basic/cheap USB stick as disk1 to satisfy unRAID's requirement.  Supporting starting the array without an actual array is not a simple tweak, so while that may happen in the future, it's a little ways out from being prioritized.

 

Why would you have thought NVMe devices would be assignable in unRAID?  No, it isn't possible today.  There are other posts on this topic in the forum where this was discussed.

 

NVMe will be supported eventually, but because of the low number of users with these devices, and the fact that we would need to obtain testing equipment to build in support, this too is not a short-term item for us.

 

I tried helping someone get their NVMe device passed through completely to a VM and it would never let the guest OS actually install to it.

 

As I said, NVMe is special and requires special coding on our side to support, so you will have to wait for that support.

 


I don't use a cache pool at the moment, but I recently configured a cache for a friend with 2 x 120GB Kingston V300 SSDs. On v6.1.4 it got 420MB/s+ sustained writes with a 25GB test file. I haven't tested 6.1.5 yet, but I believe that's already close to the maximum write speed.

 

What drives were in the array? I have 4 x 1TB Samsung EVOs. Do I use one for the cache and three for the array?


 

The array is HDDs only, both SSDs are on the cache pool.


 

Thanks for that explanation. It seems at this stage I am not ready for UNRAID, especially since I planned to move from ESXi 6.

A shame, though; the real benefit was being able to use GeForce cards in VMs with UNRAID. I hate having to mod my cards into a Quadro K5200, or use AMD cards, just to use them in VMware.


 

It'll get there eventually.  You're just too far ahead, trailblazing with the NVMe stuff. Once there are more than a handful of people buying and using them, it'll make more sense for us to spend the money on test equipment and the time coding support for it.


I have managed to format my NVMe drive and mount it in unRAID via the command line, to the point of booting a VM. It was mounted as a normal device with a VM image on it.

 

While the VM started, it locked up the entire unRAID instance the second Windows started to load. It may be an issue with how I was mounting it, but it may also be, as Jon stated, that NVMe needs some further changes or configuration.
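For anyone curious, a manual format-and-mount of that sort looks something like the following; the device name and mount point here are only examples, and mkfs wipes the drive:

  # identify the NVMe device (usually /dev/nvme0n1)
  lsblk
  # put a filesystem on it and mount it outside the array
  mkfs.xfs /dev/nvme0n1
  mkdir -p /mnt/nvme
  mount /dev/nvme0n1 /mnt/nvme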

 

There are some of us working on this. Within the unRAID community the number of people using NVMe is limited, although I know a lot of people using the drives elsewhere. I think right now people aren't buying them because unRAID doesn't support them.

 

I could see a large influx of users if unRAID supported the drives, especially for mounting VM images, given the massive speeds.

 

Regards,

Jamie

  • 2 weeks later...

OK, so I just reconfigured my cache pool this weekend to a 3 x 480GB Intel 730 SSD pool.  My usual transfer speeds didn't increase at all, still hovering around 30-40MB/s tops.  However, I did come to a realization...

 

95% of the time I am transferring files to a cache-only share (Downloads).  Any time I transfer to this share, the transfer speed is very poor (30-40MB/s).  However, if I transfer to a non-cache-only share such as Videos (which I rarely if ever do), my transfers pretty much max out my 1Gbps connection (113MB/s).

 

So my question is, what is it about copying to a cache-only share that would cause such a significant drop?
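One way to narrow it down would be to take the network/SMB layer out of the equation and write straight to the pool from the unRAID console, for example (the test file path is just an example):

  # write 8GB directly to the cache pool, bypassing the page cache
  dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=8192 oflag=direct
  rm /mnt/cache/ddtest

If that local write is fast while the SMB copy to the cache-only share is slow, the pool itself probably isn't the problem.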


Does it matter what size your cache drive is? For example, could I buy a cheap 120GB SSD just to speed up writes to the server?

 

No, it doesn't matter, as long as it's big enough to hold the data written between mover schedules, or big enough to hold your Dockers and VMs if you want those on it as well.

  • 5 weeks later...

How on earth can a 4-disk cache pool (all Intel 730 SSDs) not be able to write beyond 40MB/s?  No matter what type of file I try to transfer or what share I copy to (or even directly to the cache), the transfer starts out saturating 1Gbps and then, after about 10-20 seconds, stays somewhere between 30-40MB/s for the remainder of the transfer.

 

Something is not adding up.  With this hardware the speeds should not be this slow.

 

EDIT:  Any chance the bottleneck is my M1015 controller?  I have 12 drives hooked up to it.
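One rough way to test that would be to time a single drive on the M1015 and then several at once; if the combined throughput flattens out well below the sum of the individual drives, the controller (or its PCIe slot) could be the limit. Device names below are placeholders:

  # sequential read timing for one drive
  hdparm -t /dev/sdb
  # three drives on the same controller in parallel
  hdparm -t /dev/sdb & hdparm -t /dev/sdc & hdparm -t /dev/sdd & wait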


So I converted my BTRFS cache pool from RAID1 to RAID10, and while I did see some speed improvements, it's still not where I'd like it to be.

 

With the pool on RAID1 I tried to transfer 130GB comprised of 46 different video files.  After about 15 seconds of saturating my gigabit connection the speed dipped to between 30-40MB/s for the remainder of the transfer.

 

After converting to RAID10, I get peaks and valleys.  The transfer speed stays above 100MB/s for 2-3 files, then dips down to 30-40MB/s for 2-3 files, then goes back up and back down again.  I know this is a cache write issue because when I copy the exact same files back from the cache pool, the speed is 112+MB/s for the entire transfer.

 

I don't know what else to do or if this is just the way it is when transferring multiple large files.
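For reference, a RAID1-to-RAID10 switch like that is normally done with a btrfs balance and new profiles, roughly along these lines (assuming the usual /mnt/cache mount point):

  # convert data and metadata to RAID10, then confirm the profiles
  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
  btrfs filesystem df /mnt/cache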


I can sustain gigabit writes to a RAID1 pool with a couple of Samsung 840 EVO SSDs.

 

That really frustrates me, especially given that I'm using four "prosumer" SSDs in the Intel 730s.  I don't know what to try from here.

 

I'd try to test if your controller is the bottleneck... but honestly I don't know how you would do that. I'm sure someone can come up with something.


I remember someone seeing a big improvement after changing controllers, from an HBA to onboard or the other way around.

 

Also make sure you trim your pool regularly.
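On unRAID the pool normally lives at /mnt/cache, so a manual trim is as simple as the following (adjust the path if yours differs):

  # discard unused blocks on the cache pool; -v reports how much was trimmed
  fstrim -v /mnt/cache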


 

I trim my pool regularly, but I have no option to move to onboard since I have 12 drives and only 6 onboard SATA ports.  I guess I could try updating the firmware on my M1015, but the last thing I want to do is brick the controller, as having my array down for extended periods is a no-no.


