
SSD Cache


greatkingrat


Looking to build an unRAID server, but I'm concerned about performance.  10-15MB/s over 10/100/1000 with 802.1AX link aggregation seems like a waste of a good solid network.

 

I was wondering if using an SSD as the cache drive would get me the performance of the SSD.

 

For instance, if I get a nice SSD (larger than any file I will transfer) instead of the equivalent of a RAID 5 with 8 drives (my other option), would my performance, for the most part, be equal?

 

Building on that to the point of it not being worth the money, but still fun to think about:

 

If I hardware-RAIDed a few SSDs as RAID 0 and used that as the cache, would that be even FASTER?

 

 

 

Final off-topic question - will unRAID allow me to share all the space as one big drive? (Off topic, but not worth another post, hehe.)

Link to comment

Your bottleneck will be the network, so it depends on how many links you're aggregating - and whether you're doing so in a way unRAID allows.

 

Getting an SSD as the cache drive will get you the performance of the SSD, although the very nature of the cache drive means that the cached data will have to be moved onto the main array at some point. So you are ultimately still going to see 10-15MB/s writes, just postponed from your interactive usage.
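To put rough numbers on that postponement: a cached transfer completes at network/SSD speed, but the mover still pays the slow array-write cost later. A back-of-envelope sketch, where the 12.5MB/s figure is just the midpoint of the 10-15MB/s range quoted above and the 20GB batch size is a made-up example:

```python
def flush_minutes(gb_cached, array_write_mb_s=12.5):
    """Minutes for the mover to push cached data to the parity-protected array.

    12.5MB/s is the midpoint of the 10-15MB/s array write speed mentioned
    above; it is an illustrative assumption, not a measurement.
    """
    return gb_cached * 1024 / array_write_mb_s / 60

# A 20GB batch lands on the SSD cache quickly, but still takes roughly
# half an hour to reach the protected array at 12.5MB/s:
print(round(flush_minutes(20), 1))  # ~27.3 minutes
```

The cache only changes *when* you wait, not *whether* the array does the work.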

 

I'm not sure what you mean by your other option being a RAID 5 with 8 drives. Is that instead of unRAID? The two approach data storage and protection very differently and, in my opinion, aren't really equivalent. If you just mean a RAID 5 array of 8 drives as the unRAID cache drive, that's probably overkill.

 

If you hardware-RAIDed some SSDs as RAID 0, and your link aggregation, cards and TCP/IP config supported it (note: I have no idea about unRAID's support), then you would see the fast throughput. But, as above, the cache would ultimately be flushed to the main array at the speed originally mentioned, and your data won't be protected until that point (although it's probably reasonably safe on an SSD anyway).

 

If you are only connecting your unRAID server to your network with a single gigabit port, then an SSD will be a fairly pointless cache drive compared to a cheaper normal SATA hard disk for large sequential writes (i.e. media files). A half-decent SATA drive will push gigabit quite hard on a single write, and an SSD won't be leaps and bounds faster. Of course, if your writing pattern is lots of smaller files, or lots of users generating write I/O at the same time, then the SSD will give you better random performance.
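The gigabit-ceiling argument as arithmetic. The ~10% protocol overhead and the 110MB/s HDD figure are rough assumptions; the 93MB/s figure happens to match the OCZ Vertex sequential write quoted later in the thread:

```python
def effective_write_mb_s(network_mb_s, disk_write_mb_s):
    """Sequential write speed a client sees: the slower of network and disk."""
    return min(network_mb_s, disk_write_mb_s)

# 1Gb/s = 125MB/s on the wire; assume ~10% lost to TCP/SMB overhead.
gigabit_usable = 1000 / 8 * 0.9  # ~112.5 MB/s

print(effective_write_mb_s(gigabit_usable, 110))  # half-decent SATA HDD -> 110
print(effective_write_mb_s(gigabit_usable, 93))   # an SSD writing at 93 -> 93
```

Either way the pipe is nearly full, which is why a good spinning disk gets you most of the way there for sequential traffic.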

 

The benefits of an SSD as a cache drive (on a gigabit port) are form factor, power consumption/heat, noise, and the fact that they are (probably... time will tell) more reliable than spinning disks, which is useful when staging to the main protected array. The form factor also means you don't necessarily have to use a drive bay or hot-swap bay - you can simply secure it to the case as you see fit.

 

Money being no object I would certainly use an SSD over a normal drive for the reasons above.

 

For your final question - in a fashion. It will merge all your individual disks into one large virtual disk which you can present to clients. Look up user shares in the manual / wiki / forum.

 

This isn't quite the same as a true single disk but it works well enough. There is a slight performance overhead for doing this.

Link to comment

A 1.5TB Seagate can be read and written at about 120-125MB/s on the outer tracks.

If your network can push data faster than 125MB/s, then an SSD might be worthwhile.

 

I don't think a gigabit network can push data that fast.

 

Where an SSD might be worthwhile is in not having to spin up a platter when you want to drop off a file.

It could be dropped off immediately and moved to the array at leisure.

 

In my case, this would be a big help: many times I will go to download a file and drop it off in an unRAID folder, only to end up waiting for the drive to spin up!

 

 

I found a pretty neat mount by Scythe called the Slot Rafter that lets you attach up to four 2.5" drives to a PCI-style bracket!

http://www.scythe-usa.com/product/acc/064/slotrafter_detail.html


Link to comment

That's exactly the information I needed, thank you.

 

I'll have to test it out with just one connection first, then go up and see if it supports it.

 

As for the SSD, I'll do the above first, since there's no promise it's supported, and if it's not, it's overkill.  There will only be three, maybe four users connected at any given time.

The RAID 5 with 8 drives was what I was debating.  It'd be more expensive (I'd have to purchase all the same drives, rather than use extras lying around) but faster access.  At this point I've firmly decided on unRAID, at least for now.

 

 

Good to hear that you can present it as one disk.  I realize it's not like RAIDing them into one, but that suits my needs.

 

 

 

Link to comment

A 1.5TB Seagate can be read and written at about 120-125MB/s on the outer tracks.

If your network can push data faster than 125MB/s, then an SSD might be worthwhile.

 

I don't think a gigabit network can push data that fast.

 

The difference in latencies between HDDs and SSDs is so big that I would test before dismissing any potential benefits. Bandwidth is, of course, the biggest factor but timing issues could have non-trivial effects.

 

When testing, make sure you also test the transfer rate from the cache drive to the unRAID array ...

Link to comment

An SSD as cache can make a difference, but how much of a difference will depend on a number of factors.  You need speed on BOTH ends: even if the SSD is on the receiving end and your network can handle the data rate, the rate of the drive you are READING from will then limit you.  (And which SSD you pick will make a HUGE difference... many of them have worse performance than even a low-end hard drive.)

 

So before you spend big bucks on an SSD, put some extra RAM in unRAID and set up a ramdisk... and set up a ramdisk on your desktop (I recommend VSuite Ramdisk (Free Edition))... then copy from ramdisk to ramdisk over the network.  (A ramdisk is more than 10 times faster than even the best SSDs on the market.)

 

Then try copying from your desktop hard drive to the ramdisk on unRAID.

 

You will likely see a significant difference if you are on a good Gigabit network.
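One way to run that comparison without special tools is a small timing script pointed at different destinations, e.g. a mapped ramdisk share versus a cache share. A sketch, not a polished benchmark; the mount points in the example are hypothetical:

```python
import os
import time

def write_speed_mb_s(path, mb=256):
    """Write `mb` MB of zeros to `path`, fsync, and return the MB/s achieved."""
    chunk = b"\0" * (1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # count time to reach the device, not just OS cache
    elapsed = time.time() - start
    os.remove(path)           # clean up the test file
    return mb / elapsed

# Example usage (hypothetical mount points):
# print(write_speed_mb_s("/mnt/ramdisk/test.bin"))
# print(write_speed_mb_s("/mnt/cache/test.bin"))
```

Sequential zeros are a best-case pattern; it's the relative difference between destinations that matters here.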

 

For comparison, I have an OCZ Vertex that tests out at:

   Sequential Read :  178.605 MB/s
  Sequential Write :   93.124 MB/s
Random Read 512KB :  149.647 MB/s
Random Write 512KB :   91.534 MB/s
   Random Read 4KB :   25.848 MB/s
  Random Write 4KB :    8.839 MB/s

 

And an Mtron SSD in another system:

 

   Sequential Read :   30.633 MB/s
  Sequential Write :   28.340 MB/s
Random Read 512KB :   30.526 MB/s
Random Write 512KB :   29.401 MB/s
   Random Read 4KB :    6.447 MB/s
  Random Write 4KB :    3.516 MB/s

 

But a VSuite Ramdisk leaves everything in its dust:

 

   Sequential Read : 3701.884 MB/s
  Sequential Write : 2917.564 MB/s
Random Read 512KB : 3582.635 MB/s
Random Write 512KB : 2855.466 MB/s
   Random Read 4KB :  749.280 MB/s
  Random Write 4KB :  625.422 MB/s

Link to comment
  • 2 weeks later...

I am thinking of installing a cache drive to speed up my uploads to the server.

I don't need a cache disk any larger than 50-100GB.

 

Thinking about it now, I can only transfer as fast as my 7200RPM laptop drive can read.

 

That being said, should I purchase a WD RE3 500GB or a WD Black 1TB? Both are the same price.

Link to comment

That being said, should I purchase a WD RE3 500GB or a WD Black 1TB? Both are the same price.

 

WD Black 1TB.

It has a 32MB cache and denser platters.

This will yield a higher transfer rate and also give you a possible emergency spare should one of your other 1TB drives fail.

Link to comment

What is the bitrate of the transfers?  You copy 1MB and it takes 15 minutes?

 

 

So how much do you want to spend to get a 10% increase in speed?

 

The point of this exercise is to show that it is pointless to waste money on an SSD for a cache disk.  Your LAN speed can't keep up with it.  Your source disk can't keep up with it.  You don't need it.  It is like buying a digital camera with 24MP when no lens you can put on that camera can resolve even half that resolution.

 

If you are copying files, it is an even bigger waste of money.  Start the copy, minimize it, and go on about other chores.  You don't babysit a file copy.

 

Ask yourself this... does the write speed you have actually delay you from doing something, in a way where that time is lost and not available to another task?

 

Copying a DVD in 3 minutes instead of 4 is a bragging right, not a necessity (unless someone is bringing you a box of 50 DVDs to rip and you have to return them in 8 hours - but when speed counts, copy to a LOCAL disk, then move to unRAID later).
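The DVD example in round numbers. A 4.7GB single-layer disc is assumed, and the speeds are simply back-calculated from the 3-vs-4-minute figures:

```python
def copy_minutes(gb, mb_per_s):
    """Minutes to move `gb` gigabytes at a sustained `mb_per_s` rate."""
    return gb * 1024 / mb_per_s / 60

# 4 minutes needs a sustained ~20MB/s; 3 minutes needs ~27MB/s.
# The faster setup buys roughly one minute per disc.
print(round(copy_minutes(4.7, 20), 1))  # ~4.0
print(round(copy_minutes(4.7, 27), 1))  # ~3.0
```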

 

Now if the difference is copying it in 4 seconds versus 4 minutes, that's nice, but again, at what cost?

 

If someone wastes time twiddling their thumbs watching a file copy for 4 minutes, there's a whole lot more wrong than a lack of a cache disk.

 

This isn't to say there is never justification for spending $$$ for a marginal speed increase for writing to unRAID -- I just don't see it except in very rare circumstances, given that the improvement is only marginal, and not an order of magnitude.

Link to comment

I understand that the SSD isn't worth the cost because its full potential isn't utilized. But I was under the impression that implementing a cache drive would at least double my upload speed, since it isn't writing to parity at first.

 

I do all of my work on my laptop. Some mornings I do find myself waiting for a transfer to complete before I can leave the house with my laptop.

 

Let's say that without a cache disk I'm copying some movies over to the server and it's going to take an hour. During that hour I want to stream a Blu-ray disc to my HTPC, and my fiancée wants to upload and download her latest batch of photos. My friend also wants to watch a movie and upload his music to the server.

 

Would the benefits of the cache disk now become apparent by getting the uploading tasks finished sooner and freeing up bandwidth and equipment resources sooner?
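The scenario above, put roughly into numbers. The 14MB/s parity-write figure appears later in the thread; the ~70MB/s cache-drive figure and the upload sizes are hypothetical assumptions:

```python
def busy_minutes(gb, write_mb_s):
    """How long an upload ties up the sending client at a given write speed."""
    return gb * 1024 / write_mb_s / 60

uploads_gb = [30, 4, 6]  # movies, photos, music: made-up sizes

direct = sum(busy_minutes(gb, 14) for gb in uploads_gb)  # parity-protected write
cached = sum(busy_minutes(gb, 70) for gb in uploads_gb)  # write to cache drive

# Clients (and their share of LAN bandwidth) are freed roughly 5x sooner
# when writes land on a cache drive first.
print(round(direct), round(cached))
```

Note this only frees the *clients*; the array still absorbs the same data later when the mover runs.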

Link to comment

So how much do you want to spend to get a 10% increase in speed?

 

The point of this exercise is to show that it is pointless to waste money on an SSD for a cache disk.  Your LAN speed can't keep up with it.  Your source disk can't keep up with it.  You don't need it.  It is like buying a digital camera with 24MP when no lens you can put on that camera can resolve even half that resolution.

...

This isn't to say there is never justification for spending $$$ for a marginal speed increase for writing to unRAID -- I just don't see it except in very rare circumstances, given that the improvement is only marginal, and not an order of magnitude.

 

Actually I was thinking of having one so writes were cached onto the SSD, without triggering spinups of drives until there was a queue of writes to be moved.

 

For me, it wasn't about speed, it was purely about dropping files off without a spin up.

Link to comment
Would the benefits of the cache disk now become apparent by getting the uploading tasks finished sooner and freeing up bandwidth and equipment resources sooner?

 

The only thing it will free up is the workstation, which can drop the files off at a faster rate than it could to a parity-protected drive.

As I mentioned in the other post.. my thoughts on an SSD cache would be for preventing spin ups completely.

 

If I were doing it for speed alone, a 1.5TB Seagate is about the fastest cost-effective drive outside of an SSD...

And it's plenty fast enough to handle the LAN speed and the work of a cache drive.

It can also do double duty as a spare drive, should a data drive die and you want to rebuild it as soon as possible.

 

You can live without a cache drive.. would you want to live without parity protection should a drive fail?

 

Link to comment

As I said before, there is no need to put something faster than your network or faster than your source disk on the receiving end.

 

You can set the cache disk to never spin down... it's particularly appropriate to put the cache on the same drive as the OS and swap, which you never want to spin down anyway.

 

So what kind of upload speeds would I see with the cache drive implemented?

 

It depends on many variables.
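One of those variables usually dominates, though: for a single large sequential write, the ceiling is roughly the slowest link in the chain. A sketch with illustrative figures only (the ~60MB/s laptop-drive and ~100MB/s WD Black numbers are assumptions, not benchmarks):

```python
def expected_upload_mb_s(source_disk, network, cache_disk):
    """Rough ceiling for one sequential upload: the slowest stage wins."""
    return min(source_disk, network, cache_disk)

# 7200RPM laptop source (~60MB/s), usable gigabit (~110MB/s),
# WD Black cache (~100MB/s): the laptop drive becomes the new bottleneck.
print(expected_upload_mb_s(60, 110, 100))
```

So with a cache drive the answer shifts from "14MB/s parity writes" to "whatever your source disk and network can sustain."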

Link to comment

Ballpark figures.

 

I know that right now I'm getting average upload speeds of 14MB/s, which from what I understand is as good as it gets for unRAID writes. Now say I insert a 1TB WD Caviar Black (as weebo suggests) into the array and assign it as the cache drive. When I upload a file with the cache drive set up, what should I typically see for upload speeds?

Link to comment

Archived

This topic is now archived and is closed to further replies.
