
Cache drive setup // Different way of using SSD cache


outsider


I was wondering if this specific use case of the cache drive is possible:

 

Currently most people use a cache drive to improve write speed to their array, and the mover script moves that data off as per the scheduled settings. Using an SSD is great, but for the most part it leaves the SSD almost always empty.

All the reading from the array takes place off platter HDs, which are much slower than an SSD. You only read data off the SSD (at SSD speeds) up to the point where the data is moved off by the mover script.

 

Is it possible to somehow set up the cache drive (and the mover script) to keep a maximum amount of data on the SSD, so that both reads and writes are accelerated, while the data is only copied (synced) to the array rather than moved off the SSD?

Somehow set up the cache drive to maintain a certain amount of free space for new writes (say 20-30% of SSD capacity?), and otherwise keep the newest stuff written to it on the SSD. The mover script can copy the data to the array for redundancy like it currently does (but just copy it, not move it).

 

If I stick a 256 or 512GB SSD as a cache, I'd like to make use of it as much as possible, not just 10-30GB that I write to it daily.

 

What I'm thinking of (in my limited Linux knowledge) is an rsync script that runs hourly and syncs the data from the SSD to the array, plus another script that cleans up the SSD to always maintain a certain preset percentage of free disk space (by clearing out the oldest-modified files and leaving only the newest files on the SSD).
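Something along these lines could work as a starting point. This is only a rough sketch of the copy-then-prune idea: on unRAID the real paths would be /mnt/cache (the SSD) and /mnt/user0 (the array shares, bypassing the cache), but the script defaults to temp directories so it can be tried anywhere, and the 25% free-space target is just an assumption to tune.

```shell
#!/bin/bash
# Sketch of the hourly copy-then-prune idea (NOT the stock mover).
# CACHE/ARRAY default to temp dirs for safe experimentation; on unRAID
# they would be /mnt/cache and /mnt/user0.  MIN_FREE_PCT is an assumption.

CACHE=${CACHE:-$(mktemp -d)}
ARRAY=${ARRAY:-$(mktemp -d)}
MIN_FREE_PCT=${MIN_FREE_PCT:-25}   # keep at least this % of the SSD free

# Step 1: copy (not move) everything on the SSD to the array,
# so the array always holds a redundant copy.
rsync -a "$CACHE"/ "$ARRAY"/

# Helper: current free space on the cache filesystem, as a percentage.
free_pct() {
  df -P "$CACHE" | awk 'NR==2 {gsub(/%/,"",$5); print 100-$5}'
}

# Step 2: if free space is below the target, delete the oldest-modified
# files from the SSD until the target is met.  Everything was already
# copied in step 1, so nothing is lost.
find "$CACHE" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
while IFS= read -r f; do
  [ "$(free_pct)" -ge "$MIN_FREE_PCT" ] && break
  rm -f -- "$f"
done
```

Run from cron hourly, this keeps the array in sync while the SSD fills up with the most recently written data.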

 

Does that make sense? Is this possible? (has this already been done?)

Any ideas how to implement this?


Reading over a gigabit network is not limited by drive speed unless you have fairly old drives. Writing to the parity-protected array is much slower than reading, hence the addition of the cache drive. The only benefit I can see is either in local on-server access for VMs, or in random small-file access that takes advantage of the much faster seek times of an SSD. That possibility has been explored, and there was some work done on a proof of concept for keeping metadata-type stuff on an SSD. Searching the forum for "accelerator drive" may find something on it.

 


All the reading from the array takes place off platter HDs, which are much slower than an SSD. You only read data off the SSD (at SSD speeds) up to the point where the data is moved off by the mover script.

 

 

I'm looking for this also.  I don't have my system set up yet (still in pre-clear), but I plan on eventually implementing something like this.

https://lime-technology.com/forum/index.php?topic=34434.0

 

I have a lot of small metadata and images (fan art) for SageTV and Plex.  I think this would greatly speed up the reading of those small files.  If you have a large enough SSD, you may also be able to do something based on file dates, i.e. keep the newest files on the SSD so you don't spin up the HDs.
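The date-based variant could look something like this. Again only a sketch under assumptions: paths default to temp dirs (on unRAID they would be /mnt/cache and /mnt/user0), the 30-day cutoff is arbitrary, and a file is only deleted from the SSD when a same-size copy already exists on the array.

```shell
#!/bin/bash
# Sketch: prune cache files older than a cutoff, keeping the newest
# files on the SSD so reads rarely spin up the HDs.  CACHE/ARRAY and
# the DAYS cutoff are assumptions for illustration.

CACHE=${CACHE:-$(mktemp -d)}
ARRAY=${ARRAY:-$(mktemp -d)}
DAYS=${DAYS:-30}

prune_old() {
  # Walk files last modified more than $DAYS days ago.
  find "$CACHE" -type f -mtime +"$DAYS" -print | while IFS= read -r f; do
    rel=${f#"$CACHE"/}
    # Only delete if the array already has a copy of the same size,
    # so the prune never discards the sole copy of a file.
    if [ -e "$ARRAY/$rel" ] &&
       [ "$(stat -c%s "$f")" -eq "$(stat -c%s "$ARRAY/$rel")" ]; then
      rm -f -- "$f"
    fi
  done
}
```

Combined with an hourly rsync copy, this would leave exactly the most recent month of files readable at SSD speed.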

 

 


Archived

This topic is now archived and is closed to further replies.
