What I'd like to see in Unraid 7: a read-cache SSD

Writing to the array through an NVMe SSD cache drive works nicely and serves well on LANs at 10 Gbit/s and more.

But once the Mover has done its job, read performance drops back to ridiculously slow spinning-drive speeds.


What I'd like to see is another pool, designated as a read cache for all shares (or configurable per share, but that does not really matter).


* If a file is requested, first check whether it is already on the cache.
  * If yes, check whether the cached copy is still recent (size, time of last write and so on).
    * If recent (the last cache write is younger than the array file's last modification time), reading continues from the cache drive (exit here).
    * If not, delete the file from the cache SSD (no exit; continue with the next step as if the file had never been on the cache at all).
* If no, check the free space of the cache to see whether the requested file would fit.
  * If it does not fit right now, but the cache COULD hold the file, delete the oldest file from the cache and redo the check (loop until enough space is freed up).
  * Read the file from the array, write it to the LAN, but also write it to the cache drive and record the current time on the cache too.


* If a file is closed:
  * If it came from the cache:
    * Update the "time of last write" on the cache.

(This lets a file "bubble up" to protect it from early deletion when space is needed. Frequently used files will this way stay on the cache for a longer period, whereas files that were only asked for once will be preferred for cleanup.)
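The lookup/eviction flow above could be sketched roughly like this. This is a minimal illustration in Python, not an existing Unraid API: the `ReadCache` class, its method names, and the use of the modification time alone for the recency check (the size check is omitted) are all my assumptions.

```python
import os
import shutil


class ReadCache:
    """Read-through cache sketch: array files are copied to a fast pool
    on first read and served from there afterwards. Hypothetical names."""

    def __init__(self, array_dir, cache_dir):
        self.array = array_dir
        self.cache = cache_dir

    def _paths(self, rel):
        return os.path.join(self.array, rel), os.path.join(self.cache, rel)

    def _is_recent(self, rel):
        # cached copy counts as fresh if its last cache write is not older
        # than the array file's last modification (size check omitted here)
        src, dst = self._paths(rel)
        return os.path.exists(dst) and os.path.getmtime(dst) >= os.path.getmtime(src)

    def _evict_until(self, needed):
        # LRU-style eviction: drop the oldest cached files until `needed` bytes fit
        entries = sorted(
            (os.path.getmtime(p), p)
            for root, _, files in os.walk(self.cache)
            for p in (os.path.join(root, f) for f in files)
        )
        while entries and shutil.disk_usage(self.cache).free < needed:
            os.remove(entries.pop(0)[1])

    def open_for_read(self, rel):
        src, dst = self._paths(rel)
        if self._is_recent(rel):
            os.utime(dst)              # bump the timestamp so the file "bubbles up"
            return open(dst, "rb")
        if os.path.exists(dst):
            os.remove(dst)             # stale copy: drop it and re-cache below
        size = os.path.getsize(src)
        if size > shutil.disk_usage(self.cache).total:
            return open(src, "rb")     # can never fit: serve straight from the array
        self._evict_until(size)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)         # populate the cache on this first read
        os.utime(dst)                  # record "time of last cache write" as now
        return open(dst, "rb")
```

The real thing would of course live inside the filesystem layer rather than in user space; the sketch just shows that the decision tree itself is small.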


A fairly straightforward and simple approach. The last part could be optimized by reading ahead and writing asynchronously, but with current LAN and SSD speeds it does not matter; the SSD is in any case faster than the LAN.


This would not speed up the first access to a file, but the second and later accesses would be greatly improved. And if the designated read-cache SSD is large (2 TB or more), a lot of files will fit before the first deletion becomes necessary.


This feature could be added at the high level of Unraid's VFS overlay.


(The cache disk itself is disposable: even if its content gets lost due to errors, it does not matter, it is just a copy, and it needs no backup. So Unraid should not look for shares or allow creating folders on the designated cache SSD.)


Update: yeah, I know, it will make the first read of a file a bit slower (because of the additional write to the read cache), but this is almost not measurable. Reading from spinning disks tops out around 290 MB/s under best conditions, writing to SATA SSDs should be almost twice as fast, and writing to NVMe SSDs will be five or more times faster. So this overhead really does not count.


Update2: I would like to add two config settings for fine tuning:

a) minimum file size: files smaller than this are never put on the cache (default: 0)

b) maximum file size: files larger than this are never put on the cache (default: not set, or 100 MB or so)
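A sketch of how these two settings might gate the cache write. The helper name and the use of `None` to mean "not set" are my choices; the defaults mirror the values above.

```python
def should_cache(size, min_size=0, max_size=None):
    """Tuning knobs: skip files below min_size or above max_size (bytes).
    max_size=None means "not set", i.e. no upper bound."""
    if size < min_size:
        return False
    if max_size is not None and size > max_size:
        return False
    return True
```

The read path would simply call this once before copying a file to the cache and fall back to a plain array read when it returns `False`.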


Update3: additionally, there could be a cron-driven "garbage collection" to remove files from the cache that have not been accessed for a certain period of time (should be a piece of cake: since every read/close updates the file time, a used file is always recent, and a simple `find /mnt/readcache -atime +XXX -exec ...` is enough for cleaning up).
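Besides the `find` one-liner, that garbage collection could be a small script like this sketch. The function name is hypothetical, and using the modification time as the "last used" marker is an assumption (whichever timestamp the cache bumps on each read would be the one to test).

```python
import os
import time


def gc_readcache(cache_dir, max_age_days):
    """Cron-driven cleanup: delete cached files not used for max_age_days.
    Roughly equivalent to: find cache_dir -type f -atime +N -delete"""
    cutoff = time.time() - max_age_days * 86400
    # walk bottom-up so emptied subdirectories can be removed on the way out
    for root, _, files in os.walk(cache_dir, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)        # not used for too long: evict
        if root != cache_dir and not os.listdir(root):
            os.rmdir(root)             # prune directories left empty
```

Scheduled nightly, e.g. via the User Scripts plugin, something like `gc_readcache("/mnt/readcache", 30)` would keep the cache from silting up between reads.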


Edited by MAM59

I've had a similar thought. Automatic file management similar to File Juggler, Belvidere, or Droppit. Rather than moving all cached files at once, allow users to move cached files based on user-determined rules. For example, move DVR recordings older than 7 days to the array, or move a video file that has been accessed twice in the past few days back onto the cache. It's possible to do this with user scripts, but a UI to configure rules would be amazing. Unfortunately, I've yet to find a program that runs on Linux that offers this.

8 hours ago, Tjlejeune said:

Rather than moving all cached files at once

No, sorry, this is NOT what I'd like to see implemented.

I would not touch the current write cache / Mover mechanism; I'd like to add another SSD used purely as a read cache. So there is no "moving" on or off this drive and no user intervention (maybe the whole drive would not even be visible within the filesystem, but if it is, then read-only and not shareable).

Just a drive whose only purpose is to be read from, fast.



I didn't read the post but I fully agree.


As a matter of fact, a personal data storage system with VM, container, and media capabilities must anticipate the end-to-end life of data and workloads, as well as the storage QoS and power management dimensions.


How are the unRAID and Linux storage systems documented in terms of high-level design logic?


Where are the diagrams?

Edited by GRRRRRRR

+1 - L2ARC

I am new to Unraid and am surprised that "caches" (now pools) aren't really caches; they are more like user-defined write pools with user-defined copy/move rules [hence the move from cache terminology to pool terminology].


ZFS will change much of this and enable us to have L2ARC, amongst other things.

