[Feature Request] Support multiple cache/app pools



Create multiple cache pools:

Could we have the ability to create a second or third cache pool, referenced by name or number? For example, for media caching and moving I might want a large cache, so I would use large SSDs that are a couple of steps behind the latest technology in price-performance terms. For a Docker/VM-specific cache, on the other hand, I might only need a few tens of GB, so I could afford to push the boat out and buy the latest, fastest SSDs.

Thanks

 


Just to clarify, multiple cache pools in the traditional sense of "cache" don't really compute: the original intent of the cache drive was to provide a temporary, fast write location for data intended to end up on the array.

 

What you are asking for already exists as a third-party plugin, SNAP. Alternatively, it can be done at the command line with scripting. I think this feature request would be better stated as a need for a limetech-supported and maintained app drive(s) or pool(s) that wouldn't directly participate in array writes. Now that we have baked-in virtualization and more app support, I think it's a good idea to separate the ideas of cache and apps, possibly even with a backup routine. It would be nice to have all of that supported in the main GUI.


Heading modified to recognise the support & feedback of other contributors.

Maybe one answer is an app pool (as suggested by 'jonathanm') within the array, but using SSD-based pool(s) with zero spin-up time. That said, I still think: if you can have one cache pool, why not two or three, each obviously synced to different folders?

Would this not achieve the same as an app pool in terms of speed, with resilience?


Posted from another thread: Tom definitely has this feature in mind, at least the part about bringing SNAP into the limetech-supported feature list.

 

2. The cache pool - this is one or more devices organized as a btrfs "raid1" pool.  There's lots of information out there on btrfs vs. zfs.  No doubt zfs is a more mature file system, but the linux community appears highly motivated (especially lately) to make this file system absolutely robust, and most would say it's destined to be the file system of choice for linux moving forward.
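For reference, a btrfs "raid1" pool like the one described can be created by hand today. The device names below are placeholders, and this is a sketch, not a supported procedure:

```shell
# Sketch only: create a two-device btrfs "raid1" pool by hand.
# /dev/sdX, /dev/sdY and /dev/sdZ are placeholder device names,
# and mkfs destroys any data on them.
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdX /dev/sdY

# Mount the pool via either member device.
mkdir -p /mnt/cache
mount /dev/sdX /mnt/cache

# A third device can be added to the live pool and the data
# rebalanced across all members while it stays mounted.
btrfs device add /dev/sdZ /mnt/cache
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
```

Because both data (-d) and metadata (-m) use the raid1 profile, every block lives on two devices, which is what gives the pool its resilience.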

 

Like data disks, the cache disk (single device pool) or cache pool can be exported on the network.  At this time we export "all or nothing" but there are plans to let you create subvolumes and export those individually as well.

 

The cache disk/pool also supports a unique feature: we are able to "cache" creation of new objects there, and then later move them off cache storage and onto the array.  The main purpose for doing this is to speed up write performance when you need it: at the time new files are being written to the server.
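The cache-then-move behaviour described above can be sketched as a tiny script. The path arguments follow common unRAID conventions (/mnt/cache for the pool, /mnt/diskN for array disks), but the script itself is purely illustrative:

```shell
# Minimal sketch of a "mover": relocate files from a fast cache
# mount onto the parity-protected array, preserving relative paths.
# Arguments are illustrative, e.g. /mnt/cache and /mnt/disk1.
move_cache_to_array() {
    cache="$1"   # fast pool the files were originally written to
    array="$2"   # array disk/share they are destined for
    ( cd "$cache" && find . -type f ) | while read -r rel; do
        # Recreate the directory structure on the array, then move.
        mkdir -p "$array/$(dirname "$rel")"
        mv "$cache/$rel" "$array/$rel"
    done
}
```

A real mover would also skip open files and honour per-share settings; this only shows the basic relocate-while-preserving-paths idea.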

 

3. Ad hoc devices - these are devices not in the array or pool.  Sometimes they are referred to as "snap" devices (shared non-array partition).  Officially we don't support the use of snap devices but people do make use of them.  Eventually we will formalize this storage type though, especially for use by virtual machines.

 

OP, can you expand more on the use case and how you see this part of your request working?

if you can have one cache pool, why not two, or three, sync'd to different folders obviously.

Why do you want to subdivide the cache pool?


My thoughts on use cases were:

Media capture - Cache pool 1, 2x 500 GB SSDs - target for streamed media, ripped movies, and backups, 'moved' (e.g. daily) to an array volume. Due to size/cost I would not be buying the fastest/most expensive SSDs for this pool; 2nd- or 3rd-gen SSDs offer more than adequate performance for this.

VM1 - Cache pool 2, 2x 60 GB - Ripper: WHS running MyMovies & DVD Anywhere. Not moved; copied/synced back FROM an array volume and re-created only in the event of a recovery/restore/refresh. Use the fastest SSDs you can afford. This is a P2V project I want to do to remove a piece of hardware from my HT setup.

Docker containers - Cache pool 3, 2x 40 GB - running Plex, Crashdump, etc.

None of this requires spinning disks, but it does need more flexibility to group SSDs in some usable fashion.

I'm not suggesting every application requires a separate pool, just explaining why you might have a need for three or so.
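As a sketch of what the request amounts to, a per-share pool mapping could look something like this. The pool and share names are made up for illustration; nothing like this exists in the stock product:

```shell
# Hypothetical share-to-pool routing table for the three use cases
# above. All names here are invented for illustration only.
pool_for_share() {
    case "$1" in
        Movies|Backups) echo "cache1" ;;  # big, cheaper SSDs, moved daily
        vm-ripper)      echo "cache2" ;;  # fastest SSDs, never moved
        appdata)        echo "cache3" ;;  # docker containers
        *)              echo "none"   ;;  # write straight to the array
    esac
}
```

The point is simply that each share would name the pool it writes to, rather than every cached share sharing one pool.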

 


I still see only one true "cache" in those use cases. Pools 2 and 3 are not participating in caching writes destined for the array, so they would be served by Tom's "ad hoc devices".

So would SNAP devices that participate in user share reads, but not automatic writes, work?

 

These SNAP devices could be a single managed drive/filesystem or a pool of managed drives/filesystems.

 

I know another use of a pooled device in read mode would be accelerator drives, as discussed here:

 

Accelerator drives

http://lime-technology.com/forum/index.php?topic=34434.msg320202#msg320202

  • 1 month later...

I would very much like two separate cache drives to be supported officially, or at least a cache drive and a 'non-array (app?) drive': one SSD running my VM, and then an HDD to actually function as the cache drive was intended.

 

Thank you,

 

Rich

