Sizing a cache disk


sane


Since unRAID 6 is homing in on some stability, I'm looking to move from unRAID 5 to 6 this weekend, maybe.

 

I've not bothered with a cache disk in the past, since I very rarely write more than 10GB in any one day. However, with 6 I'll need some space for Docker, VMs, etc.

 

Some questions arise:

1. Is there a sizing calculator somewhere that says how large a disk you need for what combination of capabilities? (e.g. how big should the Docker bit be, how much space for a VM, etc.) I've got an old 64GB SSD lying around and I'd like to at least start off with that (or a 250GB HDD if needed).

2. If you attempt to copy more than the available cache space to the array, does it just fall back to writing to the array, or do the wheels come off?

3. As far as VMs/Docker are concerned, can the virtualised OS use the data array drives for, well, data, apps, etc.?


I have a 250GB HDD as my cache drive, but its usage is never over 35GB.  I don't use the cache drive for write caching of my user shares, though - writing directly to the array is fast enough for my needs.  My Docker image file is 20GB for several Dockers and it has lots of space available.

 

My Dockers write data both to the cache-only appdata share and to user shares.  So, for instance, SABnzbd writes temporary files to the cache-only appdata share and SickBeard picks them up and writes them permanently to the data drives.  Both Dockers write configuration data into the Docker image.

 

64GB would probably be plenty for Docker work.  If you want to support write-caching of your user shares, then you should size the drive for the maximum amount of data you want to write plus Docker usage.  If I recall the posts I've read correctly, copying data to a write-cached share will fail if the cache drive is under-sized.
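
If it helps, here's a rough back-of-the-envelope way to put numbers on that. The figures are purely illustrative assumptions, not unRAID defaults:

```python
# Back-of-the-envelope cache sizing, all figures in GB.
# Every number below is an example assumption, not an unRAID default.

max_daily_writes = 10   # the most you expect to write to cached shares in a day
docker_image     = 20   # docker.img sized for several containers
vm_vdisks        = 20   # one or two minimal VM disks
headroom         = 0.25 # keep ~25% free for filesystem overhead and TRIM slack

suggested = (max_daily_writes + docker_image + vm_vdisks) * (1 + headroom)
print(f"Suggested cache size: about {suggested:.0f} GB")
# With these numbers it prints about 62 GB, so a 64GB drive is a tight but workable fit.
```

Swap in your own numbers; the point is just that write-caching, Docker and VMs all compete for the same drive.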


Yeah, it appears from the references that the cache behaviour is dumb enough to fail if a file is bigger than the cache space available - which makes it virtually pointless to have the cache, in my opinion. It's supposed to be smarter than that if it's really going to be worthy of the name.

 

Thus some Dockers and potentially a minimal VM or two sound credible for the 64GB - although the situation with TRIM makes even that questionable.


Writes to a cached share will bypass the cache drive once free space on the cache falls below the minimum free space setting. The check is done before a file is written. If a file exceeds the minimum free space setting in size and fills the cache disk, the transfer will fail.
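
Roughly, the logic is something like this. It's only a sketch to illustrate the behaviour, with made-up names, not unRAID's actual code:

```python
import errno

def choose_target(cache_free_gb, min_free_gb):
    """One-off check made before the file is written (sketch, not unRAID code)."""
    if cache_free_gb < min_free_gb:
        return "array"   # free space already below the threshold: bypass the cache
    return "cache"       # otherwise the whole file goes to the cache drive

def write_file(target_free_gb, file_size_gb):
    """Once the write has started the target is fixed; a file bigger than the
    remaining space simply fails partway through."""
    if file_size_gb > target_free_gb:
        raise OSError(errno.ENOSPC, "No space left on device")

# Example: minimum free space 5GB, 45GB free on the cache, a 60GB file arrives.
# choose_target(45, 5) picks the cache, then write_file(45, 60) fails mid-transfer.
```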


That check-before-write behaviour is a massive problem.

 

If the minimum free space were set to 5GB, and the file happened to be 40GB (not impossible for a Blu-ray rip, say), then it would copy 5GB+ and fail. No matter what you set the minimum free space number to, the possibility of hitting a single file that's too large always exists (I've got backup image files of 150GB, easily).

 

If it were really acting as a cache, then it ought either to recognise that the size of the file it was being asked to accept was larger than the available space (where that figure was available) and bypass the cache; or, if the size wasn't known at the start, to recognise that it had fallen below the minimum free space value, send the subsequent blocks to the data array, and move the already-cached blocks across as well before completing the file write.

 

It's not really a cache if it can screw up in the middle of a write.
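
Something along these lines is what I have in mind. It's purely a sketch with hypothetical helper objects, not a claim about how unRAID is actually implemented:

```python
def write_with_spillover(blocks, cache, array, min_free):
    """Sketch of the behaviour described above. `cache` and `array` are
    hypothetical objects with free(), write() and read_back(); nothing here
    is real unRAID code."""
    cached_handles = []
    target = cache
    for block in blocks:
        if target is cache and cache.free() - len(block) < min_free:
            # Spill point: replay the already-cached start of the file onto the
            # array, in order, then send every remaining block straight there.
            for handle in cached_handles:
                array.write(cache.read_back(handle))
            cached_handles.clear()
            target = array
        handle = target.write(block)
        if target is cache:
            cached_handles.append(handle)
    # The write completes either way; nothing fails in the middle of the copy.
```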


The trouble is in the programs you're using to write the files. They're creating a 0-byte file first and then appending bytes to that file. If the programs preallocate the full space of the file, then you won't encounter this issue.
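
For example (generic Python, not any particular program's code), a writer that preallocates finds out about a space shortfall before any data moves, whereas an append-style writer only finds out partway through:

```python
import os

def copy_with_preallocation(src_path, dst_path):
    """Reserve the destination's full size up front, so a lack of space shows up
    before any data is copied rather than partway through (Linux-only call)."""
    size = os.path.getsize(src_path)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        os.posix_fallocate(dst.fileno(), 0, size)   # raises ENOSPC immediately if space is short
        while chunk := src.read(1 << 20):           # then stream the data in 1MiB chunks
            dst.write(chunk)

# An append-style writer instead starts from a 0-byte file and only hits
# "no space left on device" once it has already filled the disk.
```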


There is no indication to the server of the file size. Generally, the client process writing the file to the share also has no way to determine the size of the file it's writing, since that depends on its input. A case where the file size is known a priori is a copy or a move. In any case, there is no facility in SMB to indicate the file size a priori.

 

In order to resolve this massive problem, the SMB standards body will need to be addressed. I suggest a strongly worded email. Having unRAID monitor free space would entail a good deal of code for a problem most do not perceive.

 

The Minimum Free Space setting should be set to twice the size of the largest file that you anticipate writing. Mine is set to 50GB. The extra free space on a data disk can be filled manually once the limit is exceeded.
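
As a quick worked example of that rule of thumb (the numbers are purely illustrative):

```python
# Rule of thumb from above: minimum free space = 2 x the largest file you expect to write.
largest_anticipated_file_gb = 25            # e.g. a sizeable Blu-ray rip
min_free_space_gb = 2 * largest_anticipated_file_gb
print(min_free_space_gb)                    # 50, matching the 50GB setting mentioned above
```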
