Best use of SSD



I have a fairly old 30 GB SSD hanging around and a spare SATA port in the server, so I was thinking about the best way to put it to use.

 

I had noticed that my parity drive and disk 1 never spin down, and I believe this is down to some Docker containers I am running (presumably because they log to /mnt/apps on the array).   I guess simply adding the device to the array wouldn't stop the constant activity on the parity drive, so I'm not sure that is worthwhile?

 

The server workload is mainly media storage and Plex, with a few applications:

  • Ubiquiti UniFi Controller
  • OpenVPN AS

 

What is my best option to use the drive?

Link to comment
1 hour ago, jameson_uk said:

What is my best option to use the drive?

I assume you currently have no SSD cache drive?

 

Although at 30GB it is rather small, the SSD could be added as a cache drive with its sole use (no user share write caching) being to house the docker.img file and the appdata share, which you would set to cache "prefer".

 

Other shares commonly stored on the cache drive are isos and system, although these relate to VMs.  If you have no VMs, they would be empty, so they would not take up much space.

 

You might want to check the current size of your appdata share to make sure that 30GB is sufficient.  It should be if you only have the docker containers mentioned.
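 

If you want to check this from the console first, here is a quick sketch (it assumes your appdata lives at the standard /mnt/user/appdata path, so substitute your own share name if it differs):

    # total size of the appdata user share
    du -sh /mnt/user/appdata

    # per-disk breakdown, to see what currently sits where on the array
    du -sh /mnt/disk*/appdata 2>/dev/null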

 

Having the appdata share on an SSD cache drive allows the array drives to spin down when not in use for user share data access or writing.

Link to comment
On 6/14/2020 at 3:46 PM, Hoopster said:

I assume you currently have no SSD cache drive?

 

Although at 30GB it is rather small, the SSD could be added as a cache drive with its sole use (no user share write caching) being to house the docker.img file and the appdata share, which you would set to cache "prefer".

 

Having the appdata share on an SSD cache drive allows the array drives to spin down when not in use for user share data access or writing.

I set this up and everything is working, but the drives are still spinning.

I reset the drive stats, but there are still some reads and writes to disks 1 & 3, which is stopping them from spinning down.

[Screenshot: Unraid Main page showing reads/writes on disks 1 and 3]

 

Nothing is connected to the array, but these docker containers are running.  Any ideas how I can track down what is actually accessing the disks?

Link to comment
10 minutes ago, jameson_uk said:

I set this up and everything is working, but the drives are still spinning.

I reset the drive stats, but there are still some reads and writes to disks 1 & 3, which is stopping them from spinning down.

[Screenshot: Unraid Main page showing reads/writes on disks 1 and 3]

 

Nothing is connected to the array, but these docker containers are running.  Any ideas how I can track down what is actually accessing the disks?

Take a look at the File Activity plugin from the Community Apps section.
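 

If you would rather watch from a shell, inotifywait can show accesses on a specific disk as they happen. This is just a sketch; I believe inotify-tools has to be added separately (e.g. via the NerdPack plugin), so treat that as an assumption:

    # watch disk1 recursively and print every open/modify/create/delete with a timestamp
    # (setting up recursive watches on a large disk can take a while and uses many inotify watches)
    inotifywait -m -r -e open,modify,create,delete \
        --timefmt '%H:%M:%S' --format '%T %w%f %e' /mnt/disk1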

Link to comment
3 hours ago, T0a said:

Take a look at the File Activity plugin from the Community Apps section.

I got nothing at all 😕

This did, however, lead me to the answer: appdata is excluded from the plugin due to the amount of logging it would generate, and it turns out I forgot to copy part of my appdata share over to the cache drive.   Running the mover job now.

 

Hopefully that should solve it.

Link to comment
1 hour ago, jameson_uk said:

It turns out I forgot to copy part of my appdata share over to the cache drive.   Running the mover job now.

Just make sure you have the Docker service disabled (so there are no open files) and appdata set to cache "prefer", as this will cause all appdata files to be moved from the array to the cache drive.

 

After all files are moved, set the share to cache "only", as this will ensure no appdata files are ever stored on the array.
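 

Once the mover finishes, it is worth confirming nothing was left behind before switching the share to "only". A rough check from the console, assuming the standard mount points:

    # should print nothing if the whole appdata share made it off the array
    find /mnt/disk*/appdata -type f 2>/dev/null

    # and confirm where everything ended up
    du -sh /mnt/cache/appdata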

 

Of course, for you, the danger with cache "only" is that if the small cache drive runs out of space, docker containers will fail.

 

Leaving appdata at "prefer" would allow extra appdata files to spill over to the array when there is no space on the cache drive, but you would have spinning array disks again.

Link to comment
27 minutes ago, Hoopster said:

Leaving appdata at "prefer" would allow extra appdata files to spill over to the array when there is no space on the cache drive, but you would have spinning array disks again.

The disks would only spin up when it synced (overnight?) or when the cache ran out of space, right?   They shouldn't spin up at other times, should they?

Leaving it at "prefer" should mean that I have a backup on the array that is at most 24 hours out of date?

Link to comment
3 minutes ago, jameson_uk said:

The disks would only spin up when it synced (overnight?) or when the cache ran out of space, right?   They shouldn't spin up at other times, should they?

Leaving it at "prefer" should mean that I have a backup on the array that is at most 24 hours out of date?

With the appdata share set to cache "prefer", unRAID will write all appdata files to the cache drive as long as a cache drive exists and there is space on it.  When it runs out of space, any files that do not fit on the cache drive will be written to the array.  In this case, you will have appdata files in both places: the cache drive and the array.  Of course, the array disks would have to spin up so unRAID can write the spillover files to them.

 

In this scenario (cache prefer), the Mover will attempt to move appdata files back to the cache drive if space is freed up on the cache drive.  If there is no space available it will, of course, leave the "overflow" appdata files on the array.

 

Any time the subset of appdata files on the array needs to be accessed for a read or write operation, the affected array disk(s) will be spinning.

 

If all appdata files fit on the cache drive, then no docker container activity will cause array disks to be spun up unless that container is configured to access array data as part of its operation.

 

Cache "prefer" does not cause a "backup" of the appdata share to be stored on the array.  The array is used as a spillover location, so the complete appdata share is made up of files on both the cache drive and the array; however, only one copy of each file exists, either on the cache drive or on the array.
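 

If you ever want to see which device actually holds a given file, remember that /mnt/user/appdata is just the merged view of the cache and disk paths, so you can look at the underlying mounts directly (the 'plex*' folder name below is only an example):

    # the merged user share view
    ls /mnt/user/appdata

    # the physical locations that make it up
    ls -d /mnt/cache/appdata /mnt/disk*/appdata 2>/dev/null

    # locate a specific container's folder
    find /mnt/cache/appdata /mnt/disk*/appdata -maxdepth 1 -iname 'plex*' 2>/dev/null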

 

A complete backup of appdata is done through the CA Appdata Backup/Restore plugin.

Link to comment
59 minutes ago, Hoopster said:

With the appdata share set to cache "prefer", unRAID will write all appdata files to the cache drive as long as a cache drive exists and there is space on it.  When it runs out of space, any files that do not fit on the cache drive will be written to the array.  In this case, you will have appdata files in both places: the cache drive and the array.  Of course, the array disks would have to spin up so unRAID can write the spillover files to them.

 

In this scenario (cache prefer), the Mover will attempt to move appdata files back to the cache drive if space is freed up on the cache drive.  If there is no space available it will, of course, leave the "overflow" appdata files on the array.

 

Any time the subset of appdata files on the array needs to be accessed for a read or write operation, the affected array disk(s) will be spinning.

 

If all appdata files fit on the cache drive, then no docker container activity will cause array disks to be spun up unless that container is configured to access array data as part of its operation.

 

Cache "prefer" does not cause a "backup" of the appdata share to be stored on the array.  The array is used as a spillover location, so the complete appdata share is made up of files on both the cache drive and the array; however, only one copy of each file exists, either on the cache drive or on the array.

 

A complete backup of appdata is done through the CA Appdata Backup/Restore plugin.

I think I have misunderstood then.   The wiki states 

Quote

In order to prevent the cache disk from filling up, a utility called the ‘mover’ will move objects from the cache disk to the array proper. You can set a schedule that defines when the mover will ‘wake up’. The default schedule is to wake up at 3:40AM every day.

which I read as meaning all reads and writes would take place on the cache.   When the mover runs it would sync up the array.   If you were ever to lose the cache drive you would be left with appdata on the array, and you would lose any writes since the last time the cache was synced.

 

I also came across something on Reddit which attempts to explain the difference between "yes" and "prefer", and it has confused me even more:

Quote

For posterity: "Yes" means put on cache, move to array when mover runs; "Prefer" means put on cache, keep on cache if possible, otherwise move to array when mover runs.

After running the mover script I did see some appdata files on disk 1, but I think this is because one of the docker containers didn't shut down cleanly.   I have now killed everything, run the script again, and appdata is now only on the cache.

 

If I change to "yes", does that mean writes are applied to the cache and the mover then puts them onto the array?   If so, does that mean reads are going to come from the array, as only the latest writes are stored on the cache (and each time the mover runs it will move whatever is on the cache to the array)?

 

Is there a way to have a share on the SSD that isn't part of the array but syncs across to a share that is on the array?

Link to comment
9 minutes ago, jameson_uk said:

 

Is there a way to have a share on the SSD that isn't part of the array but syncs across to a share that is on the array?

This could be accomplished with a share set to "cache only" and the User Scripts plugin. With rsync you can then copy the files on a schedule to an array share.
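 

As a very rough sketch of what such a user script could look like (the share paths are placeholders; the destination share would need to be set not to use the cache so the copy actually lands on the array):

    #!/bin/bash
    # Copy the cache-only share to a parity-protected share on the array.
    # -a preserves permissions and timestamps; --delete removes files from the
    # array copy that no longer exist on the cache side.
    # Note: files that containers hold open (databases etc.) may be copied in an
    # inconsistent state, so stopping the containers first is safer.
    rsync -a --delete /mnt/cache/appdata/ /mnt/user/appdata_backup/

Scheduled daily in the User Scripts plugin, that would give roughly the "at most 24 hours out of date" copy mentioned earlier.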

Link to comment

That wiki states at the very top:

Quote

Important! This page is a basic introduction to the unRAID Cache drive, but was written for v4 and v5. There is no mention of Dockers, VM's, or Cache Pools.

So the scenario you are interested in is not covered in that old wiki.

 

9 minutes ago, jameson_uk said:

For posterity: "Yes" means put on cache, move to array when mover runs; "Prefer" means put on cache, keep on cache if possible, otherwise move to array when mover runs.

Are you sure you quoted that correctly? It is wrong. Prefer means prefer to keep on cache if there is room. If there isn't room then allow it to overflow to the array, but mover will try to move it TO CACHE when there is room, since it is preferred to be on cache and not the array.

 

In actual practice, mover can't move open files, so if any do overflow to the array, you would have to stop the docker service before mover could move them.
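 

If you want to check whether anything on the array side of appdata is actually being held open before running mover, something like this should tell you (assuming lsof is present on your system):

    # list processes holding files open under appdata on disk1
    # (+D walks the whole directory tree, so it can be slow on large folders)
    lsof +D /mnt/disk1/appdata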

Link to comment

Thanks, that makes sense.   A cache drive isn't caching anything as such, but is rather just a disk that is not part of the parity array.

 

One thing that is still bothering me, though, is the difference between a cache drive and an unassigned one.  Other than being able to overflow to the array if the cache drive is full, what benefit do I get from setting up the SSD as a cache drive rather than as an unassigned disk, with the docker containers pointed at the mounted unassigned drive instead of the cache?

Link to comment

User shares allow folders to span disks. A cache drive is part of user shares, and mover will move to/from cache depending on settings for each user share. An unassigned drive can be shared, but is not part of user shares so no spanning and no moving.

 

Some people do set up unassigned disks for use with dockers, and if those other features of cache aren't important for your use case then that is fine.

Link to comment
2 hours ago, jameson_uk said:

One thing that is still bothering me, though, is the difference between a cache drive and an unassigned one.  Other than being able to overflow to the array if the cache drive is full, what benefit do I get from setting up the SSD as a cache drive rather than as an unassigned disk, with the docker containers pointed at the mounted unassigned drive instead of the cache?

Sorry for not responding earlier as I have been offline for a few hours.  @trurl has answered all your questions, so you should be good.

 

Just to be clear, the wiki is a bit outdated and does not cover all mover scenarios.  Originally, the cache drive did just cache array writes (unprotected by parity) for designated user shares so they could happen faster.  Later, the mover moved all cached data to the array. 

 

Now, with dockers and VMs, the cache drive has evolved into another use: hosting the docker appdata and VM shares (isos, domains, system) for faster access and execution, and so that array drives are not spun up unless needed.  There are different "rules" that govern how shares and the cache drive interact depending on how they are used.

 

If you turn on help in the shares screen, you will see this, which explains how the various options work and their effect on the mover:

 

[Screenshot: built-in share settings help text explaining the cache options and mover behavior]

My appdata share is set to cache only as the cache drive is large enough that I don't have to worry about appdata filling it up.  Also, I do not use it for caching any user share writes before moving them to the array.

 

In your case, prefer is recommended simply because your cache drive is so small and it could fill up even without user share write caching.

 

I used to have appdata on an unassigned drive.  I still have VM files there but I moved appdata back to the cache drive to make it easier to move things around with Mover if needed.

Link to comment
