Better to have Docker and VMs on a separate SSD or a Cache SSD?



I have an 18TB array with a 250GB SSD cache drive.

I have another 240GB SSD sitting around that I was going to put in as a second cache drive.

 

I'm planning on running Docker containers for Plex, SABnzbd, etc., along with potentially some Windows VMs.

 

What is the best practice for this? As far as I can tell, I see these options:

 

[*]Add the second SSD as a cache drive, and create the Docker and VM user shares as "cache only" so they sit on the SSDs

[*]Add the second SSD using the [url=http://lime-technology.com/forum/index.php?topic=45807.0]UnAssigned Devices Plugin[/url], and use it exclusively for Docker and VMs

 

I doubt I'll have 240GB worth of Docker containers and VMs, so option 2 seems less desirable than option 1.
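For what it's worth, option 1 is normally set in the GUI (the share's "Use cache disk" setting), which writes a small config file to the flash drive. A sketch of what such a share's config might contain — the file path and key names below are illustrative and may vary by Unraid version:

```
# /boot/config/shares/domains.cfg  (illustrative; set via the GUI in practice)
shareComment="VM vdisks and Docker data"
shareUseCache="only"
```

With "only", files for that share are written to the cache pool and are never moved to the array by the mover.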


I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache pool with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.

I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs, it is not big enough to carry out both VM and caching duties.


I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache pool with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.

I use a 512MB HDD for my cache, and a 512MB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs, it is not big enough to carry out both VM and caching duties.

Where did you get such a small HDD and SSD?  :P hehe, hope you meant GB  ::)


I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache pool with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.

I use a 512MB HDD for my cache, and a 512MB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs, it is not big enough to carry out both VM and caching duties.

 

Ok, I get that... never thought of that possibility. Thanks!


I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs, it is not big enough to carry out both VM and caching duties.

 

Why didn't you go with SSDs for both?  Cost?

 

For my original question, I'm talking about whether I run two SSDs in a cache pool, also doing double duty for my Docker containers and VMs.

 

The other alternative is one SSD for cache, and one SSD not in the cache pool, dedicated to Docker and VMs.


I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs, it is not big enough to carry out both VM and caching duties.

 

Why didn't you go with SSDs for both?  Cost?

I already had the 512GB HDD that I am using, and it is too small to function effectively as an array drive. When I get around to replacing it I will go with an SSD, but it seems pointless to rush into that at this stage, as I get no benefit from doing so while I am not running apps from it.


I discussed this today with another user and also linked to some posts I have been involved in on this issue, as well as some links to other relevant things (e.g. the option to run the Cache Pool as RAID-0):

 

http://lime-technology.com/forum/index.php?topic=47722.0

 

For the record, in the past I mounted a drive outside of unRAID, BUT ultimately (at a later stage) I felt the logic I applied when making that decision was flawed. I now run my Dockers, VMs, etc. from a BTRFS RAID-1 Cache Pool.

 

For your consideration in the decision-making process.


I have a 512GB SSD I use for my cache. My main VMs and Docker are on the cache.

I do, however, have a 1TB HDD mounted outside the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is that I can keep my Win 10 VM on the cache taking 30GB, so the main "boot" C drive is on the SSD. The 512GB vdisk is mounted in the VM as drive D (mechanical drive).

This way I can share the game data between two VMs, so I can have the same D drive attached to a Windows 7 VM. This saves a lot of space. Also, backing up a VM is only 30GB.

I wouldn't want to store my game data in a vdisk on the array, as I don't want it going through parity, which will affect performance.
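On the backup point: vdisk images are usually sparse files, so the "size" you see depends on how you look at it, and backups stay small if you preserve the sparseness. A quick sketch you can run anywhere (the file names and size here are just illustrative; a real vdisk would live under something like /mnt/user/domains/):

```shell
# Create a sparse image file, the way a vdisk is typically allocated.
truncate -s 512M demo-vdisk.img

stat -c %s demo-vdisk.img   # apparent size in bytes: 536870912
du -k demo-vdisk.img        # actual allocated size: ~0 until data is written

# When copying/backing up, preserve sparseness so the backup stays small:
cp --sparse=always demo-vdisk.img demo-backup.img
```

This is why a Win 10 VM with a 30GB vdisk only costs about 30GB to back up, even if the vdisk's nominal size is larger.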


I have a 512GB SSD I use for my cache. My main VMs and Docker are on the cache.

I do, however, have a 1TB HDD mounted outside the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is that I can keep my Win 10 VM on the cache taking 30GB, so the main "boot" C drive is on the SSD. The 512GB vdisk is mounted in the VM as drive D (mechanical drive).

This way I can share the game data between two VMs, so I can have the same D drive attached to a Windows 7 VM. This saves a lot of space. Also, backing up a VM is only 30GB.

I wouldn't want to store my game data in a vdisk on the array, as I don't want it going through parity, which will affect performance.

 

I don't post this to contradict you, but only to fuel the conversation. Jonp seemed to suggest that storing static files (e.g. non-OS files) there was fine. I'm not sure, but once loaded, aren't games run from memory?

 

Getting out of my comfort zone now. I just play them! But the link is below:

 

https://lime-technology.com/forum/index.php?topic=45315.msg432968#msg432968


I have a 512GB SSD I use for my cache. My main VMs and Docker are on the cache.

I do, however, have a 1TB HDD mounted outside the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is that I can keep my Win 10 VM on the cache taking 30GB, so the main "boot" C drive is on the SSD. The 512GB vdisk is mounted in the VM as drive D (mechanical drive).

This way I can share the game data between two VMs, so I can have the same D drive attached to a Windows 7 VM. This saves a lot of space. Also, backing up a VM is only 30GB.

I wouldn't want to store my game data in a vdisk on the array, as I don't want it going through parity, which will affect performance.

 

Do note that there is only a performance impact when writing files, not when reading. I would suggest trying it with the second vdisk on the array itself...

 

I do the same as you: boot drive vdisk on the SSD cache drive, the secondary drive not on the cache drive (in my case, on the array). I do it for my Windows VM and do not notice any performance issues...
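An easy way to see the write-side parity cost for yourself is a crude dd throughput test against the target disk (the `./ddtest` path below is a placeholder; point it at e.g. a file on an array disk, and compare against the same test on the cache):

```shell
# Write test: time writing 256MB to the target location.
dd if=/dev/zero of=./ddtest bs=1M count=256 conv=fsync

# Read test: time reading it back.
dd if=./ddtest of=/dev/null bs=1M

# Clean up the test file afterwards:
# rm ./ddtest
```

Note that the read may be served from the page cache if the file was just written, so use a file larger than RAM (or drop caches) for honest read numbers.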


I have a 512GB SSD I use for my cache. My main VMs and Docker are on the cache.

I do, however, have a 1TB HDD mounted outside the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is that I can keep my Win 10 VM on the cache taking 30GB, so the main "boot" C drive is on the SSD. The 512GB vdisk is mounted in the VM as drive D (mechanical drive).

This way I can share the game data between two VMs, so I can have the same D drive attached to a Windows 7 VM. This saves a lot of space. Also, backing up a VM is only 30GB.

I wouldn't want to store my game data in a vdisk on the array, as I don't want it going through parity, which will affect performance.

 

Do note that there is only a performance impact when writing files, not when reading. I would suggest trying it with the second vdisk on the array itself...

 

I do the same as you: boot drive vdisk on the SSD cache drive, the secondary drive not on the cache drive (in my case, on the array). I do it for my Windows VM and do not notice any performance issues...

 

Thanks for the input, guys. Yes, I guess it's only the writes that would be slower.

I think I will change to do it the same way as you and use the array. Plus, it would make more sense for the additional space on that drive that isn't occupied by vdisk images to be available to the array as well.

 

On a side note, the drive I currently use for the VM data is a 2.5", 5400rpm drive. All my array drives are 3.5", 7200rpm. I guess having a 5400rpm drive wouldn't slow the array, as data isn't striped across drives like RAID, but stored on individual drives. Or should I just swap it for a 7200rpm drive?

