tmchow Posted March 21, 2016 I have an 18TB array with a 250GB SSD cache drive. I have another 240GB SSD sitting around that I was going to put in as a second cache drive. I'm planning on running Dockers for Plex, SABnzbd, etc., along with potentially some Windows VMs. What is the best practice for this? As far as I can tell, I see these options: [*]Add the second SSD as a cache drive, and create the Docker and VM user shares as "cache only" so they sit on the SSDs [*]Add the second SSD using the [url=http://lime-technology.com/forum/index.php?topic=45807.0]UnAssigned Devices Plugin[/url], and use it exclusively for Dockers and VMs I doubt I'll have 240GB worth of Dockers and VMs, so option 2 seems less desirable than option 1.
chip Posted March 21, 2016 I have two 180GB SSDs, one with Dockers and one with 1 VM. I only went this route because I had two drives, and I think I saw some recommendations to run VMs separate from Dockers if possible.
Helmonder Posted March 21, 2016 I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache drive with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion..
itimpi Posted March 21, 2016 I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache drive with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.. I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs it is not big enough to carry out both VM and caching duties.
Bjonness406 Posted March 21, 2016 I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache drive with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.. I use a 512MB HDD for my cache, and a 512MB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs it is not big enough to carry out both VM and caching duties. Where did you get such a small HDD and SSD? hehe, hope you meant GB
Helmonder Posted March 21, 2016 I still do not fully understand why people are using disks outside of the array... Put the SSDs in as a cache drive with BTRFS... You might need to invest a bit more though; the SSDs are pooled in a RAID-10-like fashion.. I use a 512MB HDD for my cache, and a 512MB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs it is not big enough to carry out both VM and caching duties. Ok I get that... Never thought of that possibility.. Thanks!
tmchow Posted March 21, 2016 I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs it is not big enough to carry out both VM and caching duties. Why didn't you go with SSDs for both? Cost? For my original question, I'm deciding between two SSDs in a cache pool, doing double duty for my Dockers and VMs, or one SSD for cache and one SSD outside the pool, dedicated to Dockers and VMs.
itimpi Posted March 21, 2016 I use a 512GB HDD for my cache, and a 512GB SSD (mounted outside the array) for my VMs. Adding the SSD to the cache would degrade performance because of the presence of the HDD, and although the SSD is big enough for my VMs it is not big enough to carry out both VM and caching duties. Why didn't you go with SSDs for both? Cost? I already had the 512GB HDD that I am using, and it is too small to function effectively as an array drive. When I get around to replacing it I will go with an SSD, but it seems pointless to rush at this stage as I get no benefit from doing so while I am not running apps from it.
Furby8704 Posted March 21, 2016 I have 2x250GB SSDs as cache for Dockers/data and a separate 64GB unassigned SSD for my Windows VM. I haven't noticed a drop in performance.
stchas Posted March 22, 2016 I'm using a single 250GB SSD for cache, docker apps, and a Win7 VM. Seems to work just fine.
ratosaude Posted March 22, 2016 I have the same doubt. How did you pass the SSD directly to the VM? And what if the SSD is where Windows is installed? What are your disk settings? I have 5 disks and want to use 2 of them for my gaming VMs.
Helmonder Posted March 22, 2016 I have 4 250GB SSDs running in a 512GB btrfs cache pool. It serves as the location for my configs and VMs, and as a cache drive. Works flawlessly.. I am running 3 VMs (2× Ubuntu, 1× Win10).
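For reference, the usable capacity of a btrfs RAID-1 pool is roughly half the raw total when the devices are equal size, which lines up with the pool size described above. A minimal back-of-the-envelope sketch (not unRAID-specific, and ignoring GB/GiB rounding):

```shell
# Rough usable capacity of a btrfs RAID-1 pool built from four
# equal 250GB SSDs. RAID-1 keeps two copies of every block, so
# usable space is about half the raw total.
raw=$((4 * 250))      # total raw capacity in GB
usable=$((raw / 2))   # RAID-1 mirrors everything
echo "raw=${raw}GB usable=${usable}GB"
```

On a live pool, `btrfs filesystem usage /mnt/cache` reports the actual figures, which will be a bit lower after filesystem overhead.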
danioj Posted March 22, 2016 I discussed this today with another user and also linked to some posts I have been involved in on this issue, as well as some links to other relevant things (e.g. options to run the cache pool as RAID-0): http://lime-technology.com/forum/index.php?topic=47722.0 For the record, in the past I mounted a drive outside of unRAID, BUT ultimately (at a later stage) I felt the logic I applied when making the decision was flawed. I now run my Dockers, VMs, etc. from a BTRFS RAID-1 cache pool. For your consideration in the decision making process.
SpaceInvaderOne Posted March 22, 2016 I have a 512GB SSD I use for my cache. I have my main VMs and Docker on my cache. I do however have a 1TB HDD mounted outside of the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is I can keep my Win 10 VM on cache taking 30GB, so the main "boot" C drive is on SSD. The 512GB vdisk is mounted on the VM as drive D (mechanical drive). This way I can share the game data between 2 VMs, so I can have the same D drive attached to a Windows 7 VM. This way I save a lot of space. Also, backing up a VM is only 30GB. I wouldn't want to store my game data in a vdisk on the array, as I don't want it using parity, which will affect performance.
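For anyone wanting to try the shared "D drive" setup described above, the second vdisk can be attached through the VM's libvirt XML. A hedged sketch; the file path, target device, and cache mode here are assumptions, so adjust them to your own layout:

```xml
<!-- Illustrative libvirt <disk> entry attaching a shared game vdisk
     as a second drive. Only attach it to one RUNNING VM at a time:
     two VMs writing to the same raw image will corrupt it. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/disks/vmstore/games_vdisk.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

The same `<disk>` block can be copied into a second VM's definition so either VM (while the other is shut down) sees the identical D drive.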
danioj Posted March 22, 2016 I have a 512GB SSD I use for my cache. I have my main VMs and Docker on my cache. I do however have a 1TB HDD mounted outside of the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is I can keep my Win 10 VM on cache taking 30GB, so the main "boot" C drive is on SSD. The 512GB vdisk is mounted on the VM as drive D (mechanical drive). This way I can share the game data between 2 VMs, so I can have the same D drive attached to a Windows 7 VM. This way I save a lot of space. Also, backing up a VM is only 30GB. I wouldn't want to store my game data in a vdisk on the array, as I don't want it using parity, which will affect performance. I don't post this to contradict you, BUT only to fuel the conversation. Jonp seemed to suggest that storing static data (e.g. non-OS files) there was fine. Not sure, BUT once loaded aren't games run from memory? Getting out of my comfort zone now. I just play them! But the link is below: https://lime-technology.com/forum/index.php?topic=45315.msg432968#msg432968
Helmonder Posted March 22, 2016 I have a 512GB SSD I use for my cache. I have my main VMs and Docker on my cache. I do however have a 1TB HDD mounted outside of the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is I can keep my Win 10 VM on cache taking 30GB, so the main "boot" C drive is on SSD. The 512GB vdisk is mounted on the VM as drive D (mechanical drive). This way I can share the game data between 2 VMs, so I can have the same D drive attached to a Windows 7 VM. This way I save a lot of space. Also, backing up a VM is only 30GB. I wouldn't want to store my game data in a vdisk on the array, as I don't want it using parity, which will affect performance. Do note that there is only a performance impact when writing files.. not when reading.. I would suggest trying it with the second vdisk on the array itself... I do the same as you: boot drive vdisk on the SSD cache drive, the secondary drive not on the cache drive (and in my case on the array).. I do it for my Windows VM and do not notice any performance issue...
SpaceInvaderOne Posted March 23, 2016 I have a 512GB SSD I use for my cache. I have my main VMs and Docker on my cache. I do however have a 1TB HDD mounted outside of the array where I store vdisk images. For example, I keep my game data on a 512GB vdisk. The advantage for me is I can keep my Win 10 VM on cache taking 30GB, so the main "boot" C drive is on SSD. The 512GB vdisk is mounted on the VM as drive D (mechanical drive). This way I can share the game data between 2 VMs, so I can have the same D drive attached to a Windows 7 VM. This way I save a lot of space. Also, backing up a VM is only 30GB. I wouldn't want to store my game data in a vdisk on the array, as I don't want it using parity, which will affect performance. Do note that there is only a performance impact when writing files.. not when reading.. I would suggest trying it with the second vdisk on the array itself... I do the same as you: boot drive vdisk on the SSD cache drive, the secondary drive not on the cache drive (and in my case on the array).. I do it for my Windows VM and do not notice any performance issue... Thanks for the input guys. Yes, I guess it's only the writes that would be slower. I think I will change to do it the same way as you and use the array. Plus it would make more sense for the additional space on that drive that isn't occupied by vdisk images to be available to the array as well. On a side note, the drive I currently use for the VM data is a 2.5", 5400rpm drive. All my array drives are 3.5", 7200rpm. I guess having a 5400rpm drive wouldn't slow the array, as data isn't spread across drives like RAID, but stored on individual drives. Or should I just swap it for a 7200rpm drive?