dubbly Posted January 9, 2017
Hi all, I'm new to VMs and have been reading through the threads, but I'm unclear what the official opinion is here. Should I run a Win 10 gaming VM on the cache drive or on an unassigned drive? Secondly, when assigning the primary vDisk location, why wouldn't I want to just use "Auto"? Cheers
1812 Posted January 9, 2017
A VM image on an unassigned SSD is considered best, but the cache drive is still better than the array. If you use Auto for the location and have your domains folder set to the cache drive, then the VM image will be put there.
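To make the two options concrete, here is what the difference looks like in the VM's libvirt XML. This is only a sketch: the share name, image name, mount point, and cache mode below are illustrative unRAID defaults, not anything specific to the poster's setup.

```xml
<!-- vdisk on the cache-backed domains share (roughly what "Auto" resolves to) -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>

<!-- the same vdisk relocated to an Unassigned Devices SSD mount -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/disks/vm_ssd/Windows10/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```

Only the `<source file=…>` path changes; the guest sees the same virtio disk either way, which is why moving an image between cache and an unassigned drive is mostly a matter of copying the file and editing this one line.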
fonts Posted January 9, 2017
If you are running a gaming VM, then I would run the SSD with Unassigned Devices and have it dedicated to just that VM.
trurl Posted January 9, 2017
1812 said: vm image on unassigned ssd is considered best, but cache drive is still better than array.
Not sure I would agree. Arguments can be made either way, and it depends on a specific user's needs. If the cache is big enough, it's one less thing to configure, one less drive to buy, and one less port to use.
1812 Posted January 9, 2017
trurl said: Not sure I would agree. Arguments can be made either way and depends on a specific user's needs.
Using it on cache, the VM has to share drive bandwidth with any Dockers that are running. I've noticed slight throughput gains using VMs on unassigned devices. Nothing to write home about, but I think others have seen more. Is it worth the extra setup (adding another drive, then selecting a new destination for the image file in the VM settings)? That is up to each user. But if a problem arises on the VM's physical disk and it needs to be replaced, you don't have to stop the array to do so. And IF unRaid allows VMs to run without the array being started in the future, my suspicion is that it will only work on mounted unassigned devices. But you are 100% right, arguments could be made either way. If you're using a cache pool with a few drives, then the VM is spread across several disks, and if there is a drive failure it is still operable; a benefit for some. Maybe we should do a poll, because my initial post was based on my impressions from the board. Could be interesting, I could be flat wrong!
trurl Posted January 9, 2017
On the other hand, I don't actually run any VMs, and if I did, I don't have room for video cards or additional drives in my server.
1812 Posted January 9, 2017
lol... they can be a pain. One of my graphics cards blocks 2 slots.
Guest Posted September 15, 2017
Has anyone seen significant performance gains from running VMs on cache using RAID 0? I'm currently using Unassigned Devices for two 480GB SSDs and was wondering if it is worth going the RAID 0 cache route.
HellDiverUK Posted September 15, 2017
Using a decent NVMe drive for cache with VMs on it is fine. Even a lowly 256GB Samsung like the one I use can do 3100MB/s reads and 1200MB/s writes; the bigger sizes can do 3500MB/s and 2500MB/s. In my gaming rig, the NVMe 950 Pro totally slaughters the RAID 0 of Crucial MX300 525GB drives. The MX300 array can only (!) do 1GB/s reads compared to 3.5GB/s on the Samsung.
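If you want to sanity-check your own drives rather than trust spec-sheet numbers, a rough sequential test with dd is enough to see the tiers being discussed here. The target path and size below are just examples; point `TARGET` at the drive you actually want to measure, and note that `fio` gives far more realistic (especially random I/O) numbers if you have it installed.

```shell
#!/bin/sh
# Rough sequential write test: write 128 MiB and flush to disk before
# timing ends (conv=fdatasync), so the page cache doesn't inflate the result.
TARGET=/tmp/dd_testfile   # change this to a path on the drive under test
dd if=/dev/zero of="$TARGET" bs=1M count=128 conv=fdatasync 2>&1 | tail -n 1

# Rough sequential read test of the same file.
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TARGET"
```

A cached read can still skew the second number; dropping caches first (`echo 3 > /proc/sys/vm/drop_caches` as root) or reading a file larger than RAM gives a more honest figure.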
1812 Posted September 16, 2017
16 hours ago, antaresuk said: has anyone seen significant performance gains from running vms on cache using raid 0?
Did it with 2 Samsung SSDs... Hated the slowdown when using Dockers (and file transfers) that also accessed the cache, as it has to share I/O. I also felt like the VMs stuttered a bit and had more spinning beach balls, which I never had on a single drive. I moved the images to unassigned device drives and back them up to the array once a week or so. You really don't need more than 300MB/s for any standard VM to be "fast." I have another setup that uses 4 600GB 15k disks that hits about 1GB/s, and it boots in about the same time. Programs launch about the same as well versus the single Samsung SSDs. At some point there are diminishing returns, at least on my equipment.
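The weekly backup to the array mentioned above can be as simple as a scheduled rsync. This is a sketch, not the poster's actual script: the source mount and destination share names are assumptions, and the VM should be shut down (or snapshotted) first so the copied image is consistent.

```shell
#!/bin/sh
# Copy vdisk images from an Unassigned Devices SSD to an array share.
# Run while the VMs are stopped so the images aren't mid-write.
SRC=/mnt/disks/vm_ssd/       # assumed UD mount holding the vdisks
DEST=/mnt/user/backups/vms/  # assumed array share for backups

mkdir -p "$DEST"
# -a preserves permissions/timestamps, --sparse keeps mostly-empty raw
# images from ballooning on the target, --delete mirrors removals.
rsync -a --sparse --delete "$SRC" "$DEST"
```

Scheduled weekly (via cron or the User Scripts plugin), this gives you the unassigned-drive speed day to day with the array's parity protection as a fallback.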
Guest Posted September 19, 2017
Thanks both. I had VM crash issues with RAID 0 btrfs, so I went back to an XFS cache drive. It's been stable since then.
steve1977 Posted November 5, 2017
Related question: any views on running two VM images on the same cache drive? Better to have one on cache and one on unassigned? Any material performance difference?
Kich902 Posted March 28, 2021
On 9/15/2017 at 5:57 PM, HellDiverUK said: Using a decent NVMe drive for cache with VMs on it is fine...
Say I want to have 2 VMs with 250GB each for their boot drives, plus a cache drive. Which would be ideal?
1. Two 250GB NVMe drives for the VMs + one 500GB SATA SSD dedicated solely to cache
2. One 1TB NVMe partitioned for the 2 VMs + one 500GB SATA SSD dedicated solely to cache
3. One 1TB NVMe for both the VMs and the cache drive