me.so.bad Posted April 13, 2017
Hey guys, my four VMs (GamingPC, HTPC/Office, Office #2, Office #3) suffer from high IO load on my cache (and even the array?!), so I want to move them to their own SSD(s) – but I'm uncertain which way to go.

Performance needs of the VMs are as follows:
- GamingVM (≥70G): highest performance needed at all times, no matter what happens on other parts of the system.
- HTPC/Office (50G): movies should run smoothly, but some stuttering is okay.
- Office #2 & Office #3 (50G each): performance not really important – but they are accessed remotely via RDP, which should work at all times (no problem, I think).

Possible solutions:
1. (least expensive / most flexible / what about performance?) Get one 250G SSD, mount it via the Unassigned Devices plugin and put all my VM disks there.
2. (highest performance / okay-ish flexibility) Get one 120G SSD and pass it through to my GamingVM (see the sketch right after this post); put everything else on a second 120G SSD.
3. (most expensive / least flexible) Get one big physical SSD for each VM and pass it through. (Nope.)
4. Anything else?

Now I'm asking for opinions: which way would you go? And how much more performance would you expect from #2 compared to #1?

Thanks a lot!
me.so.bad
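For option 2, passing a whole unassigned SSD through to the GamingVM is usually done by pointing the VM at a stable /dev/disk/by-id/ path. A minimal sketch of the extra <disk> entry that would go into the VM's XML – the device ID below is made up, substitute whatever `ls -l /dev/disk/by-id/` shows for your drive:

```
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- hypothetical ID, replace with your SSD's entry from /dev/disk/by-id/ -->
  <source dev='/dev/disk/by-id/ata-ExampleSSD_120GB_SERIAL1234'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With the whole device handed to the guest, the VM formats and owns it, so it can no longer hold other vdisks – which is where the flexibility of option 2 falls short of option 1.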
me.so.bad Posted April 13, 2017 (Author)
I know that video, but he only did one short test with one command. What if four VMs simultaneously use vdisks on one SSD? Is there still only minor overhead, or does it break down completely?
1812 Posted April 13, 2017
It all depends on the IO requirements of each VM, the SSD interface speed, and the make/model of the SSD. They all share the same connection. At best they access the disk at different times and you notice almost no difference. At worst, they all access the disk at the same time, giving you something closer to spinning-disk speed, or worse.
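A rough way to put a number on that worst case is to benchmark the SSD from the unRAID console with one job versus several parallel jobs – something like the fio sketch below (fio has to be installed first, and the mount path, file size and runtime are only placeholder assumptions):

```
# baseline: one random-IO stream against the SSD (path is an example)
fio --name=single --filename=/mnt/disks/ssd/fiotest --size=4G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=16 --direct=1 \
    --time_based --runtime=60

# roughly simulate four VMs hammering the same SSD at once
fio --name=four-vms --filename=/mnt/disks/ssd/fiotest --size=4G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=16 --direct=1 \
    --numjobs=4 --group_reporting --time_based --runtime=60
```

If the aggregate result of the second run stays close to the first, the drive handles the concurrency well; if per-job throughput collapses, that is the "closer to spinning disk speed" case described above.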
me.so.bad Posted April 16, 2017 (Author)
Did a bit more testing: keeping the VMs on the cache shouldn't be a problem at all. But when the array is under heavy load, things can somehow get laggy. I haven't figured out exactly what the problem is.
1812 Posted April 16, 2017
If your shares are set to use the cache and you're writing to a share, the data hits the cache drive first, which limits your disk I/O.
me.so.bad Posted April 16, 2017 (Author)
I know, but cache I/O wasn't the problem here. I was rsyncing from /mnt/disks/… to /mnt/disk1/…
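If a large copy like that is what makes the VMs laggy, dropping the rsync's IO priority and capping its bandwidth is a cheap first mitigation. A sketch with placeholder paths and an arbitrary cap (on most rsync versions --bwlimit is in KB/s):

```
# run the copy with idle IO priority and a bandwidth cap so the VMs'
# vdisk IO gets serviced first (paths and the ~50 MB/s cap are examples)
ionice -c3 rsync -a --bwlimit=50000 /mnt/disks/source/ /mnt/disk1/target/
```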
1812 Posted April 16, 2017
The next things I'd look at are CPU pinning, the total available core count, and isolating cores from unRAID for the VM.
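For reference, on unRAID of that era pinning and isolation meant editing the VM's XML and the flash drive's syslinux.cfg by hand. A rough sketch – the core numbers are only placeholders for an example 4-core/8-thread CPU and need adjusting to your topology:

```
# /boot/syslinux/syslinux.cfg – keep unRAID off the cores reserved for the VM
# (cores 2,3 plus their hyperthread siblings 6,7 are example values)
append isolcpus=2,3,6,7 initrd=/bzroot

# and in the VM's XML, pin the guest vCPUs to exactly those cores:
#   <cputune>
#     <vcpupin vcpu='0' cpuset='2'/>
#     <vcpupin vcpu='1' cpuset='6'/>
#     <vcpupin vcpu='2' cpuset='3'/>
#     <vcpupin vcpu='3' cpuset='7'/>
#   </cputune>
```

Leaving at least one core pair free for unRAID itself usually matters as much as the pinning.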