20 Windows 10 VMs?



The SSD would be the bottleneck, especially if it's a consumer-grade SSD.

 

128GB of RAM isn't a lot for 20 Windows VMs, but depending on what those VMs are supposed to do, it could be enough, e.g. for basic things like browsing, office work, etc. Ideally, you'd have 8GB of RAM per Windows VM; that seems to be the sweet spot for a "general purpose" Windows VM.
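
Quick sanity check on those numbers (the 8GB host reserve below is my own assumption, not something from the original post):

```python
# Back-of-envelope RAM check for 20 Windows 10 VMs on a 128GB host.
total_ram_gb = 128
vm_count = 20
host_reserve_gb = 8  # assumed reserve for the hypervisor/host OS

per_vm_gb = (total_ram_gb - host_reserve_gb) / vm_count
print(f"RAM per VM: {per_vm_gb:.1f} GB")                   # ~6.0 GB
print(f"Needed at the 8GB sweet spot: {vm_count * 8} GB")  # 160 GB > 128 GB
```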

Link to comment
2 hours ago, horace667 said:

Greetings,

 

Can I run 20 Windows 10 VMs effectively?

 

My server has 2x Xeon E5-2660 v2 @ 10 cores each and 128GB of RAM. I also have a 1TB SSD cache disk.

 

I would think the RAM would be the bottleneck.

 

Thanks.

 


Define effectively.

 

If you expect SSD speed across all simultaneously, I think you'll run into issues (20 machines all hitting a single SSD).

If you expect decent performance across all simultaneously, I think you'll run into issues (20 single-core Windows machines).

Could you do it? Sure. Would anyone using one of those VMs find it "nice"? I don't think so.
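
To put rough numbers on the SSD contention (the throughput and IOPS figures below are assumed ballparks for a consumer SATA drive, not measurements):

```python
# Rough worst-case split of one SATA SSD across 20 VMs.
ssd_seq_mb_s = 500   # assumed consumer SATA SSD sequential throughput
ssd_iops = 90_000    # assumed 4K random IOPS; check your drive's spec sheet
vm_count = 20

print(f"Per-VM throughput, all hitting it at once: {ssd_seq_mb_s / vm_count:.0f} MB/s")
print(f"Per-VM IOPS under the same assumption: {ssd_iops // vm_count}")
```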

Link to comment

Yeah, why not?

 

Windows 10 doesn't consume much sitting idle. Give each one 1 core without HT and 4GB of RAM, and you still have half your machine left to do something useful.

 

You could give each of them 4 cores and 8GB when you install/update, then dial it back when you have everything set up. ~30GB of HDD space each and you still have about a third of your cache drive left over. But then you'd have 20 VMs being VMs for the sake of being 20 VMs. You can't actually do anything with them except VNC into each of them individually and marvel at your own excess.
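
The arithmetic on that plan, as a quick script (all figures are the ones suggested above):

```python
# Totals for the "install big, dial back later" plan above.
vm_count = 20
host_threads = 40  # 2x E5-2660 v2: 20 cores / 40 threads with HT

steady = {"cores": 1, "ram_gb": 4, "disk_gb": 30}
totals = {k: v * vm_count for k, v in steady.items()}

print(f"Steady state: {totals['cores']} cores, {totals['ram_gb']}GB RAM, "
      f"{totals['disk_gb']}GB disk")                      # 20 cores, 80GB, 600GB
print(f"Threads left for the host: {host_threads - totals['cores']}")
print(f"Cache SSD left over: {1000 - totals['disk_gb']}GB of ~1TB")
```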

 

20x GPU passthroughs...  Nope.

20x Keyboards/mice...  Nope.

 

I actually think your NIC will be another chokepoint.  1/20th of a gigabit connection after VirtIO overhead...
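
Rough numbers on that, assuming a ballpark 10% VirtIO overhead (a guess, not a measured figure):

```python
# Per-VM share of a single gigabit NIC split 20 ways.
link_mbit = 1000
overhead = 0.10  # assumed VirtIO overhead fraction; measure on your own setup
vm_count = 20

per_vm = link_mbit * (1 - overhead) / vm_count
print(f"Per-VM bandwidth, all active: {per_vm:.0f} Mbit/s (~{per_vm / 8:.1f} MB/s)")
```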

 

20x Win10 licenses...  Bill Gates would say YES!

Link to comment

Use case: a 20-person office. If you keep thin-client costs down, it might even be cost-effective.

 

20 thin clients using RDP to access the VMs... Maybe. Assign them passthrough network addresses, DHCP or static, such as 192.168.0.xx on the local LAN, using network bridge br0 instead of virbr0 (teaming up more than one Ethernet interface would be helpful). A sketch of the address plan is below.
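
Something like this for the addresses (the subnet and starting offset are placeholders for illustration, not a recommendation):

```python
# Hypothetical static address plan for 20 VMs on bridge br0.
# The 192.168.0.0/24 subnet and the .101 starting offset are placeholders.
for i in range(20):
    print(f"vm{i + 1:02d}  192.168.0.{101 + i}  (bridge: br0, RDP port 3389)")
```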

 

Perhaps I'll give it a try sometime with 3 or 4 NICs teamed up to the switch. I don't have 20 thin clients to play with, but I could see if all 20 VMs would at least load up, with maybe 5-10 people accessing that many at the same time. Once I acquire the two 2TB cache SSDs (when prices fall a bit more in a month or two), I could configure 10 VMs to use each SSD, and loop something kind of resource-intensive on the unused VMs to keep them busy. 6GB of RAM for each VM: with 128GB to start with, that's 120GB used and 8GB left over for the OS and overhead. I might have to turn off some/most Dockers and/or plugins. Split up the cores and HT cores 10/10, one for each VM, keeping 12 for the OS.
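
Quick check that the plan fits (the SSD sizes are the planned 2TB drives, not hardware I have yet):

```python
# Capacity check for the plan above: 6GB/VM, two 2TB cache SSDs, 10 VMs each.
vm_count = 20
ram_total_gb = 128
ram_per_vm_gb = 6

ram_used = vm_count * ram_per_vm_gb
print(f"VM RAM: {ram_used}GB, left for OS/overhead: {ram_total_gb - ram_used}GB")

vms_per_ssd = vm_count // 2
print(f"VMs per 2TB SSD: {vms_per_ssd} (~{2000 // vms_per_ssd}GB raw each)")
```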

 

High performance? No. Acceptable? Maybe...

Edited by ClintE
Link to comment
