Would using VMs be appropriate for my dual-boot situation?



Hello folks - I need some advice on whether using VMs would be appropriate for my situation or not.

 

I have to stress up front that I know absolutely nothing about setting up or using VMs under unRAID, so please tailor your suggestions with that in mind.

 

The situation I am trying to find an improvement for is my main gaming PC which is currently set up to dual boot between Linux Mint (main OS) and Windows 10 (gaming OS). I am finding the time taken to switch between the two sides is deterring me from enjoying my Steam library on the Windows drive because I am a casual gamer and tend to have short sessions during the day and evening for light gaming. Booting back and forth is a barrier (albeit slight) to my relaxed play style and I wondered if VMs for Linux and Windows might help with this or not.

 

I also have a small unRAID NAS used by my wife for serving movies to an AppleTV. It is currently well specced for that purpose in that it only ever has to serve a single 1080p stream at a time and the movies on it are already encoded for the convenience of the AppleTV. This is where my unRAID Plus license resides.

 

I am not averse to consolidating both machines into one and fortunately the majority of my hardware is already broadly ready for that purpose (apart from the lack of enough SATA ports). Given that the NAS is already set up happily for my wife, it might be better to purchase another unRAID license just for the main gaming PC.

 

I do however wonder if setting up a pair of VMs for the two OSes is using a sledgehammer to crack the proverbial nut.

 

In the near future I intend to replace one or more of the SSDs with M.2 SSDs, probably Samsung 960 EVOs. This will reduce the number of SATA ports available on the motherboard. There may be an opportunity for an upgraded GPU in the medium term too.

 

The hardware in the two machines is as listed in my current forum sig. The main gaming PC is water-cooled with an Aquaero 6, and the 7700K is currently clocked at 5.0GHz at 1.30V. The motherboard in the main machine is an ASUS Maximus IX Formula with six SATA 6Gb/s ports (the M.2 drives will take up some of these if used). The 850 PRO is the Linux drive, one 840 PRO is for Linux backups and the other 840 PRO is the Win10 drive (and also where the grub2 bootloader resides). If upgrading to M.2 drives I would probably have one for each OS and pass the 840 PROs on to a family member.

 

My main concerns are usability and also power consumption. I would like to be able to sleep as much of the hardware and VMs as possible when not in use because I am the only user of the gaming PC and our NAS is only used occasionally by one person. There are no business applications on the machines at all - consider it home usage only. I don't know how one goes about flipping between the two VMs or how long it takes to fire one up.

 

So... do you think VMs are an appropriate tool here or are they not going to help me? What technical issues would I be likely to run into?

 

Thanks in advance.

47 minutes ago, DanielCoffey said:

I am finding the time taken to switch between the two sides is deterring me from enjoying my Steam library on the Windows drive because I am a casual gamer and tend to have short sessions during the day and evening for light gaming. Booting back and forth is a barrier (albeit slight) to my relaxed play style and I wondered if VMs for Linux and Windows might help with this or not.

 

 

Can you elaborate on this? It seems solving it is the crux of the issue.

 

Is it just a PITA to shut down your Linux rig to boot up the gaming config? So you really want both available at the same time, or for switching to be quick and not involve shutting down and rebooting?

 

I do expect that you could set up two VMs in unRAID, one Linux and one Windows gaming, each passing through the same GPU and USB (for keyboard and mouse). I then think you'd be able to start up the Linux box, do whatever, and when you want to swap to gaming, hibernate the Linux VM and start the Windows gaming VM. When you are done playing games for the time being, you could hibernate the Windows VM and resume the Linux VM, basically picking up where you left off. You'd need a tablet or your phone to complete the swap from the unRAID web GUI, but it would only take a couple of seconds.
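If you ever wanted to script that swap instead of reaching for the web GUI, unRAID's VM manager sits on top of libvirt/KVM, so a small helper along these lines would do it. This is only a sketch: the VM names are placeholders, and it uses libvirt's managed save (state saved on the host) rather than whatever the GUI's Hibernate button maps to exactly - either way the passed-through GPU and USB are freed for the other VM.

import libvirt

# Placeholder VM names - substitute whatever you call yours in unRAID.
LINUX_VM = "LinuxMint"
WINDOWS_VM = "Windows10"

def swap(conn: libvirt.virConnect, sleep_name: str, wake_name: str) -> None:
    """Save one VM's state to disk, then start (or resume) the other."""
    sleeper = conn.lookupByName(sleep_name)
    waker = conn.lookupByName(wake_name)

    if sleeper.isActive():
        # Managed save writes the VM's RAM/state to disk and stops it,
        # releasing the passed-through GPU and USB devices.
        sleeper.managedSave(0)

    if not waker.isActive():
        # create() starts the domain; if a managed-save image exists,
        # it resumes from that saved state instead of cold booting.
        waker.create()

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")   # run on the unRAID host itself
    swap(conn, LINUX_VM, WINDOWS_VM)        # e.g. switch from Linux to gaming
    conn.close()

Run it the other way round (swap the arguments) when you are done gaming and want Linux back.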

 

You could also run the Linux VM without hardware passthrough and use something like NoMachine to access it. I find this fine for casual VM use, but it reduces the user experience slightly and may not be fully satisfactory for frequent use. That said, NoMachine is pretty darn good, and you might find it works just fine for you. You'd run the NoMachine client from your Windows gaming VM, I expect.

 

The other option is to pass through two video cards and two USB ports - one set for Windows and one for Linux. You could have both up at the same time. Each video card could be plugged into a different monitor input, and the USB ports could feed into an A-B switch box so you can swap the keyboard and mouse between them. This would give you a lot of convenience, with switching happening near instantly. It would require a motherboard that supports two video cards and dual USB passthrough. It seems like a lot of complexity, but it may be what is required to satisfy your use case.

 

The other thought that comes to mind is running something like VMware on the Windows box and virtualizing the Linux machine in VMware. unRAID is not involved. Then you'd always run Windows and bring up the Linux VM as a Windows app. You can play games and then, more or less instantly, switch over to the Linux VM to do your normal tasks. No hibernate/unhibernate. This can be set up for free by using the VMware Workstation 30-day trial to set up the Linux VM, and then switching to the free VMware Player to run it every day.

 

I think once you figure out the game plan, the forum can help you realize it.

 

 


Thanks for thinking about this for me.

 

Yes, it is the time spent restarting and rebooting throughout the day that I am trying to reduce, as well as the fact that when I do go into Win10 I have to wait for the services to start up, for Kaspersky to tell me my databases are extremely out of date and that it has failed to update them for me (even though it is set to automatic), and then wait while I update them manually (slowly - but that is a separate issue to do with configuration). Chances are that something else will then nag me that it wants updating. Only then am I free to actually do what I want.

 

I am a full-time carer, so while I do have a lot of time to spend on the PC, it is in small chunks of half an hour or so. I might feel like playing a Steam game from my Windows library for a short while, but I am finding the hassle of swapping over on a dual-boot setup a deterrent, so I tend to just putter about on the Linux side not doing very much until my half hour is over.

 

If I were to use unRAID to act as the host for the two VMs, how is hibernation/sleep handled? How much of the machine would need to be running when I wasn't wanting to use a VM at that time?

 

The NAS is not used often and tends to be "by appointment". This was why I was considering leaving the NAS in its own box, although I can see the attraction of putting them all together, as I could then allow the NAS to act as a backup for the main machine, freeing up an SSD. I am aware that I do not have an offsite backup, but it is really only for the "oops" situation rather than anything critical.


In your case I would personally go for having the existing PC natively running Windows, with Linux running under VMware Workstation hosted on the Windows system. If you leave the Linux VM running all the time, that gives you virtually instantaneous switching between Windows and Linux. You could easily try this out on your existing system to see if it meets your needs, as it needs no additional hardware. This would also mean you leave the existing unRAID system undisturbed.

 

You could go the unRAID route with both Windows and Linux VMs. However, a GPU cannot be shared between VMs that are running at the same time, so you may end up using an RDP connection from one VM to the other in this scenario anyway. This would be the way you might want to go if you have a strong desire to consolidate down to a single physical machine.


Another option would be to set up a daily-driver Linux as a VM on your current unRAID box. This would be accessed using a remote network connection from your Windows box; there would be no passthrough on the server.

 

There are no penalties for evaluating this as an option: you can leave everything set up exactly as you have it now, add a VM to the unRAID box, and log into it using either your current Mint install or Windows. I must warn you, though, that if you need audio on your daily driver it's a little difficult to accomplish on a headless Linux VM. It's a little easier on a remote Windows VM with RDP, but there is no good equivalent for Linux that I have found.

 

If you do this, be sure you use a proper VNC client to connect to it; the noVNC browser client has issues. You may get a slightly better connection if you enable a VNC server inside the VM itself instead of using unRAID's VNC console address, but you won't be able to manage the boot process with that connection.

 

 


Thanks for the suggestions folks - it has been a useful education into the sorts of things a VM can help me with and the associated issues.

 

I can see that most of the use cases here have pros and cons. Some might require multiple GPUs, others require software licenses and remembering to sleep both the VM and the host every day. I can't really use the existing unRAID box for the Linux daily PC as it would be running off the i3 iGPU, which would not be sufficient for Linux gaming - which, to be fair, I had forgotten to mention, so apologies. I have about a third of my Steam library on Linux and the rest of the big games on Windows.

 

As I said, I appreciate the education.

 

What I think I may consider is waiting until I have the new GPU, keeping my current 780 Ti instead of passing it on to family, and allowing the current case to house both GPUs. Once I am in that position, I will be back in touch with specific questions.

 

Thanks.


If you need a high-powered GPU for both Linux and Windows, the first option I laid out, with hibernation, may be the best. "Hibernate" is an option on unRAID's VM (KVM) menu and is not the same as hibernating Windows. You just click the VM, the menu pops up, and you click "Hibernate" (see the screen capture below). It takes just a couple of seconds, and you can wake it up just as fast when you want to resume it. Once hibernated, you could pass through the GPU and USB it was using to another VM - namely your Linux one. This is exactly what you want, because you want to keep using the same monitor and keyboard/mouse. I think this would be a very good option for one video card and one machine.

 

With a trial license you could try this out. Look into tools for virtualizing a physical machine (P2V). I like Acronis, but there are other options.

 

Good luck!

 

[Screenshot: unRAID VM menu showing the Hibernate option]


Hmm... that sounds exactly like something I could try now to see how quick the hibernate/wake swap is.

 

Out of interest, if I wanted to do this with unRAID Trial (on a new USB stick I assume), one Mint VM and one Win10 VM, how many drives would I need to scrounge together and which would benefit from being SSDs? Assume I leave my existing NAS alone for now and just have this as a basic unRAID host of the two VMs.

8 hours ago, bjp999 said:

VMware on the Windows box, and virtualizing the Linux machine in VMware. unRAID is not involved.

 

Really, what bjp999 suggested is the best advice in technical terms for exactly what you asked for. Keep your current Windows installation and add VMware. You can migrate your Mint installation into a virtual disk on your Windows drive, and if you're happy with the performance (I can't imagine why you wouldn't be, given your use case) that would free up the drive you currently have Mint installed on. Not only is this the best fit for your use case, it also gives you back a whole drive out of this scenario.

 

Technical problem solved; however, you've got some deeper psychological questions to ask yourself. You say you want to play games, but your real focus seems to be tinkering and homelab-type stuff. Ask yourself these questions... I'm not looking to know the answers, but maybe they'll help you.

 

Do you spend less than 3 hours a day actually playing games (tweaking/overclocking does not count)? Compared to gaming, is more of your time spent overclocking, tweaking and experimenting? Is it easier to tell your wife that you need to go do these activities, or spend money on computer equipment, because you're improving household things like the NAS, than it is to say you're playing games?

 

If you answered yes to all three, then by all means welcome to the club! We will be seeing a lot of you around here! Get unRAID on that USB and start to figure out how to pass through your existing drives to VMs - it'll be a fun journey!

 

 

2 hours ago, DanielCoffey said:

Hmm... that sounds exactly like something I could try now to see how quick the hibernate/wake swap is.

 

Out of interest, if I wanted to do this with unRAID Trial (on a new USB stick I assume), one Mint VM and one Win10 VM, how many drives would I need to scrounge together and which would benefit from being SSDs? Assume I leave my existing NAS alone for now and just have this as a basic unRAID host of the two VMs.

 

Just set up disk1 as an SSD, with no parity. That should work fine for the test.


I'm in! USB pendrives and a pair of SAS-to-4xSATA cables are ordered, and an LSI SAS9201-8i is on the way from a nice UK eBay seller.

 

Once I receive the 9201-8i I will check its firmware while I still have direct access to my Windows machine and see if it needs flashing or not.
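In case it helps anyone following along, here's a rough idea of the check I have in mind, assuming the LSI/Avago sas2flash utility is installed and on the PATH (it ships for both Windows and Linux; the exact firmware revision to compare against depends on the card, so that part is left out):

import subprocess

def list_lsi_controllers() -> str:
    """Ask sas2flash to list all SAS2 controllers it can see, including
    the firmware and BIOS versions currently flashed on each."""
    result = subprocess.run(
        ["sas2flash", "-listall"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(list_lsi_controllers())
    # Compare the reported firmware against the latest IT-mode release
    # (P20 is the version commonly recommended for unRAID) before flashing.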

 

So, as promised, here are the next set of questions...

 

1. When selecting a drive for unRAID to create a VM on, I assume there are no advantages to having each of my two VMs on its own SSD since I will be hibernating and waking the two VMs as needed. All that matters is the amount of free space that I want to allocate to the VM, yes?

 

2. Given that I have a Plus license on our NAS and that I do intend to move it over into my big case once I have the VMs sorted out, is it simple to use a new Trial license while I play around to get the VMs right, then bring my Plus license stick over and "inherit" the VM settings? I assume I would have to tell my Plus license how I wanted the VMs set up, but I would like to avoid having to create and install their contents from scratch.

 

3. Does the drive that the VMs live on get protected inside the Parity drive(s) at all?

 

I think that is all for now but of course there will be more once the stuff arrives.


1 - Correct. Even if the two VMs were running at the same time on one SSD, I doubt you'd be able to tell.

 

2 - The VM is made up of two parts: the virtual disk file and the VM configuration. The virtual disk file is a large file stored on your SSD. The configuration is easily copied from the GUI; internally it is stored in the libvirt loopback device. Bottom line, moving the VM to another unRAID license would be easy to do.
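To give a rough picture of how portable that is, here's a sketch of pulling the definition off the trial box and defining it on the Plus box using the libvirt Python bindings that unRAID sits on top of. The VM name and hostname are made up, and in practice you can just as easily copy the XML out of the VM's XML view in the GUI:

import libvirt

VM_NAME = "Windows10"  # placeholder - use your VM's actual name

# Dump the VM definition from the trial box (run this on that box)...
src = libvirt.open("qemu:///system")
xml = src.lookupByName(VM_NAME).XMLDesc(0)
src.close()

# ...and define it on the Plus-licensed box (hostname is hypothetical).
# The vdisk file the XML points at must be copied over separately, and the
# <source file='...'> path edited if it lives somewhere else on the new box.
dst = libvirt.open("qemu+ssh://root@plus-server/system")
dst.defineXML(xml)
dst.close()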

 

3 - Not typically. Putting it on the protected array would slow down the VM. Instead, put it on your SSD cache and back it up to the protected array from time to time.
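As a minimal example of that backup step, something like this could be run on a schedule, with the VM shut down or hibernated so the copy is consistent. The paths are assumptions based on unRAID's default "domains" share for vdisks - adjust them to wherever yours live:

import shutil
from datetime import datetime
from pathlib import Path

# Assumed layout: vdisk on the cache-backed "domains" share, backups going
# to a share that lives on the parity-protected array.
VDISK = Path("/mnt/cache/domains/Windows10/vdisk1.img")
BACKUP_DIR = Path("/mnt/user/backups/vm")

def backup_vdisk() -> Path:
    """Copy the vdisk to the array with a date stamp in the file name."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    dest = BACKUP_DIR / f"{VDISK.stem}-{datetime.now():%Y%m%d}{VDISK.suffix}"
    shutil.copy2(VDISK, dest)
    return dest

if __name__ == "__main__":
    print(f"Backed up to {backup_vdisk()}")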


Thanks for the replies - I understand more about how it will fit together now.

 

Quick question about the old 9201-8i, which is dated 2010... would it be reasonable to assume the thermal paste under the big heatsink is totally baked dry by now, and to replace it while I have the card on the bench? I see the heatsink appears to be held in place with the usual pair of springs and nylon flared clips. I am aware the nylon clips may be brittle. I did see a picture of one 9201 on eBay that was missing its heatsink, and it appeared to be covered in what I would describe as thermal epoxy, not thermal paste - crusty, hard, yellow stuff. Are the heatsinks glued on, or is it just paste?


Never seen this discussed, but you have a valid point. Stripping the old material and replacing it with Arctic Silver 5 or similar may keep the controller chip cooler, but perhaps with some risk of damage in the process. Your call, but I will say most users don't bother.

 

I also saw that controller with the missing heatsink. It kinda pissed me off, as an unsuspecting user could easily buy it without a close study of the picture, and finding a replacement heatsink could be a PITA! It should have been mentioned in the writeup, IMO.

 

Interesting you should mention this, as I recently did an upgrade and am now using an older controller that was bought used and has been shelved for several years - probably considerably older than 2010! I noticed last night it was running at 47C, which seemed hot. It's actually mounted quite close to a GPU, which I assumed to be the reason for the heat. It has a fan installed on the heatsink as it must run hot anyway. Maybe I'll try replacing the thermal compound and see if it cools better.

 

Thanks for a good thought! Post your results if you do it!


After experiencing the death of one of my well-proven 9201-8i cards last month with no known cause, all I can do is suspect age/heat are to blame. I'm curious to know as well what you find with the thermal paste and whether it's worth the effort to replace it.

