Converting my bare metal Win10 to an unRAID Win10 VM Desktop


Bean

Recommended Posts

Hi everyone,

 

so I'm about to take the step into unRAID and would like to get some confirmation on my approach. I'm new to unRAID, so please bear with me.

 

What I've got:

System: i7-5820K, EVGA 980Ti, 16GB RAM, 400GB Intel 750 PCIe NVMe, 2x 500GB 850 EVO SSDs (RAID 0) on an Asus X99-S

Storage: Synology 1512+ with 5x3TB

Spares: GTX 560, WD Red 3TB HDD, 500GB 850 EVO SSD

 

So my plan is to have a Win10 VM with passthrough of the GPU and the entire NVMe drive (as the 'system drive');

combine the 3 SSDs into a cache pool with some of it allocated to the Win10 VM for big apps (mostly games);

an array with a single 3TB HDD for now; once I have it all running I'll take the whole build a step further with more drives;

I'd also like to have a Linux desktop, but I'm not sure if I should make it a VM running off the cache pool (using the spare GPU) or run it as a Docker container.

 

Now to my questions:

1. Is my plan feasible, or is there another 'best practice' way to go about it?

2. Would I be better off having the NVMe and SSDs in the same pool, performance-wise?

3. Regarding the Linux desktop: any drawbacks to having it as a Docker container and RDPing into it, compared to a VM? I won't be gaming on it, but the occasional YouTube or video stream (1080p max) would be used.

4. CPU assignment: do 2 cores + HTs for unRAID plus Docker/Linux VM, and 4 cores + HTs for the Win10 VM, sound good, or should I dedicate some exclusively to the Linux VM (it won't be in use much)?

5. Anything I have to watch out for with my hardware?

6. Whatever else I forgot to ask :)

 

Thanks for reading and for your input. I'm looking forward to taking the step to unRAID and will build a dedicated unRAID system eventually, but for now I'd like to use what I have around and learn its ins and outs.

 

PS: I'm no coder, but I can follow instructions very well.

 

 

 

Link to comment

You posted in the right section, but from my viewpoint, you have already made a decision on exactly how you are doing things, and if you have enough knowledge about unRAID to make that decision, you are probably savvy enough to get it working.

 

Too many of your questions are complex topics in themselves; they'd probably be better asked as separate posts instead of all together. Your primary objective, virtualizing a current Windows 10 machine rather than installing from scratch, is complex enough. It may work, it may not; there are too many variables to say for sure.

 

At this point, I think you need to just jump in and get started, tackle one aspect at a time, and ask for help if you can't find the answer by searching the forums.

 

Are you planning on testing your Windows 10 backup, or keeping the original drive intact so you can step back if something doesn't go as planned?

 

Your first question, is it feasible, is a big ask, and the answer largely depends on you and your technical ability. You are stepping outside the norm, but not by a large margin.

Link to comment

Couple of things around the NVMe drive...

 

If you're passing through the entire NVMe and want to use it as a boot device, you'll need a newer version of the OVMF BIOS, which isn't currently bundled with unRAID. Also, depending on your motherboard, you might not be getting the full x4 PCIe lanes assigned to your NVMe drive, so maybe consider a PCIe adapter card if you're not happy with performance or you can't pass it through because of IOMMU group issues. They're pretty cheap: http://amzn.to/2gHROoj

 

I do it like this:

Stub the device in the syslinux configuration, download the OVMF firmware from https://www.kraxel.org/repos/jenkins/edk2/, stick it on your cache drive, and reference it in your XML rather than using the default values:

 

<os>
  <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/VMData/Windows10VM/OVMF-pure-efi.fd</loader>
  <nvram>/mnt/user/VMData/Windows10VM/OVMF_VARS-pure-efi.fd</nvram>
  <boot dev='hd'/>
</os>

 

So my BIOS lives in /mnt/user/VMData/Windows10VM/
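
For the stubbing step, here's a rough sketch of what the relevant part of syslinux.cfg on the flash drive would look like. Assumptions on my part: 8086:0953 is the Intel 750's NVMe controller ID (check yours with lspci -nn and substitute), and your boot label may differ:

label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:0953 initrd=/bzroot

On older builds, pci-stub.ids=8086:0953 on the append line serves the same purpose.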

 

PCIe passthrough for my NVMe is pretty standard:

IOMMU group 41
81:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller [144d:a802] (rev 01)

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev3'/>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
</hostdev>

 

Also, post-install, install the correct driver for your NVMe device. Windows 10 attempted to update mine using the 'Windows Surface NVME controller' driver and it crippled the performance. Not sure if it's related to my specific NVMe device or if it just doesn't play nice with VMs... either way, it's bad.

 

Have fun!

 

PS: remember that editing your VM in the GUI after making the BIOS changes in the XML will wipe the changes out and mean you can't boot your VM...
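
If you do need to tweak things later, a sketch of a safer route (assuming your VM is named Windows10VM, like mine) is to edit the XML from the terminal rather than the GUI form view, and keep a copy of the working XML somewhere safe:

virsh edit Windows10VM   # opens the libvirt XML in an editor and validates it on save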

Link to comment

@jonathanm: my plan is just that, a plan, and subject to change if advised by more experienced people. I have no experience with unRAID whatsoever.

The backup, although a complete one, was intended more for some data that I don't have on my storage unit.

What I take from your post is that I'll have an easier time doing a fresh install (reinstalling some of my software will be time-consuming but nothing to worry about).

 

@billington.mark: thanks for your input. I have the Intel 750 PCIe version, so I should be good on that part. I'm hoping to lose as little performance as possible, so from what I understood a direct passthrough is the way to go (rather than having it as a cache drive)?

 

I take it mixing regular SSDs and the PCIe NVMe together in the cache won't work very well? I'm not exactly sure how the cache pool works and allocates data.

 

Will need to wait for the weekend as I'm too busy with work - hope to get some more input until then :)

 

Sent from my SM-T800 using Tapatalk

 

 

Link to comment

3. Regarding the Linux desktop: any drawbacks to having it as a Docker container and RDPing into it, compared to a VM? I won't be gaming on it, but the occasional YouTube or video stream (1080p max) would be used.

 

Install it as a VM. You don't have to assign a GPU to it if you don't want to, but you can assign the same GPU you use for Windows if you want (though obviously you can't run both at the same time then). You can even set up two XML files for the same Linux VM (pointing to the same vdisk), one with passthrough and the other without. That way you get the best of both.
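
As a rough sketch of the two-definition idea (the PCI address here is a placeholder, not from your system): both XMLs reference the same vdisk, and only the display device differs:

<!-- variant 1: no passthrough, viewed over VNC -->
<graphics type='vnc' port='-1' autoport='yes'/>

<!-- variant 2: the Windows GPU passed through instead -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>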

Link to comment

3. Regarding the Linux desktop: any drawbacks to having it as a Docker container and RDPing into it, compared to a VM? I won't be gaming on it, but the occasional YouTube or video stream (1080p max) would be used.

 

Install it as a VM. You don't have to assign a GPU to it if you don't want to, but you can assign the same GPU you use for Windows if you want (though obviously you can't run both at the same time then). You can even set up two XML files for the same Linux VM (pointing to the same vdisk), one with passthrough and the other without. That way you get the best of both.

That's great news - any pointers on how that works, assigning the same GPU to two VMs? I couldn't find anything in the documentation, and serial use (one at a time) works perfectly for me.

 

Sent from my SM-T800 using Tapatalk

 

 

Link to comment

I'm not really sure why people want to have large cache pools. I think it's better to have one SSD as a cache for speeding up disk writes and storing the docker image, then pass through another SSD for the domain share for storing the vdisk images. But that's just my personal preference.

My current system doesn't have a single HDD, so I got used to the speed and don't want to trade off too much of it. Also, I already have the NVMe and 3 SSDs, so it would be a shame to let them go to waste.

 

If I understand what you're advising correctly, translated to my gear that would be the NVMe as a passthrough (is this the same as unassigned disks?) and the SSDs as a btrfs RAID 1 cache?

 

Also, would it be possible to use the NVMe with both VMs, similar to the GPU?

 

Sent from my SM-T800 using Tapatalk

 

Link to comment

If you stub the PCIe device, it won't show in unassigned devices (which is what you want, if you want to pass through the entire device).

I wouldn't bother with the cache pool. If I understand it correctly, it'll run at the speed of the slowest drive, and when you're mixing SSDs and NVMe, that's going to be quite the difference!

 

Personally, I have one SSD for my cache and a couple of SSDs outside of the array for use with a few different VMs, then my NVMe passed through to my main workstation VM.

 

 

Link to comment

Thanks for the inputs. Now if only it were Friday already, I could give this a go. I've already prepared the USB stick.

 

Any specific drivers I should download ahead of time? I read somewhere that I should dump a ROM file from my GPU for it to perform well when passed through.

Link to comment

I'm not really sure why people want to have large cache pools. I think it's better to have one SSD as a cache for speeding up disk writes and storing the docker image, then pass through another SSD for the domain share for storing the vdisk images. But that's just my personal preference.

 

I use mine for VM storage, the docker image, and for creating a larger SSD cache "drive" that I can use for video editing. My FCP X files tend to run 30-300GB, so it's nice to string together a couple of smaller, cheaper SSDs into a larger one. But my case is a very specific need.

Link to comment

Thanks for the inputs. Now if only it were Friday already, I could give this a go. I've already prepared the USB stick.

 

Any specific drivers I should download ahead of time? I read somewhere that I should dump a ROM file from my GPU for it to perform well when passed through.

 

If you're using NVIDIA, you should be good out of the box. If you do need to go down the path of specifying a ROM as part of your XML, I'd create a bootable USB where you can dump it from the command line, then copy it to your server and add it to the XML later down the line.

I pass through an NVIDIA GPU to my VM fine without any noticeable performance hit.
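
For reference, the dumped ROM ends up referenced inside the GPU's hostdev block. A minimal sketch; the PCI address and file path are placeholders, not from any system in this thread:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/VMData/GTX980Ti.rom'/>
</hostdev>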

Link to comment

I'm not really sure why people want to have large cache pools. I think it's better to have one SSD as a cache for speeding up disk writes and storing the docker image, then pass through another SSD for the domain share for storing the vdisk images. But that's just my personal preference.

 

If you want to avoid using single disks, the only way to achieve redundancy is to use a cache pool.
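
For what it's worth, you can check and convert the pool's btrfs profile from the console. A sketch, assuming the pool is mounted at unRAID's default /mnt/cache (and noting raid10 needs at least four devices):

# show the current data/metadata profiles of the pool
btrfs filesystem df /mnt/cache

# rebalance an existing pool to raid1 (or raid10 on 4+ devices)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache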

 

Link to comment

I'm not really sure why people want to have large cache pools. I think it's better to have one SSD as a cache for speeding up disk writes and storing the docker image, then pass through another SSD for the domain share for storing the vdisk images. But that's just my personal preference.

 

If you want to avoid using single disks, the only way to achieve redundancy is to use a cache pool.

 

Yeah, that's a good point. I've actually thought of using a RAID 0 cache pool in the past rather than RAID 1, just to get higher IOPS, but I've been hesitant to use btrfs as a file system. I'm sure it's mature enough now, but I can't help worrying a bit about switching my cache from XFS. What do you think of btrfs?

Link to comment

So I took the plunge and got the array running, but I still struggle with the stub/passthrough of the NVMe PCIe device and the GPU. Looks like I have no choice but to install my old GPU temporarily for setting things up. This is going to be a potential PITA because my main GPU is watercooled in a hardpipe system...

 

Sent from my SM-T800 using Tapatalk

 

 

If I stub/passthrough the Intel 750 PCIe NVMe to the Windows VM, how do I set the size on VM creation? (I'd want to use it all.)

Link to comment

Well, I mostly have two issues. After spending the whole weekend rebuilding my system so I could put my old 560Ti in the first slot and pass through my 980Ti, the Linux VM I made still returns a black screen (and oddly enough will only shut down if I force a shutdown), and I have no clue how to even start the passthrough of the PCIe NVMe.

I've done so much reading over the weekend my head hurts, and I'm none the wiser. I always considered myself a decent heavy user, but unRAID, at least for now, seems to be beyond me - at least the VM part.

Think I need an 'unRAID for dummies' step-by-step guide. Anyway, I won't have time today to do anything.

 

Sent from my SM-T800 using Tapatalk

 

 

EDIT: I'll post a proper report of what I achieved and what failed, with logs, later...

Link to comment

Thanks for the input; as per my edit, I'll add a thorough report (won't be today). The 560Ti is solely for unRAID, as it doesn't seem to be possible to run just the 980Ti with no other GPU for unRAID (at least with NVIDIA - seems I had the unRAID concept of headless a bit wrong) - no need to stub that one, and it's not a big issue for now. It would be neat if that were possible down the road, but let's get things running properly first.

 

Sent from my SM-T800 using Tapatalk

 

 

Link to comment

So today I started all fresh and it works - the only things I did differently were wiping the 750 and realizing that my 980Ti was still connected to my wife's monitor, which Windows defaulted to, hence the black screen ::) ; the Intel 750 PCIe passthrough to the Win10 VM is working, as well as the GTX 980Ti.

 

I'm still in the process of reinstalling most of my software but so far so good.

Before I started this whole endeavour I ran a DiskMark test, so here are the results:

Bare metal (Intel 750 PCIe left; Samsung 850 EVO RAID 0 right):

[benchmark screenshots: IMG_20161217_170116, IMG_20161217_170100]

unRAID Win10 VM (Intel 750 PCIe left; Samsung 850 EVO RAID 1 cache pool right):

[benchmark screenshots: IMG_20161217_170042, IMG_20161217_170012]

 

4K performance suffered quite a bit - any suggestions/options to improve there, if that's possible at all?

The EVOs' performance loss is mainly due to not being in RAID 0 anymore, so that's fine for now. Is it possible to run the cache pool in RAID 10 instead? I guess that would bring the numbers back up.

 

I'll start using the PC as usual for the rest of the month and see how I like it (I doubt I'll feel the performance decrease much in real life, but we'll see). If all goes well, I'll get a license or two and keep expanding :)

 

Thanks for all the help

 

EDIT: I hit a short snag where my Win10 install would boot-loop, as it always defaulted to the install disk. I managed to add the NVMe drive through the virtual BIOS, and all went well after that.

Link to comment
  • 3 weeks later...

So after a few weeks of running the system, I've reverted to my bare metal setup. While the Linux VM worked effortlessly, the Win10 VM was sketchy at best. Sometimes it would run smoothly and then bog down every once in a while. Restarts were unpredictable, sometimes taking more than 15 minutes, and the same goes for logging out. I do like some aspects of unRAID and will build a dedicated system in the near future, but I won't be running a Windows VM on it.

Last but not least, I'm looking forward to improvements in unRAID for my future system, and I hope there will be some development to support all-flash setups with measures in place to keep them reliable (something similar to Synology's F1 or so).

 

Thanks for all your help. I'll be returning once I tackle the dedicated build - still contemplating picking up a used HP ProLiant system vs. building from scratch.

 

Sent from my SM-T800 using Tapatalk

 

Link to comment

 

 

Thanks for all your help. I'll be returning once I tackle the dedicated build - still contemplating picking up a used HP ProLiant system vs. building from scratch.

 

Sent from my SM-T800 using Tapatalk

 

I have 4 DL380 G6 machines. There are pluses and minuses.

Link to comment
