richardsim7 Posted September 5, 2017 I have the same issue. I didn't realise isolating CPUs could be the cause of it...
Tyranian Posted September 5, 2017 I only noticed because I tried to isolate the CPUs after I had already built the machine. As soon as I started it I got the error; once I removed the isolation I could boot into Windows again.
richardsim7 Posted September 5, 2017 Share Posted September 5, 2017 Ah ok, interesting So yeah, would be interesting to see if we can get this working natively instead Like you say, @billington.mark's instructions are for starting from scratch Quote Link to comment
Tyranian Posted September 5, 2017 Share Posted September 5, 2017 I did try last night to use the OVMF and was able to create a machine and once started see the NVMe in the bios options and set it to be the boot device but unfortunately, it hung on boot and I couldn't get windows to start again, I have a horrible feeling I might have to rebuild the VM from scratch or backup create a new VM and restore onto it.. but lack the time atm to give that a go. 1 Quote Link to comment
billington.mark Posted September 13, 2017 On 9/5/2017 at 10:34 PM, Tyranian said: I did try last night to use OVMF and was able to create a machine, see the NVMe in the BIOS options once it started, and set it as the boot device. Unfortunately it hung on boot and I couldn't get Windows to start again. I have a horrible feeling I might have to rebuild the VM from scratch, or back up, create a new VM and restore onto it, but I lack the time at the moment to give that a go. Try booting from a Windows ISO and running a startup repair. The boot partitions on the NVMe disk will be different if you originally installed while the device was on the VirtIO bus. If that doesn't work, you'll need to attach the NVMe as a secondary disk on another VM to drag off the files you need, then do a fresh install.
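For the secondary-disk rescue route, the passed-through NVMe controller can be attached to another VM as an additional PCI device. A minimal libvirt sketch; the 01:00.0 PCI address is an assumption for illustration, so substitute your controller's address from lspci:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With no `<boot order/>` element the NVMe stays a secondary device, so the rescue VM still boots from its own vdisk and the NVMe simply appears as an extra drive to copy files from.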
richardsim7 Posted September 24, 2017 Share Posted September 24, 2017 So I gave in and did a fresh install on the NVMe, very fast, very nice, love it I want to transfer files across from my old VM (vdisk) ideally mounting it with this VM and booting to the NVMe. However every time I try, it just boots into the vdisk. How do I access the boot menu, or mount the vdisk so it won't boot from it? Quote Link to comment
richardsim7 Posted September 24, 2017 Share Posted September 24, 2017 Nevermind, I used the method at the bottom of this thread: https://forums.lime-technology.com/topic/54953-solved-how-to-mount-vdiskimg-files/ Quote Link to comment
shrmn Posted November 12, 2017 (edited) Thanks for your very helpful videos, which convinced me to give unRAID a shot after my NAS died and took 3 WD Red drives with it. So I've just started dabbling in unRAID, and the main thing I needed was a gaming VM on an NVMe drive, running alongside the 6-drive array. (I basically took my gaming rig and converted it for unRAID.)

I followed @billington.mark's steps and used @gridrunner's scripts to have the OVMF files copied over on array start. Windows 10 (build 1709 / Fall Creators Update) installed really quickly. At this point I had forgotten to mount the VirtIO drivers ISO to get the drivers in, so I did, and the ethernet, serial and other drivers installed successfully. I also passed through an AMD RX 580, through which I had been setting all this up.

So the next thing I did was open Edge and grab the latest RX 580 drivers. Installation went through smoothly, and I chose to restart the VM through the standard Start menu. Boom. BSOD. The error said something about attempting to write to read-only memory. It rebooted on its own and is now stuck at the TianoCore logo, with nothing else.

I turned on the monitor connected to the host machine's integrated graphics and noticed that this keeps appearing at random: /dev/nvme0n1: No such file or directory It has appeared 13 times in the last hour of getting Windows running. Any idea what's causing this?

EDIT: Maybe I should add that I configured the VM to start with 2GB of RAM and allowed it to go up to 8GB (the host has 32GB non-ECC). Task Manager shows 7.3/8.0GB used right after startup. Is this normal? 4 of the 8 threads (2 cores) of the host's i7-6700 are assigned to the VM. Edited November 12, 2017 by shrmn
shrmn Posted November 15, 2017 An update on the issues I faced above: I re-created the entire VM, this time stubbing the NVMe drive. The console flooding of "/dev/nvme0n1: No such file or directory" no longer happens. This time I also made the min and max memory settings a fixed 16GB, and memory use is down to a nominal 2GB at startup. Everything works fine with a steady 100+ fps in Overwatch on the RX 580, but! Sound suddenly drops out 5-10 minutes into any game or video (on YouTube). The only way to fix it is to restart the VM. I've tried an external USB-based Creative Sound Blaster HD, but the same issue persists.
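For reference, the ballooning behaviour described in the posts above comes from the template's min and max memory fields mapping to two different elements in the VM's libvirt XML. A sketch of the fixed 16GB setting (values are in KiB; this is the standard libvirt form, not copied from the poster's actual template):

```xml
<!-- Maximum guest memory: 16 GiB (16 * 1024 * 1024 KiB) -->
<memory unit='KiB'>16777216</memory>
<!-- Initial/ballooned allocation; setting it equal to <memory> gives a fixed size -->
<currentMemory unit='KiB'>16777216</currentMemory>
```

When `<currentMemory>` is lower than `<memory>`, the balloon driver inside the guest can hand memory back to the host, which is what produced the odd usage figures reported earlier.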
Chamzamzoo Posted March 16, 2018 I was wondering if you'd be able to run your CrystalDiskMark test with the new version 6 please, Gridrunner? Any idea why the 4K Q32T1 test got slower?
steve1977 Posted May 13, 2018 Great to see this working. I am running the latest stable unRAID with a Windows VM using the OVMF BIOS. Do I still need Clover and XML editing to get this to work? I'd prefer not to, as otherwise I'll need to manually edit the XML file every time I change the RAM or CPU configuration. Thanks!
steve1977 Posted May 18, 2018 Any thoughts? Thanks for all of your help!
steve1977 Posted May 19, 2018 On 3/20/2017 at 7:29 PM, billington.mark said: You can boot into Windows from a passed-through NVMe without using Clover at all. I've been doing this for a while, since the first 6.3 RC. Bear in mind this assumes you're doing a fresh install. The only reason you can't do this natively is that unRAID ships with an older OVMF bootloader for VMs, from before it started supporting NVMe devices. We will just be downloading and referencing a different bootloader so we add support for direct NVMe booting. Let me be more specific with my question. Does unRAID 6.5.2 come with an updated OVMF bootloader that supports NVMe devices? If so, does this allow me to pass through an NVMe drive without custom edits to the XML file?
SpaceInvaderOne Posted May 19, 2018 Author (edited) Yes, unRAID supports this now. You just need to pass through the NVMe controller, for example:

IOMMU group 14: [1179:0115] 01:00.0 Non-Volatile memory controller: Toshiba America Info Systems XG4 NVMe SSD Controller (rev 01)

Take the ID and add it to the syslinux config file:

label unRAID OS
menu default
kernel /bzimage
append initrd=/bzroot vfio-pci.ids=1179:0115

Then just add the NVMe controller in the VM template and remove any vdisks. Edited May 19, 2018 by gridrunner
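The vendor:device pair used for vfio-pci.ids can be read straight from numeric lspci output. A minimal sketch, assuming the sample Toshiba controller above; the hardcoded string stands in for what `lspci -n -s 01:00.0` would print on a real host:

```shell
# Sample numeric lspci output for the controller at 01:00.0 (assumed sample;
# on a real host you would run: lspci -n -s 01:00.0)
LINE="01:00.0 0108: 1179:0115 (rev 01)"

# Pull out the vendor:device field, which is exactly the format vfio-pci.ids expects
ID=$(echo "$LINE" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}' | tail -n1)
echo "$ID"
```

The 0108 before the ID is the PCI class code for an NVMe controller, so matching on the 4-hex:4-hex shape skips it and the bus address.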
steve1977 Posted May 19, 2018 Share Posted May 19, 2018 Wow, fantastic. Thanks for your note. No fiddling with the XML needed anymore? Above wouldn't "stub" the NVME, so I could even run two VMs on the same NVME? And lastly, does your guide about cloning the vidsk to NVME still work or will this require a fresh windows vm install? Quote Link to comment
steve1977 Posted May 20, 2018 Share Posted May 20, 2018 I gave it a try. Haven't tried to clone the vdisk, but realized that my NVME no longer shows up as UD in the Unraid GUI after editing syslinux. Is that an issue or expected behavior? My plan is to split one NVME to use half space for a WIndows and half for a Mac VM. Quote Link to comment
SpaceInvaderOne Posted May 20, 2018 Author 22 minutes ago, steve1977 said: I gave it a try. I haven't tried to clone the vdisk yet, but I realized that my NVMe no longer shows up as a UD (Unassigned Device) in the unRAID GUI after editing syslinux. Is that an issue or expected behavior? My plan is to split one NVMe, using half the space for a Windows VM and half for a Mac VM. Yes, binding it to the vfio-pci driver in the syslinux config isolates it from the host; this is expected behaviour. After you have added the NVMe drive to the VM template, you can remove the line from the syslinux config file. Reboot and it will be back in UD. When you then start the VM it will still be passed through, and you will also see it listed as passed through in that original VM template if you click edit. However, if you go to create a new VM, you will not see the device listed under other PCI devices to pass through (because unRAID only lists items in that section of the template that are bound to the vfio-pci driver).
steve1977 Posted May 20, 2018 Share Posted May 20, 2018 Thanks, sounds brilliant and like the way to go. This should allow to serve my purpose to use half the space on my NVME to have a Win VM (native) and the other half to have a Mac VM (also native on another partition of the same NVME). Besides technical feasibility, would this make sense to pursue? Two more questions: 1) When using the tactic to add and after setting the VM up removing from syslinux, would this keep things working when I make changes to the VM configurations (e.g., adding more ram or changing the # of cores). By doing so, would I wipe out the info about the passthrough after hitting "update"? 2) Would cloning the disk to NVME still work. So, I'd just format in NTFS the NVME partition when running on vdisk. Then clone to NVME. Then remove the vdisk and all set? This reads too easy to be true? Quote Link to comment
david279 Posted May 21, 2018 I would watch out for Clover doing something to your Windows boot if it's on that same drive. Sent from my SM-G955U using Tapatalk
steve1977 Posted May 27, 2018 Not sure I understand your point. Does it relate to my two questions? Thanks!
steve1977 Posted June 2, 2018 On 5/21/2018 at 6:46 AM, steve1977 said: Thanks, that sounds brilliant and like the way to go. It should serve my purpose of using half the space on my NVMe for a Windows VM (native) and the other half for a Mac VM (also native, on another partition of the same NVMe). Besides technical feasibility, does this make sense to pursue? Two more questions: 1) When using the tactic of adding the syslinux entry and removing it after setting up the VM, will things keep working when I change the VM configuration (e.g. adding more RAM or changing the number of cores), or will hitting "update" wipe out the passthrough? 2) Will cloning the disk to the NVMe still work? I'd just format the NVMe partition as NTFS while running on the vdisk, then clone to the NVMe, then remove the vdisk, and all set? This reads too easy to be true. Any thoughts on the above?
steve1977 Posted June 3, 2018 On 4/22/2017 at 1:23 PM, cman9090 said: Any confirmed working cases for OSX? Anyone tried? If I read this thread correctly, this should even work without Clover? My "dream" would be to have two partitions on my 1TB NVMe drive: one for a Windows VM and one for a Macintosh VM. No more fiddling with Clover, and likely the best performance. Has anyone tried this, and can I run both in parallel?
wrotruck Posted June 12, 2018 (edited) On 5/19/2018 at 9:23 AM, gridrunner said: Yes, unRAID supports this now. You just need to pass through the NVMe controller, for example: IOMMU group 14: [1179:0115] 01:00.0 Non-Volatile memory controller: Toshiba America Info Systems XG4 NVMe SSD Controller (rev 01) Take the ID and add it to the syslinux config file: label unRAID OS / menu default / kernel /bzimage / append initrd=/bzroot vfio-pci.ids=1179:0115 Then just add the NVMe controller in the VM template and remove any vdisks. I stubbed the NVMe controller, and in the VM template I have the vdisks set to none. The NVMe device passes through without issue and I am able to load Windows 10. However, every time I boot or reboot the VM I have to go into the virtual BIOS (OVMF) and specify booting from the NVMe drive. Everything else seems to work fine, and while this doesn't break anything, it is kind of annoying. Is there a way to set the NVMe device to be first in the VM's boot order? Edited June 12, 2018 by wrotruck VM BIOS addition
BobPhoenix Posted June 12, 2018 What does the XML look like for your NVMe device? What you are looking for is: <boot order='1'/> If that tag isn't on your NVMe device, then remove it from the device it is on and add it to the NVMe device. Every time you edit the VM you may have to repeat the process.
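Put together with the passthrough entry, the hostdev block for the NVMe controller would look something like this. The 01:00.0 address is an assumption carried over from the earlier Toshiba example, so use your own controller's address:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- Makes OVMF try this device first, so no manual BIOS boot selection is needed -->
  <boot order='1'/>
</hostdev>
```

Remember to remove `<boot order='1'/>` from whichever `<disk>` or `<hostdev>` element currently carries it, since libvirt allows only one device per boot position.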
steve1977 Posted June 12, 2018 I am planning to give it a try this weekend. It seems that stubbing the drive may be easier as a first step? Can I pass two partitions of the same disk to two different VMs and run them in parallel (one Windows VM, one Mac VM)? If so, how, and has anyone tried it with a Mac?