Warrentheo Posted February 19, 2018

I am running a Windows 10 Pro gaming VM with Nvidia pass-through guest on top of an unRaid 6.4.1 drive pool, and have most of the kinks worked out of the system... I have a Steam library approaching 2.25 TB, and it works just fine in the Win10 VM as an SMB share... However, several applications refuse to deal with games installed on a Windows network share instead of local storage (the profile software for my gaming controller, as well as the "ShadowPlay" game-sharing functionality for my Nvidia cards)... I have come to the conclusion that I need to change the way Windows sees the pool to a local block device somehow (so I can give full "NTFS-like" functionality to the guest VM)...

I have researched KVM/QEMU to see what can be done with some sort of storage pass-through, and while this looks fairly trivial for a Linux-like guest, it appears from my research to just not be possible with a Windows guest without exposing the array as a raw block device or files in some way... So now I am trying to research the best way to do that... The ideal solution needs to:

1. Show up as local storage in the Windows VM... (I have already tried switching to "Client for NFS" in Windows; it didn't help... As far as I can tell, no network protocol of any sort is going to work)
2. Allow easy migration between VMs, preferably with some sort of sharing between multiple VMs... But simultaneous access is not a requirement...
3. Allow mounting outside of a VM if needed... (While this would mostly be for my games library, the solution would also be used for things like local copies of OneDrive/GoogleDrive storage and backup)

iSCSI doesn't appear to be the ideal solution, since it requires something like an OpenFiler VM to be set up and launched before the Windows VM, and would screw up the Windows VM if the storage VM were ever shut down before the client VMs... Not to mention the one-client-at-a-time nature of iSCSI...

I am currently thinking it will come down to setting up an LVM storage pool and exposing it to the QEMU VM in some fashion, per the setup found here: https://libvirt.org/storage.html#StorageBackendLogical ... Creating a number of maybe 100GB raw files and exposing them to the Windows guest as one giant volume, with QEMU running the LVM side for the VM... But that still limits it to one guest at a time... Plus it sounds like it would not be easy or trivial to adjust storage sizes on such a pool... I am open to suggestions for a better solution; I can't be the only one who has needed a local block device in a Windows VM before, but my Google Fu is coming up short on this issue...
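For reference, here is roughly what that libvirt logical-pool setup looks like... The disk devices, pool/volume names, and sizes below are all placeholders, and I have not tested this on unRaid:

    # Build an LVM volume group on spare disks (hypothetical devices /dev/sdX, /dev/sdY)
    pvcreate /dev/sdX /dev/sdY
    vgcreate games_vg /dev/sdX /dev/sdY

    # Define and start a libvirt "logical" storage pool backed by that volume group
    virsh pool-define-as games_vg logical --source-name games_vg --target /dev/games_vg
    virsh pool-start games_vg
    virsh pool-autostart games_vg

    # Carve out one big volume and hand it to the Windows guest as a raw block device
    virsh vol-create-as games_vg steam_vol 2500G
    virsh attach-disk Win10 /dev/games_vg/steam_vol vdb --persistent --subdriver raw

    # Growing later would be vgextend with another disk, then lvextend on the volume

Note this spans drives outside the unRaid array, so as far as I can tell that data would get no parity protection...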
1812 Posted February 19, 2018

Just give it another .img file, allocated with the storage space you need, set up as another disk, and done.
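Something like this, i.e. one sparse raw image on the array attached as a second virtio disk... The path, size, and domain name "Win10" are just examples:

    # Create a sparse 2.5T raw image somewhere on the user shares
    qemu-img create -f raw /mnt/user/vdisks/games.img 2500G

    # Attach it to the VM as a second disk; Windows will see a blank local
    # disk to initialize and format as NTFS in Disk Management
    virsh attach-disk Win10 /mnt/user/vdisks/games.img vdb --persistent --subdriver raw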
Warrentheo (Author) Posted February 20, 2018

21 hours ago, 1812 said: Just give it another .img file, allocated with the storage space you need, set up as another disk, and done.

Ah, the problem is I need to maintain the ability to split the vdisk across multiple physical drives, but still expose it as a single mount to the VM...
1812 Posted February 20, 2018

10 minutes ago, Warrentheo said: Ah, the problem is I need to maintain the ability to split the vdisk across multiple physical drives, but still expose it as a single mount to the VM...

You could pass through the individual drives to Windows and then use Windows as a RAID manager for the volume (sketch below). You could also use them all in one giant cache pool to store the larger image, or buy a larger single disk. I've asked for multiple cache pools before, which might suit a need like yours; if you think so, add your support here: https://lime-technology.com/forums/topic/54853-double-or-triple-quotcachequot-pools/
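For the pass-through option, it's one disk stanza per drive in the VM's XML... The device ID and target name here are placeholders:

    # Hypothetical example: hand a whole drive to the guest by its stable by-id path
    cat > /tmp/passthru-disk.xml <<'EOF'
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-id/ata-EXAMPLE_SERIAL'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
    EOF
    virsh attach-device Win10 /tmp/passthru-disk.xml --persistent

Windows then sees each drive as local storage, and Disk Management can span or stripe them into one volume. Keep in mind drives handed over like this can't be part of the unRaid array at the same time.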
Warrentheo (Author) Posted February 21, 2018

After much experimentation I have found a buggy but functional solution... Install the ProFTP plugin and get it working, install NetDrive3 for Windows, set it up, and have it add a local drive letter for an FTP connection... This seems to be working without having to convert 2.25 TB of data to a block-device file that can attach to only one system at a time... While it works, it sometimes gives I/O errors for no reason... I probably need to tune things on the FTP server side... Also wish I didn't have to have a translation program for a translation program to get this to work...
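For anyone trying the same thing, these are the proftpd.conf directives I would poke at first, since a drive-letter client like NetDrive holds connections open and resumes transfers mid-file... All values here are guesses, not tested settings:

    # Stop idle control/data connections from being dropped while the drive is mapped
    TimeoutIdle        600
    TimeoutNoTransfer  600
    TimeoutStalled     600

    # A mapped drive overwrites and resumes files in place constantly
    AllowOverwrite        on
    AllowStoreRestart     on
    AllowRetrieveRestart  on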