Everything posted by Warrentheo

  1. https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG
     * Mon Feb 04 2019 Cole Robinson <[email protected]> - 0.1.164-1
     - Update to virtio-win-prewhql-0.1-164
     - Update to qemu-ga-win-100.0.0.0-3.el7ev
     - Add win10 arm64 experimental drivers
     The driver is still considered "Latest", since the "Stable" branch is still on 0.1.141-1:
     https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
  2. While I do get the impression that you know what you are doing for the most part, a couple of things pop out of that last statement... They do indeed make USB-C cables that only have enough wires to support USB 2.0 speeds, but if you have a USB 3.x port connected to a proper USB 3.x cable, the connector shape becomes irrelevant... USB 3.x ports are fully backwards compatible, so the whole USB 2.0 issue also becomes irrelevant... In the early days of USB 3.0 I did see some USB 2.0 devices with issues, but those came down to the crappy drivers on the original cards, and were basically a hardware or driver problem with those specific ports... I know this will sound like a put-down, like recommending a "For Dummies" book to someone, but I swear this really helped me figure this stuff out, since the bad naming the USB spec has gone through over the years makes it more complicated than it should be; I recommend reading the Wikipedia page on the USB spec here: https://en.wikipedia.org/wiki/USB Also, this is a good read on the kinds of problems you can run into: https://gizmodo.com/a-google-engineer-is-publicly-shaming-crappy-usb-c-cabl-1742719818 Beyond those it will just be trial and error to nail down exactly where the issue is; sometimes it is something wonky like one particular cable, but only when using a specific driver, or something equally mysterious... But I think you have a good start on figuring out your particular setup issues...
  3. That does indeed install the driver in Windows, but you also need those drivers available outside of Windows, before the boot process even knows how to reach the disk... I have only gotten that to work with a Startup Repair... It is the reason the VirtIO driver disc has a floppy image in the root of the ISO (\virtio-win-0.1.160_amd64.vfd)... Back in the days of Windows XP, that was the only way to inject boot drivers; at least Windows 10 lets you do it without digging out and plugging in your floppy drive...
  4. Some motherboards have multiple USB controllers, for instance mine has:
     IOMMU group 3:  [8086:a2af] 00:14.0 USB controller: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller
     IOMMU group 15: [1b21:2142] 05:00.0 USB controller: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller
     IOMMU group 16: [1b21:2142] 06:00.0 USB controller: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller
     And while for me none of them can be passed through due to issues with my motherboard choice, yours may have multiple options, and one may be pass-through-able... Experiment with booting Unraid on a different controller, and passing through the full controller known to be working... Beyond that, I recommend getting a USB controller card and passing it through, or replacing the DAC with one that works on the ports you have...
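     If you want to see how your own controllers are grouped, a rough one-liner like this (just a sketch; it walks the standard sysfs IOMMU layout and labels each device with lspci) will list everything by group:
     for d in /sys/kernel/iommu_groups/*/devices/*; do
         g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}        # IOMMU group number
         echo "IOMMU group $g: $(lspci -nns ${d##*/})"       # PCI IDs plus device description
     done | sort -V
     Look for the USB controllers in the output and check whether they sit in a group of their own.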
  5. This is the same as moving the boot drive from one computer to another, and it is really not an Unraid issue but a "Windows Is Stupid About Drivers" issue... It can be done, but the ideal solution is to use the correct drivers during the Windows install... I assume that is not an option here though... If changing an existing install, make sure to back up the existing VM image file and XML file first... An overview of what is needed:
     1. Boot the Windows ISO with the VM image file in question attached...
     2. Tell it how to talk to the VM's image file by loading either the SCSI driver (vioscsi/win10/amd64/) or the SATA/VirtIO driver (viostor/win10/amd64/) from the VirtIO drivers disc... For some reason this can only be done by getting most of the way through the normal install process: when it gets to the screen asking which drive to install to, the list will be blank... That screen is the only one with a "Load Driver" option hidden in the bottom left corner... While this was intended just for storage drivers, you can actually install nearly any driver from here, as long as you load the storage driver last... You will know it worked when you can see the image file's partitions and Windows offers to install to one of them... Now this boot of the Windows ISO knows how to talk to your VM's image file... (p.s. I prefer the SCSI driver, since it is the only one that lets the Windows TRIM / Linux discard command make it all the way down to a host SSD)
     3. Click the "Red X" on the install, and tell it yes, you want to cancel...
     4. Go back and use the normal "Repair your computer" option to do a "Startup Repair"... The process looks like this: boot off the latest Windows 10 install ISO, click through the options like a normal install (the choices don't matter since we are not actually installing), make sure to pick a "Custom" install, click "Load Drivers" on the drive-selection screen, load your driver, and click Next... Once Windows can see the drive 👍, kill the setup with the "Red X" 😣, choose "Repair your computer", then "Troubleshoot", then "Startup Repair"... This rebuilds the Windows BCD files, hopefully that works for you...
     Unfortunately I have had a bunch of issues with this kind of thing over the years; it only works for me about 70% of the time... As I said above, the preferred method is to just set it correctly during the Windows install... Beyond that, you have to do your own research into all the other ways and reasons Windows might not like you...
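     For the backups mentioned above, something this simple is enough (paths and the VM name "Windows 10" are just examples, adjust to your shares):
     # copy the vdisk while preserving sparseness
     cp --sparse=always "/mnt/user/domains/Windows 10/vdisk1.img" /mnt/user/backups/vdisk1.img.bak
     # save the current libvirt definition of the VM
     virsh dumpxml "Windows 10" > /mnt/user/backups/Windows10.xml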
  6. I also added pci=noaer to mine, but I don't know if that is normal for most people... I thought it was mostly my motherboard acting up... For the record, you should not do this if there is any way to avoid it; it is like fixing the wrench light in your car by cutting the wires to the light... Not the ideal solution, and I am still trying to fix the root issue that causes this for me... That said, I have been running with it on for several months now with no issue...
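     For reference, on Unraid that just means adding it to the append line of the flash drive's syslinux config (click the flash device on the Main tab); a minimal sketch assuming the stock boot entry:
     label Unraid OS
       menu default
       kernel /bzimage
       append pci=noaer initrd=/bzroot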
  7. I use a GTX 1070 in a Win10 Pro x64 VM, and I am not sure the driver-hacking option is really needed... I just used the SpaceInvader command to dump the ROM off my card, set it in the VM's XML file, and have had only minor issues since then (when installing drivers through GeForce Experience, halfway through the install the screen does some scary looking "snow" effects that make it nearly unreadable, but it works fine after the install completes and I click close and reboot the VM...)
     First, make sure the card works just fine when installed bare metal (outside of Unraid)... If that is not the issue, make sure you are booting with UEFI (disable the "Compatibility Support Module" or CSM in your BIOS to make sure the old boot modes are disabled)... Then set Unraid to bind the card to the VFIO-PCI driver at boot: append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot (make sure to change this to all the IDs on your card, including the audio device)... This prevents Unraid from trying to steal the card for itself during boot...
     Make sure the VM is set to use the OVMF BIOS, not the older SeaBIOS... That enables UEFI boot in the VM as well, and GPU passthrough usually needs UEFI boot on both host and VM to work correctly... Make sure you are using the Q35 machine type (Unraid defaults to i440fx for some reason, which is the chipset from the ancient Pentium 1/2 era that never heard of a PCIe slot...)
     Use a good ROM for the card (preferably one dumped from the card itself)... It is possible to download one from somewhere else, but there could be a version mismatch between the download and what is actually on the card, causing problems... There are also other possible issues with downloaded ROMs (files twice as big as they should be, filled with garbage, and so on)...
     I don't think driver manipulation should really be needed; if you are getting to the point where you need it, it means you weren't successful in hiding the VM from the GeForce driver installer, and you should recheck the steps above... Once you get into Windows itself and the drivers install with no error, there is a tool to help with the "message signaled interrupts" or MSI setting for the card, which is a much faster way of talking to the card than having to put everything through the CPU... The attached tool helps with audio issues and other things on the card, and needs to be re-run after every driver install or upgrade... MSI_util.exe
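     For reference, once the ROM is dumped, pointing the VM at it is just a <rom file=.../> line inside the card's <hostdev> block in the XML; a rough sketch (the PCI address and the path here are placeholders, match them to your own card and share layout):
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/gtx1070-dump.rom'/>
     </hostdev>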
  8. While you can manually sparse-ify these files with this command:
     fallocate -d <filenamegoeshere>
     which lets you "dig" all the zeros out of a file... A better option is to change the driver KVM/Qemu uses to the SCSI driver, then set the "Discard" option for the drive manually in the VM XML like this:
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' discard='unmap'/>
       <source file='/mnt/user/domains/<imagefilenamegoeshere>'/>
       <target dev='sda' bus='scsi'/>
     </disk>
     This lets the VM's TRIM commands get passed to the host, and therefore make it all the way down to the real drive... It also lets KVM/Qemu automatically manage the image file's sparseness for you... Obviously this is much more useful if you are storing it on an SSD, since constantly changing the VM's image file size would murder speed on an HDD... If you are running on an HDD, it is recommended that you leave these files fully allocated so that things like defrag continue to work correctly... Edit: changing the driver that Windows uses to talk to the boot drive can be problematic, and is best set during the Windows install... It is possible to change it afterward, but make sure to back up both the image file and the VM XML before you try it...
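     A quick way to check whether the sparseness is actually working: compare the file's apparent size with what it really occupies on the host, for example:
     du -h --apparent-size /mnt/user/domains/<imagefilenamegoeshere>   # the size the VM sees
     du -h /mnt/user/domains/<imagefilenamegoeshere>                   # blocks actually allocated on the host
     On a working setup the second number should be noticeably smaller than the first.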
  9. The easiest way to solve this for sure is to pass through a full USB controller card... That solves all kinds of issues with weird devices not being fully passed through... Some motherboard USB controllers can be passed through, but none of the 3 on my Asus board could be, so your mileage may vary...
  10. Have you watched the video by SpaceInvaderOne on nVidia passthrough?
  11. If the device is known to pass through successfully with another VM, and you know the new VM has the same passthrough settings, then it becomes a config issue for the new VM... The only thing I can say is make sure it is set up the same way as the one known to be working... What I can say is that a lot of motherboards have minor issues with passing through the onboard audio... And while it is possible, I just passed through a USB card and use a USB sound card with it... That avoids most of the minor issues with trying to use the onboard audio... For me that was easy, since I like aftermarket headphones anyway, and yours may work just fine with some tweaking... Up to you...
  12. If you are copying over the VM image file, it should just be like any other file... Move it over, then edit the VM settings either with the WebGUI or the Advanced View XML version... Just point it to the new location...
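     Roughly like this, with example paths and a VM named "Windows 10" (adjust both to your setup):
     # move the image to its new location
     mv "/mnt/user/domains/Windows 10/vdisk1.img" "/mnt/cache/domains/Windows 10/vdisk1.img"
     # then in the VM's XML (Advanced View), point the disk at the new file:
     #   <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>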
  13. I don't see any smoking gun on my first look, but some things to try just to see:
     Try setting the VM to OVMF with the Q35 machine type... The default i440fx doesn't know about PCIe slots, and while it can translate everything down to PCI, it is not ideal... OVMF is almost required for video card passthrough...
     Try making sure "Compatibility Support Module" or CSM is disabled in your motherboard BIOS... This ensures Unraid boots in UEFI mode and removes legacy BIOS issues as a possibility...
     Make sure that AMD-Vi (not just AMD-V) is also enabled in your BIOS... Most likely it is, but double check...
     Try cleaning up your syslinux config; you have several devices stubbed that may be in the same IOMMU group as motherboard-critical devices... Try stubbing only the devices you absolutely need to; for instance, I don't VFIO my passed-through USB controller, since all I have plugged into it is keyboards and such, no storage... This has the benefit of letting me use those devices on the host when the VM is not booted... On my Asus board, my sound card is linked to SMBus controllers, and so causes problems when I try to pass it through...
     See if removing all the manual edits and creating a VM with just VNC works... If so, slowly add stuff back in until you find the one giving you trouble...
     I give a 50/50 shot that one of those helps you... Beyond that, you will probably need to know stuff about AMD boards that I don't know myself...
  14. need diagnostics file and VM XML files...
  15. There is a solution to your i440FX vs Q35 issue, as well as your not wanting to make manual edits to your VM's XML... Just create a "New" VM but point all its storage back to the current image files... This is the VM equivalent of moving the hard drive to a new system... There are two ways to do this in Unraid, and they both have their uses:
     Version 1: Create a "New" VM; this will generate a new fake BIOS entry for that VM and disconnect several other things from the previous VM... You can then re-set up everything with a different machine type or whatever settings you wish... You can still use the existing VM's image files, just make sure that if you delete the entry for the original VM you don't accidentally delete the image files as well... If you do this, I recommend taking manual control of the image files' storage location, as well as backing up the image files just in case...
     Version 2: Again, create a "New" VM, but just manually copy all the data from the original VM over as a manual edit of the new VM... You can then make minor changes, like having 2 or 3 VM entries that are nearly identical but have different RAM settings, allowing you to pick your RAM by just picking the VM entry that suits you...
     When moving from i440fx to Q35, make sure to do a backup... This is the same as moving the hard drive in a computer to a new computer... It will take forever to boot the first time, since every single device driver on the system will need to be reinstalled... My own experience has shown this not to be a major issue, but that is not an excuse to skip the backups...
     All this adds up to it not being much of an issue for me to do manual edits to the XML... Unraid has gotten much better at not throwing away all your edits when making a change in current versions... I manually edit nearly all my VMs with the following info (again, edit as needed for your system):
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' discard='unmap'/>
       <source file='/mnt/user/domains/....'/>
       <target dev='sda' bus='scsi'/>
     </disk>
     <cputune>
       ...
       <emulatorpin cpuset='0'/>
     </cputune>
     <cpu mode='host-passthrough' check='none'>
       ...
     </cpu>
     <os>
       ...
       <smbios mode='host'/>
     </os>
  16. @bastl I did something similar, but created a second raw storage image file on the cache, set a Steam library location there, and then just use the Steam game-move feature to move a game onto it when it becomes something I play often, archiving it back to the Unraid host when I don't... I found this to be by far the fastest for loading times, but it does mean that only one VM can have access to the data without a bunch of move operations... There are also some games that hate being run off of a UNC network share (\\servername\...), which Steam does allow you to do... This setup resolves that issue as well...
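     If anyone wants to copy the setup, creating the second image is roughly this (size and path are just examples):
     # create a sparse 250G raw image on the cache for the Steam library
     qemu-img create -f raw "/mnt/cache/domains/Windows 10/steam.img" 250G
     Then add it as a second vdisk in the VM settings (or a second <disk> block in the XML) and format it inside Windows.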
  17. Host Storage: 2 x NVMe Samsung 960 EVO in Unraid Raid-0 with btrfs, set as the cache drive, without a lot of Dockers or other services that use the drive much
     VM Storage Settings: Raw image file, set to use the SCSI driver in the VM, with the Discard="UnMap" option set (TRIM goes all the way from the VM to the real drives...)
     -----------------------------------------------------------------------
     CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
     Crystal Dew World : https://crystalmark.info/
     -----------------------------------------------------------------------
     * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
     * KB = 1000 bytes, KiB = 1024 bytes
     Sequential Read  (Q= 32,T= 1) : 3878.403 MB/s
     Sequential Write (Q= 32,T= 1) : 2775.437 MB/s
     Random Read 4KiB  (Q= 32,T= 1) : 199.059 MB/s [ 48598.4 IOPS]
     Random Write 4KiB (Q= 32,T= 1) : 150.607 MB/s [ 36769.3 IOPS]
     Random Read 4KiB  (Q= 1,T= 1) : 50.087 MB/s [ 12228.3 IOPS]
     Random Write 4KiB (Q= 1,T= 1) : 40.228 MB/s [ 9821.3 IOPS]
     Test : 1024 MiB [C: 52.0% (156.0/299.7 GiB)] (x5) [Interval=5 sec]
     Date : 2019/02/03 7:23:40
     OS : Windows 10 Professional [10.0 Build 17763] (x64)
     For reference, the official specs for a single drive running bare metal on a good system:
     Sequential Read Speed: Max 3,200 MB/sec
     Sequential Write Speed (500 GB): Up to 1,800 MB/sec
     Random Read (4KB, QD32, 500 GB): Up to 330,000 IOPS (Thread 4)
     Random Write (4KB, QD32, 500 GB): Up to 330,000 IOPS (Thread 4)
     Random Read (4KB, QD1): Up to 14,000 IOPS (Thread 1)
     So definitely not a straight-up 2x performance boost from the Raid-0... Some performance is given up on my system due to having to request storage from the host on every write, and release that storage back to the host on every delete... Originally I had several issues with the drive filling up and killing the performance of my VMs; this was all eliminated by switching to the SCSI driver with Discard (TRIM) enabled, since that made the storage self-regulating... Most of the real limiting factor is trying to max out two sets of 4x PCIe lanes, one per drive, through the PCH and then to the CPU... That saturates the connection between the PCH and the CPU on my system, and so is the real limiting factor...
  18. The Q35 type should have nothing to do with video limitations unless you are using a virtualized (fake) graphics card or VNC... I pass through an nVidia GeForce 1070 and have a dual G-Sync monitor setup connected to it, running at 144Hz...
  19. What I meant by that was no bare-metal drive passthrough; run off of raw img files (again, QCOW2 kills performance, avoid it like the plague)... There really is no reason to run bare metal, since leaving the drive as Unraid cache usually lets everything on the host share its read/write performance and be much faster... There will be times when this is less than ideal, like if you are running something along the lines of a 300-thread BitTorrent share off the same drive, or something else that constantly thrashes it... That will kill the VM with micro-stutters and other issues... And because I don't pass the drive through, Docker containers can share it just fine as well... It all really depends on the situation, but if you don't have any one thing that wants to max out the drive, it is best to share its performance with as many things as possible... You can have hundreds of bad blocking file ops on an SSD before you start competing with the time you have to wait for one normal read on a platter-based setup... Hundreds of thousands if you have good NVMe drives with command queuing in Raid-0... Spread the love around as best you can...
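     If you already have a QCOW2 image, converting it to a raw one is a single command (file names here are just examples, and back up first):
     qemu-img convert -p -O raw /mnt/user/domains/Win10/vdisk1.qcow2 /mnt/user/domains/Win10/vdisk1.img
     Then switch the vdisk type in the VM settings from qcow2 to raw before booting it.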
  20. We probably need to split these into their own threads; they are unlikely to be related, and we don't need to be reviving a zombie thread from 2 years ago... Post new ones with your diagnostics files and XML files so we can take a peek...
  21. I have a 7700K, 64GB, 2 x NVMe in a Raid-0 cache drive, and a GTX 1070 passed through to Win10... I run the VMs off the Unraid cache...
     Make sure the VM is set to Q35-3.1.0 on Unraid 6.7.0-rc2, or Q35-3.0.0 if you want Unraid 6.6.6... I had several issues when using the Unraid default of i440fx, some performance related, some compatibility issues... This is because the i440fx was an ancient chipset for the old Pentium 1/2 that only knew about old PCI and AGP slots, so it has to translate everything PCIe into PCI, with other translation overhead on top... The Q35 chipset actually knows about PCIe, and so skips all this... Not as major as that sounds, it is still fairly fast, but it is noticeable, and as I said, it had compatibility issues for me, especially with UWP apps...
     Also minor, but I set my VM "drives" to use the SCSI driver; this allows a "Discard" setting, which is the Linux version of SSD TRIM...
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' discard='unmap'/>
       <source file='/mnt/user/domains/....................
     ........
     This is a minor speed-up, and on some file operations it actually slows things down, but it allows Windows to pass the TRIM commands all the way down to the actual SSD you are running on... This also shrinks VM image files, which for me actually speeds things up quite a bit... Full SSDs start slowing down a lot, so this is mainly a slowdown-prevention measure... Edit: Also avoid QCOW2 at all costs...
     Probably the main thing is knowing how your processor works, and making sure you pass CPU cores in a way that makes sense for your CPU... Let Unraid, as well as the overhead from running KVM/Qemu, run on its own core (core 0)... Don't do something silly like passing through only the hyper-threading cores while all the main cores go to a different VM, or something else like that... Edit: I don't have one, but I understand this is especially important with current AMD chips, because they also have NUMA nodes to get right...
     Some other manual edits I made to mine:
     <cputune>
       ...
       <emulatorpin cpuset='0'/>
     </cputune>
     <cpu mode='host-passthrough' check='none'>
       ...
     </cpu>
     <os>
       ...
       <smbios mode='host'/>
     </os>
     These make sure KVM/Qemu shares the primary core with the rest of Unraid, tell Windows what your CPU can actually do (potentially some good improvements here), and pass along the info from your motherboard BIOS rather than the fake info from KVM/Qemu... That is where I am at with my personal training on the subject...
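     As an example of what I mean by pinning cores in a way that matches the CPU: on a 4-core/8-thread chip like the 7700K, Linux usually pairs core 1 with thread 5, core 2 with 6, and so on (check lscpu -e to confirm the pairing on your system), so a sketch of a pinning block could look like this, leaving core 0 and its sibling for Unraid and the emulator:
     <vcpu placement='static'>6</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='6'/>
       <vcpupin vcpu='4' cpuset='3'/>
       <vcpupin vcpu='5' cpuset='7'/>
       <emulatorpin cpuset='0'/>
     </cputune>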
  22. The DHCP timeout setting is sent from your router; it has very little to do with the VM...
  23. I don't have an AMD card, but the way I understand it, nVidia started encrypting the connection between their BIOS and their drivers, so theirs are the only cards that need the workaround of telling KVM about the video BIOS... Are you sure you even need a BIOS file at all?
  24. I suspect that if it is not the issue itself, it is at least a symptom of it... I noticed that your system has NVMe drives, which I also have... My current best guess for these errors is something between the Linux kernel and the BTRFS drivers when introduced to NVMe drives... But that is guesswork from someone who barely knows what he is doing...