billington.mark

Everything posted by billington.mark

  1. "I do not want an AMD system, in my opinion Intel is better." Your opinion is correct.
  2. I've always wondered what the correct way to pass through hyperthreaded cores to VMs is... not sure the GUI takes this into account, as each thread is reported as a core to unRAID (that's what the GUI suggests, anyway). A rough pinning sketch is below.
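     A minimal sketch of what I mean, assuming a 4-core/8-thread CPU where the sibling threads pair up as 1+5, 2+6, 3+7 (check your actual pairing with "cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list"; yours may differ). Pinning both threads of a physical core to a pair of vCPUs, with a matching <topology>, keeps the guest's view of hyperthreading honest:

       <vcpu placement='static'>4</vcpu>
       <cputune>
         <!-- each pair of vCPUs maps to the two threads of one physical core -->
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='5'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='6'/>
       </cputune>
       <cpu mode='host-passthrough'>
         <topology sockets='1' cores='2' threads='2'/>
       </cpu>

     Leaving core 0 and its sibling for unRAID itself is just my habit, not a requirement.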
  3. The disk format (raw/qcow2) doesn't dictate the driver needed; the controller you specify in your XML/VM config does. Post the disk part of your XML and we'll be able to say which driver you need. If it's "bus='virtio'", you need the viostor driver. If it's "bus='scsi'" (pretty sure the GUI doesn't do SCSI disks anyway), you need the vioscsi (virtio-scsi) driver; the two definitions are sketched below for comparison. Also, make sure you use a virtio driver ISO that's digitally signed. 0.1.109-2 is what I use and DOES work on Windows 8 and Windows 10: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.109-2/virtio-win-0.1.109.iso
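     For reference, a rough sketch of the two disk definitions (the paths and target names here are just placeholders, not from anyone's actual XML):

       <!-- virtio-blk: guest needs the viostor driver -->
       <disk type='file' device='disk'>
         <driver name='qemu' type='qcow2' cache='writeback'/>
         <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
         <target dev='vda' bus='virtio'/>
       </disk>

       <!-- virtio-scsi: guest needs the vioscsi driver, plus a scsi controller -->
       <disk type='file' device='disk'>
         <driver name='qemu' type='qcow2' cache='writeback'/>
         <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
         <target dev='sda' bus='scsi'/>
       </disk>
       <controller type='scsi' index='0' model='virtio-scsi'/>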
  4. If it doesn't need to be anything special, Asus offer free dynamic DNS with their routers. You get an x.asuscomm.com domain.
  5. I have 2x graphics cards, each passed through to a VM (Windows 10 and OpenELEC), no onboard graphics. One of the graphics cards gets the console on boot, then my W10 VM steals it when that starts up. Even if you shut down the VM, the console doesn't return; you can still access the server through SSH, though, and the webGUI is always available. (I use SeaBIOS on my VMs; not sure this is the case with OVMF.) There have been whispers of a hotkey to switch between passed-through graphics and the unRAID console being a feature in 6.2, though. KVM is still pretty new to unRAID, so there's plenty of time for it to mature over the next few versions.
  6. For Windows, the latest you can use is v0.1.109-2. Versions since then are not digitally signed and won't install during Windows setup. You can update to the latest after installation, but you'll need to read up on how to install non-digitally-signed drivers on 64-bit Windows (rough sketch below). Also, to be honest, performance from v0.1.96-1 onwards isn't any different to the latest, so I wouldn't worry about not having the latest drivers installed; performance improvements will come from updated versions of the hypervisor anyway. Changelog is here: https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG
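     If you do want to load an unsigned build after install, one route (an assumption on my part; I've stuck with the signed ISO myself) is to enable test-signing mode from an elevated command prompt inside the guest and reboot:

       bcdedit /set testsigning on
       shutdown /r /t 0

     Turn it back off with "bcdedit /set testsigning off" when you're done, since Windows watermarks the desktop while it's enabled.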
  7. Personally, I'd trust GParted more than Windows to resize the partition prior to resizing the qcow file. GParted will move any data that needs to move as part of the partition resize operation; no need to do any defrag etc. inside Windows.
  8. Not done it myself, but I'd imagine the steps you'd need to take would be:
       • BACKUP YOUR QCOW IMAGE FILE
       • Boot your existing Windows VM from a GParted ISO and resize the partition. To add the ISO:
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/Software/GParted.iso'/>
           <backingStore/>
           <target dev='hda' bus='usb'/>
           <readonly/>
           <alias name='usb-disk0'/>
         </disk>
       • Turn off the VM
       • Resize the qcow image from the unRAID command line: qemu-img resize /path/to/img.qcow -10GB
       • Boot the VM
     I'd be careful to ensure that the partitions inside the qcow file are smaller than what you are resizing the file to. Give yourself a GB or so of wiggle room, then just expand the partition once you've resized. Not done any of the above before, but if you take a backup, you can revert if things go pear-shaped.
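     Before and after the resize it's worth sanity-checking the image (a sketch; adjust the path to your own image file):

       qemu-img info /path/to/img.qcow
       # reports "virtual size" (what the guest sees) and "disk size" (space used on the host)

     The partition total inside the guest needs to stay below the new virtual size, otherwise the shrink will eat into data.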
  9. Reading up, I don't think it's possible. The only way I can see this working is a bootloader on a normal VM disk that can then boot off a passed-through controller. I've not done any testing to verify, but if I went down this route again, that's the approach I'd take.
  10. Sound-wise, yes, things are working fine... I have other gripes at this point, but hopefully things mature in the next version. Hopefully someone else passing through a PCI sound card can point you in the right direction. Would the MSI fix not apply in your situation? http://lime-technology.com/wiki/index.php/UnRAID_6/VM_Guest_Support#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support
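      For reference, the fix in that link boils down to a registry change inside the guest: under your audio device's key in HKLM\System\CurrentControlSet\Enum\PCI\, add a DWORD named MSISupported set to 1 under "Device Parameters\Interrupt Management\MessageSignaledInterruptProperties", then reboot the VM. Most people do it in regedit, but as a rough sketch from an elevated prompt (the <device-id>\<instance> part is a placeholder you have to look up for your own card):

        reg add "HKLM\System\CurrentControlSet\Enum\PCI\<device-id>\<instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f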
  11. Welcome to my club! http://lime-technology.com/forum/index.php?topic=43931.0
  12. Could you share your AS SSD results after you moved your share back to cache? I'm having SSD performance issues inside VMs and it would be nice to compare performance figures.
  13. This is pretty much the case for any Dell PERC Card being used on a non-Dell motherboard.
  14. I had the exact same issue with an H310 SAS controller. I couldn't get Windows to see the SSD on the controller, but I suspect that was a driver issue. I did, however, get through an Ubuntu installation, but couldn't select the SSD as a boot device once the installation finished. I gave up soon after and reverted back to a working setup. Not the reply you were hoping for, but if you do get any helpful responses, I'll be using the info for my own setup and will report back with any results.
  15. Personally, I've never been able to get OVMF VMs to run properly, so I've stuck with SeaBIOS for the time being. No difference in performance as far as I'm aware. It also depends on whether the device being passed through is UEFI-friendly, which might not be the case with the sound card. Apparently some updates in 6.2 have made OVMF a little friendlier, though, so maybe hang fire until then? In the meantime, if you want to get things up and running, I'd suggest creating the VM using SeaBIOS; however, it will more than likely require an OS reinstall.
  16. Post your PCI device list and IOMMU group list (Tools > System Devices). Also, does the Info panel (top right of the GUI) show that HVM and IOMMU are enabled?
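      If you'd rather pull the IOMMU groups from the command line (same info as the GUI page; just a generic sketch):

        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"   # print each device in the group with its IDs
          done
        done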
  17. Try and pass through the sound card's PCIe bridge too, as that's in the same IOMMU group:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
      If that doesn't work, take out the PCIe bridge above and look into enabling the ACS override (Settings > VM Manager > PCIe ACS Override: Enable). It should pass through fine after that.
  18. I have the exact same sound card, so I feel obliged to help! Add this between <devices> and </devices>; add it directly after a </hostdev> line so you don't break any other virtual devices:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
          </source>
        </hostdev>
      Be aware that because this has been added in the XML editor, if you ever change anything on this VM in the GUI editor, this bit will be stripped out and you'll need to add it again. For future reference, you can do this with other devices too (e.g. a USB3 card); just change the bus/slot/function to match the device code in your PCI device list. Your sound card is 04:02.0, so that translates to bus='0x04' slot='0x02' function='0x0' in the VM's <address> line.
  19. Had chance to experiment again over the last few days. TL;DR is that I've not managed to make any improvements. I played around with the virtio-blk controller and tested the x-data-plane settings. Zero change:
        <qemu:arg value='-set'/>
        <qemu:arg value='device.virtio-disk0.scsi=off'/>
        <qemu:arg value='-set'/>
        <qemu:arg value='device.virtio-disk0.config-wce=off'/>
        <qemu:arg value='-set'/>
        <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
      I also tested some extra bits on the virtio-scsi controller to increase the number of queues, but again, something is stripping off the extra parameters I've added:
        <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/>
      (num_queues='8' gets removed before the VM boots.) Very frustrating, as I'd much prefer an error stating "no, you can't do that, you idiot" rather than something silently removing bits it feels aren't necessary! Even manually editing the XML in '/etc/libvirt/qemu' removes any extra bits I add. Would be nice if I could turn off that 'feature', as it's quite annoying. With the libvirt update coming in 6.2, I'm hoping the iothread (previously x-data-plane) options can be added to the blk controller and won't get stripped out (can you tell I'm bitter??). Other than that, I think the ultimate performance fix would be to find a cheap-ish 2- or 4-port SATA3/SAS controller that can achieve full 6Gb/s throughput and is compatible as a boot device inside SeaBIOS or OVMF. Does anyone know of such a card?? JonP, are you able to expand on what updates are included on the VM side in 6.2? Just a libvirt update, or are qemu, SeaBIOS and OVMF updated as well? (Not sure if they all come hand in hand or not.)
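      For anyone copying the qemu:arg lines above: they only survive if the qemu namespace is declared on the domain element and the args sit inside a qemu:commandline block, roughly like this (a sketch, not my exact XML):

        <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
          ...
          <qemu:commandline>
            <qemu:arg value='-set'/>
            <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
          </qemu:commandline>
        </domain>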
  20. You can get add-on cards with the headers for front panel connectors... To save headaches, that's definitely the route I would take!