alfredo_2020

Members
  • Posts: 43

Everything posted by alfredo_2020

  1. YES, WDS250G2X0C-00L350_182687801217 is the PCIe NVMe drive I'm passing through. I'm 99% sure the NVMe drive is NOT mounted when I try to launch the WIN10 VM; I have auto-mount set to NO. I'll double-check that the drive really is unmounted when I launch the VM (see the sketch below) and see if I keep getting this error. When the VM does boot up correctly, the NVMe drive disappears from the Unassigned Devices list. (This is expected behavior, since the host and guest can't control it at the same time.) I'll report back when I get home tonight.
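     A minimal way to double-check from the unRAID terminal right before starting the VM (a sketch; /dev/nvme0n1 is an assumption about how this WD drive enumerates here):

     # The MOUNTPOINT column should be empty for the drive and its partitions
     lsblk -o NAME,MOUNTPOINT /dev/nvme0n1

     # Belt and suspenders: nothing from the drive should appear in the mount table
     grep nvme0n1 /proc/mounts || echo "not mounted"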
  2. <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='2'>
       <name>Windows 10</name>
       <uuid>dea96215-624f-6a42-6120-a80215734cba</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>3</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='2'/>
         <vcpupin vcpu='2' cpuset='3'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/dea96215-624f-6a42-6120-a80215734cba_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='3' threads='1'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:91:aa:07'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 10/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <!-- GTX 970 GPU, with the patched vbios rom -->
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/domains/vbios/NVIDIA_GTX970_4096_140826_modified.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </hostdev>
         <!-- HDMI audio, function 1 of the same card -->
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <!-- boot device (boot order 1) -->
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
           </source>
           <boot order='1'/>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>

     Thank you so much. I've been searching online for days and can't find anything matching my case. There's a lot of help on the forum, so I was able to piece together my solution; I just have this last hurdle.
  3. Hello, I finally got a VM with GPU passthrough working, but now I'm getting a weird error. The first time I launch the VM it doesn't boot: I get the Tiano screen, then it goes to black and the VM shuts down. The second time I run it, it starts up OK. I don't understand what line 2 below means, which is what I get in the VM log when it crashes; when it doesn't crash, the log stops at the first line. What does line 2 mean? If this is not enough information, please let me know if you need my VM XML.
     1. 2019-01-29T01:09:14.165034Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0)
     2. 2019-01-29T01:09:28.356293Z qemu-system-x86_64: terminating on signal 15 from pid 10324 (/usr/sbin/libvirtd)
     3. 2019-01-29 01:09:30.580+0000: shutting down, reason=shutdown
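     For anyone else hitting this: signal 15 is SIGTERM, i.e. libvirtd deliberately stopped the guest rather than QEMU crashing on its own. A couple of commands I can use to dig for the reason (a sketch; the libvirtd.log path is an assumption about this unRAID build):

     # Watch the per-VM QEMU log while reproducing the first-boot failure
     tail -f "/var/log/libvirt/qemu/Windows 10.log"

     # Check libvirtd's own log for whatever made it send SIGTERM
     grep -iE "error|fail" /var/log/libvirt/libvirtd.log | tail -n 20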
  4. This is the trick!!! It worked, I can now send 6 cores to the VM, thank you!!
  5. I'm having the same sporadic problem. It hasn't happened that often, but I'm not sure whether I'm just missing the model-number change, because I keep playing with trying to get more cores added. I use the e1000 device-model value, not vmxnet3; I'm going to try vmxnet3 and see if it helps (the snippet I plan to try is below). Also, how many cores/threads can you pass to this OS X VM? It looks like mine only boots if I have at most 4c/8t selected.
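     For reference, the change I plan to try is swapping the NIC model in the VM XML (a sketch; the MAC is a placeholder and br0 is from my own template):

     <interface type='bridge'>
       <mac address='52:54:00:xx:xx:xx'/>   <!-- placeholder MAC -->
       <source bridge='br0'/>
       <model type='vmxnet3'/>              <!-- was: e1000 -->
     </interface>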
  6. Any progress on this? What is adding that line back after the fact? I thought the XML was read-only at the time of execution. Or does the interface add the line automatically when you save changes to the XML?
  7. Does passing the device to the VM like this allow SSD TRIM to be performed from the guest? (A way to check from inside the guest is sketched below.)
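     A quick check from inside the guest once it's running (a sketch; run whichever matches your guest OS):

     # Windows guest, admin prompt: a result of 0 means TRIM is enabled
     fsutil behavior query DisableDeleteNotify

     # Linux guest: non-zero DISC-GRAN / DISC-MAX means the device accepts discards
     lsblk --discard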
  8. This didn't help my situation; I'm still only getting 600 MB/s. However, I've abandoned this route and am going to build an all-in-one unRAID server + WIN10 VM (occasional games) + daily driver + FCP (macOS Mojave). The Windows VM can write at great speeds to the array cache etc. through the network drive mapped over the virtual NIC, getting speeds in the 600-800 MB/s range even though I connect to the server through a 1GbE mapping. But OS X only gets 150 MB/s write and 100 MB/s read; I think that is a known problem with OS X VMs. I'll keep investigating when I get more time over the winter break.
  9. @pervin_1 So were you able to get TRIM working on the SSD by passing through the entire controller? I want to do this for my M.2 PCIe NVMe drive and run OS X on it, but I want to make sure TRIM will work; if not, I think the drive will slow to a crawl after months of heavy use... (How I plan to locate the controller is sketched below.)
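     Before I commit, this is how I plan to find the NVMe controller's PCI address and confirm it sits in its own IOMMU group (a sketch of the usual approach):

     # Find the controller
     lspci -nn | grep -i 'non-volatile'

     # Map every device to its IOMMU group; the controller should be alone in its group
     for d in /sys/kernel/iommu_groups/*/devices/*; do
       n=${d#*/iommu_groups/}; n=${n%%/*}
       printf 'IOMMU group %s: %s\n' "$n" "$(lspci -nns "${d##*/}")"
     done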
  10. I noticed this during OS X VM installs, before I changed the Clover BIOS display resolution. When Clover first boots, hold F2 to go into the BIOS-like settings, then try to change the resolution to 1920x1080. Not sure if this will help.
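     If the F2 change doesn't stick between boots, the same thing can be set persistently in Clover's config.plist on the EFI partition (a sketch showing only the relevant GUI block):

     <key>GUI</key>
     <dict>
       <key>ScreenResolution</key>
       <string>1920x1080</string>
     </dict>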
  11. I was able to set a jumbo-packet MTU of 9000 on unRAID and the Mac, and it did help a bit: I now get 900 MB/s read and 600 MB/s write. However, I can't bridge my 10GbE to my 1GbE on the server, and I now don't have internet access through the wired connection on my Mac; I have to use Wi-Fi. When I bridge eth0 and eth1 under the eth0 settings, it sets the MTU of both to 1500. I don't know if I can enable 9000 on both, or whether that would even work (a sketch of what I think is needed is below).
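     From what I can tell, a Linux bridge takes the smallest MTU of its member ports, so all three interfaces would have to be raised together (a sketch; interface names assumed, and the 1GbE NIC and switch both have to support jumbo frames):

     ip link set eth0 mtu 9000   # 10GbE NIC
     ip link set eth1 mtu 9000   # 1GbE NIC
     ip link set br0 mtu 9000    # the bridge itself, once both members are at 9000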
  12. I'm having the same problem as this post. Here is my setup and what I'm doing. The only thing I have NOT done yet is increase the MTU size; I will try that tonight.
      unRAID system specs:
      - unRAID 6.4.1
      - 1TB NVMe cache drive
      - ASRock LGA1151 motherboard
      - 8GB DDR4-2400 RAM
      - 3.5GHz Intel quad-core processor
      - ASUS 10GbE PCIe NIC
      I have a Mac mini with PCIe flash storage and 10GbE. Blackmagic shows about 400 MB/s write and 700 MB/s read when I write to my cache-only share. I have turned off my Plex docker to see if that helps, but it doesn't. I am using a 6ft Cat 7 cable. The Mac mini and unRAID are on their own IPs (192.168.10.227 unRAID, 192.168.10.230 Mac); the normal network is on 192.168.1.xxx. I have seen other posts about trying iPerf, building a RAM disk, etc. (my planned iperf test is below), but it sounds like it's not the network but something with unRAID not being fast enough? Any other thoughts on what to try?
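     The iperf test I plan to run, to rule the network in or out before blaming unRAID (a sketch; iperf3 has to be installed on both ends):

     # On the unRAID server:
     iperf3 -s

     # On the Mac mini, aimed at the server's 10GbE IP, four parallel streams:
     iperf3 -c 192.168.10.227 -P 4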
  13. I have only ever used user shares. Is it safe to add a new drive to my array and use it only as a disk share for this purpose? I will go through all my other shares first and make sure they exclude this new disk. I guess my question is: is it safe to have user shares plus one disk share active at the same time on the same unRAID server? (A path illustration is below.)
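     For context, the two kinds of share are just two views of the same disks (paths are hypothetical examples):

     /mnt/user/Media    # user share: files pooled across all included array disks
     /mnt/disk5/Media   # disk share: only that share's files that live on disk 5

     One caveat I've read repeatedly: never copy between a user-share path and a disk-share path of the same data, since unRAID can end up reading and writing the same file and truncating it.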
  14. So, making some progress... I added the following line to the flash syslinux config file, and my Windows 10 VM can shut down and start again with no issues. However, my macOS VM only worked once, even after WIN10 had been opened and shut down; since macOS High Sierra was shut down, I think it's now stuck, because no VM (WIN10 or macOS) will start. The logs don't throw errors, but one core gets stuck running at 100% after the append I added, in this sequence: intel_iommu=on pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1, plus my blacklisted device IDs (the full stanza is sketched below). It helped with the Win10 VM, but not with macOS VMs booting with Clover. Any ideas?
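     For anyone comparing, the boot stanza in /boot/syslinux/syslinux.cfg ends up looking roughly like this (a sketch; my blacklisted device IDs are omitted here, same as above):

     label unRAID OS
       menu default
       kernel /bzimage
       append intel_iommu=on pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot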
  15. So I implemented this suggestion, and now I can pass through the GPU and a USB port to my VMs (macOS and WIN10), but when I shut down a VM the GPU stays initialized, or locked, and if I try to launch the VM again it hangs. If I reboot unRAID the issue goes away, until I shut down the VM and try another one. Any thoughts?
  16. Any progress on this problem? I have the same issue with both the Windows 10 VM and the macOS VM: I have to reboot unRAID to release the GPU so a VM can use it again. Running an Intel i7 8700K, ASUS Z370-H, and a Radeon Vega Frontier on the latest unRAID as of 3/12/2018. I have blacklisted all AMD devices (qty 4) from unRAID. (The reboot-free workarounds I've seen suggested are sketched below.)
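     A sketch of those workarounds (0000:03:00.0 is a placeholder for the Vega's PCI address, and Vega's known reset quirk may defeat all of them):

     # Hand the device back to the host after the VM exits
     virsh nodedev-reattach pci_0000_03_00_0

     # Or remove the device and rescan the PCI bus to force a re-init
     echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove
     echo 1 > /sys/bus/pci/rescan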