1812

Members
  • Posts

    2625
  • Joined

  • Last visited

  • Days Won

    16

Everything posted by 1812

  1. I think that would make some of my VMs a bit wonky (or so I've been reading). I'll just deal with it until the next stable version.... maybe.....
  2. 6.1.9. Reinstalled with a new download a few days ago; this discrepancy was present before the reinstall. When unRaid was first installed, the CPUs in the GUI matched what shows when querying over SSH, but somewhere along the way it changed.
  3. So the unRaid GUI shows one set of numbering/pairing for physical/HT cores, and viewing it via SSH shows another.... This is on a dual-processor machine, and the discrepancy is actually present on all four machines I have of the same model.
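A quick way to see the kernel's own core/HT pairing, to compare against whichever numbering the unRaid GUI shows (a sketch; `lscpu` ships with util-linux, and exact column support varies by version):

```shell
# List each logical CPU with the physical core and socket it belongs to;
# hyperthread siblings share the same CORE id.
lscpu --extended=CPU,CORE,SOCKET

# The same pairing straight from sysfs, one line per logical CPU:
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
```

Whichever of the two the GUI disagrees with, the sysfs view is what the kernel actually uses for pinning.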
  4. 1812

    GPU oddities

    Doesn't seem to affect anything whether enabled or disabled.
  5. does it show up in the bios of the computer?
  6. 1812

    GPU oddities

    Tried the vBIOS; no change. The Windows 10 VM gives me a Geekbench 4 GPU score of 45,500. Higher than the Mac, but I don't know what that means in comparison to where it should be, or if that is where it should be.
  7. 1812

    GPU oddities

    I have not, but will work on it. Also, after updating the Nvidia drivers in the Windows VM, Cinebench refuses to run, saying there is no OpenGL video card on the system... so perhaps an Nvidia bug in both platforms' drivers with Cinebench.
  8. Weird GPU issue. It appears that at times the GPU isn't running "full" in the VMs.

    Card: EVGA GTX 760 SC
    Computer: HP DL380 G6, dual-processor Xeon E5520f, 24 GB of RAM (72 available), VMs on the cache drive.

    When benchmarking, I get the following: Cinebench shows between 22-28 fps on the Mac and the same on the Windows 10 VM. When I removed the card and put it in a bare-metal machine with an i5-6600 and 8 GB of RAM, it scored about 125 fps. For the longest time I thought the drivers weren't loading correctly in the virtual machines. On the Mac, using OpenGL Extensions Viewer, I was able to verify the Nvidia drivers are loaded (contrary to what the Nvidia control panel says). The basic OpenGL cube test averages 420 fps at 720p and 350 fps at 1080p on a single cube. Multiple cubes is obviously less, but never below 60 fps. So it appears to be using the card to its fullest (or at least a greater) extent, versus Cinebench not being able to do so. The Geekbench 4 API score (Mac VM) for the card shows 39188 (is this good? I searched and couldn't tell).

    On the Windows VM, while watching a Blu-ray MKV or HD YouTube, a single core (of a multi-core VM) maxes out and causes video glitches. This was when running the machine as i440fx-2.3. After I switched to Q35-2.3 (same as the Mac VM), it seemed to distribute the load across the cores and it didn't happen anymore.

    On the Mac and Windows VMs, the PCI interface on this card shows as x0. I'm sure this is a glitch? In unRaid, using lspci -vv, it shows the card connected at x8, which is correct for the slot it is in.

    Other things I've done: isolating CPUs, using only physical cores, and changing the number of cores; none of it varies the Cinebench score by more than +/- 5%. I looked around online and some people speculate that on the Mac VM, power management limits the GPU based on the Mac model number (3,1... 14,2, etc.), so I modified my SMBIOS using Chameleon Wizard and tried several different versions, but nothing changed the Cinebench score.

    So after reading all that, you're thinking "what's the problem?" Well, my concern is that Cinebench can't access the card fully using OpenGL, and I'm trying to figure out why. And secondly, what other problems could this be an underlying symptom of? woo hoo.
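On the x0-vs-x8 question, the negotiated PCIe link width can be checked by comparing `LnkCap` (what the card supports) against `LnkSta` (what it actually trained at) in `lspci -vv` output. A minimal sketch; the sample text below is hypothetical output for illustration, not captured from this machine:

```shell
# On the live host you would run:
#   lspci -vv -s 10:00.0 | grep -E 'LnkCap:|LnkSta:'
# Hypothetical sample of the two lines of interest:
sample="LnkCap: Port #0, Speed 5GT/s, Width x16
LnkSta: Speed 5GT/s, Width x8"

# Pull out the widths: if LnkSta is narrower than LnkCap, the card trained
# at reduced width (expected here, since the DL380 slot is x8 electrical).
echo "$sample" | grep -oE 'Width x[0-9]+'
```

The x0 figure reported inside the guests is the virtual PCI topology, not the host link, so the host-side `LnkSta` is the number that matters.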
  9. is this a network share or connected to the computer via usb/whatever? How is it mounted in unRaid?
  10. have you tried passing the second card to the working first vm to make sure it works? Also, if you're on 6.2, it does seem that q35-2.5 fixes several issues.
  11. The issue was that I was copying XML from the working Windows VM (set up as pc-i440fx-2.3) into the OS X VM (set up as pc-q35-2.3). I changed the Windows machine to pc-q35-2.3 and the resulting XML changed to:

    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=10:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=10:00.1,bus=pcie.0'/>

    I used that, and then it passed through just fine.
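The underlying difference (a sketch, consistent with the args in that post): i440fx machines name their root bus pci.0, while q35 machines name theirs pcie.0, so any -device arg that attaches to the root bus has to change with the machine type:

```xml
<!-- i440fx machine type: root bus is named pci.0 -->
<qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>

<!-- q35 machine type: root bus is named pcie.0 -->
<qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
```

This is why the "Bus 'pci.0' not found" error appears when i440fx-flavored args are pasted into a q35 domain.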
  12. figured it out. will post details shortly for those who might be interested in the future. It was a simple oversight after all.
  13. Still new at a bunch of this, so maybe I overlooked something? The card is an EVGA GTX 760 SC. I was having issues getting Windows 10 to use it until I added the following to syslinux.cfg:

    append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot

    It was a recommendation from the log to get IOMMU to work on my machine (HP DL380 G6). The GUI then generates the following XML for the video card:

    <qemu:commandline>
      <qemu:arg value='-device'/>
      <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
      <qemu:arg value='-device'/>
      <qemu:arg value='vfio-pci,host=10:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
      <qemu:arg value='-device'/>
      <qemu:arg value='vfio-pci,host=10:00.1,bus=root.1,addr=00.1'/>
    </qemu:commandline>

    When I try to use the working GPU XML from Windows 10 for El Capitan, I get the following:

    Execution error
    internal error: process exited while connecting to monitor: 2016-09-04T14:09:27.667249Z qemu-system-x86_64: -device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1: Bus 'pci.0' not found

    PCI device:
    10:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1)
    10:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)

    IOMMU group:
    /sys/kernel/iommu_groups/22/devices/0000:10:00.0
    /sys/kernel/iommu_groups/22/devices/0000:10:00.1

    The El Capitan VM works fine with VNC and screen share, so I know the XML is fine before adding in the graphics card. Please ignore CPU pinning differences, as I haven't updated one of them.
Windows 10 xml <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>Win 10 virt</name> <uuid>2e61f1f9-fa41-66c6-1ed8-7145f9bf60c2</uuid> <metadata> <vmtemplate name="Custom" icon="windows.png" os="windows"/> </metadata> <memory unit='KiB'>8912896</memory> <currentMemory unit='KiB'>8912896</currentMemory> <memoryBacking> <nosharepages/> <locked/> </memoryBacking> <vcpu placement='static'>14</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='3'/> <vcpupin vcpu='2' cpuset='4'/> <vcpupin vcpu='3' cpuset='5'/> <vcpupin vcpu='4' cpuset='6'/> <vcpupin vcpu='5' cpuset='7'/> <vcpupin vcpu='6' cpuset='8'/> <vcpupin vcpu='7' cpuset='9'/> <vcpupin vcpu='8' cpuset='10'/> <vcpupin vcpu='9' cpuset='11'/> <vcpupin vcpu='10' cpuset='12'/> <vcpupin vcpu='11' cpuset='13'/> <vcpupin vcpu='12' cpuset='14'/> <vcpupin vcpu='13' cpuset='15'/> <emulatorpin cpuset='0-1'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough'> <topology sockets='1' cores='14' threads='1'/> </cpu> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/disk1/win vm/Win 10 virt/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/win vm/Win10_1511_1_English_x64.iso'/> <target dev='hda' bus='ide'/> <readonly/> <boot order='2'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver 
name='qemu' type='raw'/> <source file='/mnt/user/win vm/virtio-win-0.1.102.iso'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:5f:c5:bd'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Win 10 virt.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x04ca'/> <product id='0x006d'/> </source> </hostdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> <qemu:commandline> <qemu:arg value='-device'/> <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/> <qemu:arg value='-device'/> <qemu:arg value='vfio-pci,host=10:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/> <qemu:arg value='-device'/> <qemu:arg value='vfio-pci,host=10:00.1,bus=root.1,addr=00.1'/> </qemu:commandline> </domain> El Capitan XML <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>Test</name> <uuid>0ba39646-7ba1-4d41-9602-e2968b2fe36e</uuid> <metadata> 
<type>None</type> </metadata> <memory unit='KiB'>25165824</memory> <currentMemory unit='KiB'>25165824</currentMemory> <vcpu placement='static'>14</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='4'/> <vcpupin vcpu='2' cpuset='6'/> <vcpupin vcpu='3' cpuset='8'/> <vcpupin vcpu='4' cpuset='10'/> <vcpupin vcpu='5' cpuset='12'/> <vcpupin vcpu='6' cpuset='14'/> <vcpupin vcpu='7' cpuset='3'/> <vcpupin vcpu='8' cpuset='5'/> <vcpupin vcpu='9' cpuset='7'/> <vcpupin vcpu='10' cpuset='9'/> <vcpupin vcpu='11' cpuset='11'/> <vcpupin vcpu='12' cpuset='13'/> <vcpupin vcpu='13' cpuset='15'/> <emulatorpin cpuset='1'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-2.3'>hvm</type> <kernel>/mnt/cache/vm/200GB_files/enoch_rev2795_boot</kernel> <boot dev='hd'/> <bootmenu enable='yes'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>core2duo</model> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/mnt/cache/vm/200GB_files/200GB.dmg'/> <target dev='hda' bus='sata'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='dmi-to-pci-bridge'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/> </controller> <controller type='pci' index='2' model='pci-bridge'> <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/> </controller> <controller type='pci' 
index='3' model='pci-bridge'> <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:12:34:56'/> <source bridge='br0'/> <model type='e1000-82545em'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/> </interface> <memballoon model='none'/> </devices> <seclabel type='none' model='none'/> <qemu:commandline> <qemu:arg value='-device'/> <qemu:arg value='usb-kbd'/> <qemu:arg value='-device'/> <qemu:arg value='usb-mouse'/> <qemu:arg value='-device'/> <qemu:arg value='isa-wokka-wokka-wokka!'/> <qemu:arg value='-smbios'/> <qemu:arg value='type=2'/> <qemu:arg value='-device'/> <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/> <qemu:arg value='-device'/> <qemu:arg value='vfio-pci,host=10:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/> <qemu:arg value='-device'/> <qemu:arg value='vfio-pci,host=10:00.1,bus=root.1,addr=00.1'/> </qemu:commandline> </domain>
    Thanks for any assistance!
  14. (Not an expert here.) From my own testing, I get slightly faster response/transfer speeds/performance running a VM from the cache vs. unassigned devices.
  15. I get about 3-5% loss in cpu performance, but I'm also using older business class hardware.
  16. What file transfer speeds are you all getting? With my VM on an SSD cache disk, I am only seeing a constant transfer rate of about 45 MBps, with peaks up to 50 MBps in either direction. It's OK.... but with my Windows 10 VM I hit about 90 MBps average, with peaks up to 100 MBps (with the same test file). These are speeds measured from a different server (and verified from that server), not taken from inside the VMs themselves. Both use br0 for networking (4-port onboard NIC). I've also done 5 simultaneous file transfers from the cache drive to three physical computers and hit about 2 Gbps, so I don't think the NIC is the problem. I've actually created 3 different OS X VMs and they all have the same sort of speed limit of about 45 MBps. When doing a drag-and-drop file transfer to the cache disk over the network from a physical computer, I get a little over 100 MBps, so I don't think the disk is the source of the problem either. Thoughts?
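As a sanity check on those numbers (a sketch; gigabit Ethernet tops out around 125 MB/s raw, a bit less after TCP/IP overhead):

```shell
# Express the observed transfer rates as a share of gigabit line rate
# (125 MB/s raw), using the figures quoted in this thread.
for rate in 45 90 115; do
    pct=$(( rate * 100 / 125 ))
    echo "${rate} MBps = ${pct}% of gigabit"
done
# 45 MBps is only ~36% of the link, so the OS X VM is nowhere near
# saturating the wire; the bottleneck is elsewhere (driver or disk path).
```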
  17. So now the Win 10 VM is acting up. It cycles writes to the virtual disk just like the OS X VM, but not when sending to the NAS. So am I hitting some sort of network or write buffer on the machine or in the unRaid software? The SSD is maybe 2 months old and seems to be fine, so I'm not sure that is the issue. CPU usage never hits above 20% except for the random peak. I have 24 GB of RAM; each machine has 8-10 GB, but they aren't on at the same time, leaving 14-18 GB free/cached. .....
  18. Yes, it shows gigabit in Network Utility. I even tried changing Network > Hardware from auto to 1000baseT to ensure that wasn't a problem, but there was no difference in speed. Displayed transfer speeds in the VM increase slightly when sending files to and from the cache disk where the VM is stored, but seem to fluctuate more wildly, dipping as low as 4-6 MBps. ---update: I was reading that at times you can't trust the listed speeds in the VM, and watching them via unRaid instead shows huge swings when writing to cache or NAS. I've attached screenshots from a 3.75 GB file transfer. When going from NAS to the OS X VM, it appears to receive data (not at full gigabit speed), then pause receiving and write to the cache disk. When transferring data from the VM to cache and back, the same write cycle appears. The SSD I'm using is connected to the DL380's DVD SATA port, since unRaid doesn't recognize the p4xxx RAID card in the machine (I'm waiting to get a new controller soon). Maybe just a bad install? I'll do some Win 10 transfers with the same file and post them up in a while for comparison.
  19. OK. Very new to all this. Learning how it all works. Win 10 vm transfers about 115+MBps to and from NAS or any other device on the network. OS X el Capitan vm transfers about 30-40MBps to and from NAS or any other device on the network. Both use same br0 to access network. Thoughts about why? Is this normal with OS x in a vm? Hardware: DL380 G6 xml is as follows: <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>OSX-El-Capitan-10.11-VNC</name> <uuid>0ba39646-7ba1-4d41-9602-e2968b2fe36d</uuid> <metadata> <type>None</type> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu placement='static'>15</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> <vcpupin vcpu='8' cpuset='8'/> <vcpupin vcpu='9' cpuset='9'/> <vcpupin vcpu='10' cpuset='10'/> <vcpupin vcpu='11' cpuset='12'/> <vcpupin vcpu='12' cpuset='13'/> <vcpupin vcpu='13' cpuset='14'/> <vcpupin vcpu='14' cpuset='15'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-2.3'>hvm</type> <kernel>/mnt/disk1/vm_files/enoch_rev2795_boot</kernel> <boot dev='cdrom'/> <bootmenu enable='yes'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>core2duo</model> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/mnt/disk1/vm_files/ElCapitan.dmg'/> <target dev='hda' bus='sata'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' 
bus='0x02' slot='0x01' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='dmi-to-pci-bridge'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/> </controller> <controller type='pci' index='2' model='pci-bridge'> <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/> </controller> <interface type='bridge'> <mac address='don't worry about it'/> <source bridge='br0'/> <model type='e1000-82545em'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/> </interface> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'> <listen type='address' address='0.0.0.0'/> </graphics> <video> <model type='vmvga' vram='16384' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='none'/> </devices> <seclabel type='none' model='none'/> <qemu:commandline> <qemu:arg value='-device'/> <qemu:arg value='usb-kbd'/> <qemu:arg value='-device'/> <qemu:arg value='usb-mouse'/> <qemu:arg value='-device'/> <qemu:arg value='isa-applesmc,osk=MUAHAHAHAHA, I OWN LIKE 17 MACS SO THIS SHOULD BE OK!!!'/> <qemu:arg value='-smbios'/> <qemu:arg value='type=2'/> </qemu:commandline> </domain>
  20. I am completely new at this but had a similar problem and fixed it, so forgive me if you've already done all this: under Settings > Network Settings, did you set "Setup Bridge" to Yes? And if so, did you leave the default bridge name "br0" or change it to something else? If you changed it, take note of the new name. In your VM setup, did you use "br0" as your network bridge, or whatever the bridge name is under network settings? When I ran my first install, I accidentally set the network bridge in VM creation to "virbr0", which isolates the VM from the rest of the network by assigning it an IP from the host, outside the rest of the network's range. After changing it back to "br0", it takes its IP address from the router like all the other devices on the network. Again, if you've already done this and it's all correct, then I've got nothing else to offer except my own problems with getting the VM to recognize the full speed of a bonded connection.
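To check which bridge names actually exist on the host (a sketch; the sample output below is hypothetical, shown only to illustrate what to look for):

```shell
# On a live host you would run:  ip -o link show type bridge
# Hypothetical sample output:
sample='4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500'

# br0 bridges the VM onto the LAN (router-assigned IP); virbr0 is
# libvirt's NAT network (host-assigned 192.168.122.x address).
# The VM's <source bridge='...'/> must match one of these names exactly.
echo "$sample" | awk -F': ' '{print $2}'
```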