
SpaceInvaderOne

Community Developer
Everything posted by SpaceInvaderOne

  1. OK, give this XML a try; I have edited it a bit. The bits I changed are shown first:

     </os>
     <features>
       <acpi/>
       <apic/>
     </features>
     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='4' threads='2'/>
     </cpu>
     <clock offset='utc'>
       <timer name='rtc' tickpolicy='catchup'/>
       <timer name='pit' tickpolicy='delay'/>
       <timer name='hpet' present='no'/>
     </clock>

     So paste the XML below over yours, pop the OSK key in and report back.

     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>MacOS</name>
       <uuid>ae2697b9-a43d-c0ed-764d-7ebcc9a29373</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Ubuntu" icon="arch.png" os="ubuntu"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='9'/>
         <vcpupin vcpu='2' cpuset='10'/>
         <vcpupin vcpu='3' cpuset='11'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='13'/>
         <vcpupin vcpu='6' cpuset='14'/>
         <vcpupin vcpu='7' cpuset='15'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/ae2697b9-a43d-c0ed-764d-7ebcc9a29373_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
       </features>
       <cpu mode='host-passthrough'>
         <topology sockets='1' cores='4' threads='2'/>
       </cpu>
       <clock offset='utc'/>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>destroy</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/VMs/macOS/macOS.img'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='nec-xhci'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <model name='i82801b11-bridge'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <model name='pci-bridge'/>
           <target chassisNr='2'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:51:66:48'/>
           <source bridge='br0'/>
           <model type='e1000-82545em'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </interface>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='none'/>
       </devices>
       <seclabel type='none' model='none'/>
       <qemu:commandline>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-kbd'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-mouse'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='isa-applesmc,osk=REMOVED'/>
         <qemu:arg value='-smbios'/>
         <qemu:arg value='type=2'/>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='Penryn,vendor=GenuineIntel'/>
       </qemu:commandline>
     </domain>
  2. Please post your XML for the VM (just remove the OSK key from it before you post, please).
  3. No, there isn't driver support in the official image for the VNC graphics. As LibreELEC is a media endpoint, using VNC would be terrible with this VM anyway.
  4. Motherboard: ASRock Fatal1ty X99M Killer
     Usage: up to 3 VMs at once, each with separate GPU passthrough.
     Plus points: great X99 board; IOMMU groups are separated well, so no need for the ACS override.
     Minus points: the onboard USB controller will not pass through; you will need a separate PCIe USB card if you want to pass through a controller. No onboard GPU.

     GPU: MSI GTX 1070 Seahawk, Zotac GTX 1070 Founders
     Usage: Fedora and Windows gaming VMs.
     Plus points: easy to pass through, with great performance.
     Minus points: as with all Nvidia cards, will not work as the primary GPU (with no onboard graphics) without a ROM dump. Doesn't work in OS X.

     GPU: MSI GTX 750
     Usage: Fedora, LibreELEC, Windows, OS X.
     Plus points: easy to pass through and works with Windows, Linux and OS X.
     Minus points: as with all Nvidia cards, will not work as the primary GPU (with no onboard graphics) without a ROM dump.
  5. Now, I have been thinking today whilst driving home and stuck in traffic, so I thought I would post my thoughts on what's happening. I don't think the guest operating system makes any difference to how to pin or assign CPUs. What we are forgetting here is that it is not the guest OS that controls the CPU assignment and use of our cores, so that shouldn't change depending on which guest OS is being run. To the guest, it only ever has a vCPU; it doesn't know whether it is on a hyperthread on the host or not. But the host decides how those CPUs are presented to the guest. For instance, in my OS X VM I use this:

     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='4' threads='2'/>
     </cpu>

     and then, so Sierra boots, at the end of my XML I put:

     <qemu:arg value='-cpu'/>
     <qemu:arg value='Penryn,vendor=GenuineIntel'/>

     This way the OS X guest sees my CPU as 1 CPU with 4 cores: https://s30.postimg.org/rqw8enfwx/topology.png

     However, I know a lot of people don't use host-passthrough mode in their OS X XML and use this instead:

     <cpu mode='custom' match='exact'>
       <model fallback='allow'>Penryn</model>
       <vendor>Intel</vendor>
     </cpu>

     With that, the guest sees 8 single-core CPUs: https://s28.postimg.org/80c8abanx/no_topology.png

     So I would like to ask 1812 what his XML looks like for the CPU topology, as I think this may cause a difference.
  6. Thanks, 1812, for taking the time to do all those tests. Interesting results. Given this core pairing:

     Proc 1
     cpu 0 <===> cpu 8
     cpu 1 <===> cpu 9
     cpu 2 <===> cpu 10
     cpu 3 <===> cpu 11

     Proc 2
     cpu 4 <===> cpu 12
     cpu 5 <===> cpu 13
     cpu 6 <===> cpu 14
     cpu 7 <===> cpu 15

     it would be interesting to see the difference between:

     A: vm1 assigned cores 4-7, vm2 assigned cores 12-15
     B: vm1 assigned cores 4-7,12-15, vm2 assigned cores 4-7,12-15

     and see whether A or B gets higher scores.
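     For anyone wanting to try this, a minimal sketch of what scenario A might look like as libvirt pinning; the vCPU counts and cpuset numbers are purely illustrative, taken from the pairing list above, not from anyone's actual XML:

     <!-- vm1: 4 vCPUs pinned to host CPUs 4-7 (one thread of each pair on Proc 2) -->
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='6'/>
       <vcpupin vcpu='3' cpuset='7'/>
     </cputune>

     <!-- vm2: 4 vCPUs pinned to the sibling hyperthreads 12-15 -->
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='12'/>
       <vcpupin vcpu='1' cpuset='13'/>
       <vcpupin vcpu='2' cpuset='14'/>
       <vcpupin vcpu='3' cpuset='15'/>
     </cputune>

     Scenario B would simply give both VMs all eight cpuset values 4-7,12-15, so the benchmarks show what the overlap costs.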
  7. Exactly, and that's why trying to copy the docker image with Krusader is not a good idea. Because Krusader itself runs as a Docker container, the docker image is mounted and in use while it copies. CA Appdata Backup / Restore works by shutting down all running containers, copying all of the appdata (unless you have excluded some folders), then restarting the containers afterwards.
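     For anyone curious, a rough sketch of that same stop / copy / restart idea done by hand from the unRAID command line. The /mnt/cache/appdata source and the backup destination are just example paths, and this is not what the plugin literally runs:

     # remember which containers are running, then stop them
     RUNNING=$(docker ps -q)
     [ -n "$RUNNING" ] && docker stop $RUNNING

     # copy appdata while nothing has it open
     rsync -avh /mnt/cache/appdata/ /mnt/user/backups/appdata/

     # start the containers again
     [ -n "$RUNNING" ] && docker start $RUNNING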
  8. Do you mean like this? Bingo. Yes, it's fine to set the shares to use cache "yes", let mover move them, then set the share setting back. That is the official way, and it is probably easier to set the cache to yes and use mover. The way I use in my video is just my personal preference. My reasons for not using mover for that part are as follows:

     1. Mover will move data from the cache to the array, not just copy it, so the old cache will be blank afterwards. I prefer to copy the data manually so the data remains on the old cache as well and isn't removed. Yes, it's a little more effort to manually copy each share, but it's worth it to me: if anything goes wrong during the process I still have the original cache drive intact as a "backup". For example, in the past I have had an SSD fail after a few days, so that makes me cautious. I like to keep my old cache for a bit; then, if the new SSD cache should fail, I can just pop the old one back in. (Also, mover will only move data that is in a share. Sometimes a user may have data on the cache that isn't part of a share, although that is unlikely for most people.)

     2. Mover doesn't give an estimated time to complete, as there is no progress bar. Working through it manually, I have a good idea of how long there is left. The disadvantage of my method is that you can't just leave it and walk away, i.e. let it run overnight.

     Like I said, this is just how I do it when upgrading to a larger cache drive, but either method will give the desired result.
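     As a rough illustration of the manual copy described above (the share name is just an example, and I'm assuming the usual unRAID paths: /mnt/cache is the old cache drive and /mnt/user0 is the array-only view of the user shares, so copied files land on the array):

     # copy one cached share from the old cache drive onto the array
     rsync -avh /mnt/cache/Movies/ /mnt/user0/Movies/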
  9. Hi guys, this week I have made a tutorial about how to add a cache drive to your server. It also shows how to upgrade or replace an existing cache drive without losing data, and how to create a RAIDed btrfs cache pool. Hope you find it useful! How to add a cache drive, replace a cache drive or create a cache pool
  10. The cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores and non-paired vCPUs in OS X? Maybe this will shed some light on whether my machines are just wonky or if OS X is different in pairing CPU cores vs. Windows.

      Yes, that is to be expected if the situation is like this. My CPU pairing is:

      cpu 0 <===> cpu 14
      cpu 1 <===> cpu 15
      cpu 2 <===> cpu 16
      cpu 3 <===> cpu 17
      cpu 4 <===> cpu 18
      cpu 5 <===> cpu 19
      cpu 6 <===> cpu 20
      cpu 7 <===> cpu 21
      cpu 8 <===> cpu 22
      cpu 9 <===> cpu 23
      cpu 10 <===> cpu 24
      cpu 11 <===> cpu 25
      cpu 12 <===> cpu 26
      cpu 13 <===> cpu 27

      So if I pinned 8 vCPUs 0,1,2,3,4,5,6,7, they would be spread over 8 separate cores and would get good performance, as those cores would not be doing anything else. But if I then set up another VM with 8 vCPUs 14,15,16,17,18,19,20,21, it would be sharing the same 8 cores, and when both machines run at once performance would be bad. So over those 8 cores, 2 VMs should be:

      vm1: 0,1,2,3,14,15,16,17
      vm2: 4,5,6,7,18,19,20,21

      That way there is no overlap. Hope that makes sense.

      Yes and no. If you have a bunch of cores, then you can run them non-HT and get good performance. If you don't, you get reduced performance. Just for kicks, I'm going to set up a couple of 4-core VMs tonight on each other's pair and run some simultaneous tests. I do expect degraded benchmarks, but I'm curious to know if it's better to have 1 VM on its own HT pair, or let it share with another VM which may not be using the HT pair at the same time, therefore lessening the performance hit vs using HT pairs. (Sorry if that's confusing, still on a bunch of medicine... results in a few hours though...)

      I think the worst problem of sharing CPU hyperthreads is the latency that can be caused in the VM: a lot of choppy sound, etc. (I wish I had some cold medication as I feel really bad; that's why I'm still up. This cold caught me by surprise. The girlfriend says it's because I popped outside without a coat, but I refuse to believe that!)
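      If anyone wants to check their own pairings, a quick way from the unRAID console (the numbers will of course differ per CPU; this is plain Linux, nothing unRAID-specific):

      # list each logical CPU with its core id; logical CPUs sharing a CORE value are HT siblings
      lscpu -e=CPU,CORE,SOCKET

      # or, per logical CPU, show its sibling thread directly
      cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list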
  11. Thank you for your suggestion, Darkun1. Unfortunately, my BIOS doesn't seem to offer such a setting. FWIW, I also tried passing through the onboard Renesas USB 3.0 controller with mixed results: it seemed to work OK with Win10; however, the Win7 drivers are unstable and would often crash or prevent the VM from loading. I would love to find a solution, since the only way to access the attached external drives is as a network share, which is extremely slow. Anyway, thanks again for your suggestion.

      With the Inateck USB 3.0 card, try another PCIe slot for it if you can, and also check that nothing else is in its IOMMU group. It may be worth a shot to try using pci-stub.ids= instead of vfio-pci.ids= in the syslinux config. Also, for the onboard USB controllers and the Windows 7 problem, you could try changing EHCI Hand-off from [Disabled] to [Enabled] in the BIOS.
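      For reference, a sketch of what that stubbing entry in syslinux.cfg might look like; the 1b21:1142 vendor:device ID here is only an example for an ASMedia-based USB 3.0 card, so substitute the ID of your own controller, and swap pci-stub.ids= for vfio-pci.ids= to compare the two:

      label unRAID OS
        menu default
        kernel /bzimage
        append pci-stub.ids=1b21:1142 initrd=/bzroot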
  12. The cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores and non-paired vCPUs in OS X? Maybe this will shed some light on whether my machines are just wonky or if OS X is different in pairing CPU cores vs. Windows.

      Yes, that is to be expected if the situation is like this. My CPU pairing is:

      cpu 0 <===> cpu 14
      cpu 1 <===> cpu 15
      cpu 2 <===> cpu 16
      cpu 3 <===> cpu 17
      cpu 4 <===> cpu 18
      cpu 5 <===> cpu 19
      cpu 6 <===> cpu 20
      cpu 7 <===> cpu 21
      cpu 8 <===> cpu 22
      cpu 9 <===> cpu 23
      cpu 10 <===> cpu 24
      cpu 11 <===> cpu 25
      cpu 12 <===> cpu 26
      cpu 13 <===> cpu 27

      So if I pinned 8 vCPUs 0,1,2,3,4,5,6,7, they would be spread over 8 separate cores and would get good performance, as those cores would not be doing anything else. But if I then set up another VM with 8 vCPUs 14,15,16,17,18,19,20,21, it would be sharing the same 8 cores, and when both machines run at once performance would be bad. So over those 8 cores, 2 VMs should be:

      vm1: 0,1,2,3,14,15,16,17
      vm2: 4,5,6,7,18,19,20,21

      That way there is no overlap. Hope that makes sense.
  13. What error are you getting when installing it? As an alternative to installing with the installer, you can just open the EFI partition with EFI Mounter and manually put the Clover files into it. My original video on installing Sierra has those files in the description (but not 3974). I can't link you 3974 to manually paste in, as I'm not at home.
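      If you would rather mount the EFI partition from Terminal instead of using the EFI Mounter tool, something like this should work (disk0s1 is only the usual identifier for the EFI partition on the first disk; check diskutil list for yours):

      # find the EFI partition (normally the first partition, type EFI)
      diskutil list

      # mount it; it appears at /Volumes/EFI and you can copy the Clover files in
      sudo diskutil mount disk0s1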
  14. Alright, perfect! So since I'm not using very intensive Docker apps, and file transfer to and from the server is rare, my drive setup SHOULD be okay in theory. It could probably be a bit better, but I'm sure I'll learn more as time goes on. I've just installed the Unassigned Devices plugin to check it out a bit more, and I've also just learned how to isolate CPUs for VMs from unRAID, so that should help. Going to restart the server shortly and see how it goes.

      When it comes to problems with cores in VMs, I find the answer is not isolating the cores from unRAID; I prefer to leave them all available. But as 1812 says, if they are isolated from the host then they aren't going to be used by unRAID, so this can be an advantage sometimes. I feel only people who have a lot of cores can afford to isolate cores from unRAID; for someone who only has 4 cores, do you really want only one available for unRAID? I personally think it is best to leave all cores not isolated. Remember, if you are worried about Dockers using your VM cores, then pin your Dockers to non-VM cores. Pin your VMs to cores avoiding the first core in your system, as unRAID prefers the lower-numbered cores. Emulatorpin to a core your VM isn't using; if you have enough cores, pin emulatorpin to a core nothing else is using. I would rather use 4 cores for a VM, of which 3 are pinned to the VM and one to emulatorpin, than give 4 cores to the VM and put the emulatorpin on a core shared with unRAID, Dockers etc. As well, never split cores: always pin cores whole, as hyperthreaded pairs, if your CPU has hyperthreading. Splitting cores will just give bad performance. (A rough example of this pinning is sketched below.)

      I find running my VMs from the SSD cache is best for me. When I am running a VM there is rarely any activity on the cache drive from writes to cached shares. Maybe the girlfriend is streaming some show from Emby, but that will come from the array, not the cache.
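      A minimal sketch of the pinning idea above for a 6-core / 12-thread host where the sibling pairs are (0,6) (1,7) (2,8) (3,9) (4,10) (5,11); the numbers are purely an example, not from anyone's actual system:

      <vcpu placement='static'>6</vcpu>
      <cputune>
        <!-- three whole hyperthreaded pairs, avoiding the first core 0/6 which unRAID favours -->
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='8'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='9'/>
        <vcpupin vcpu='4' cpuset='4'/>
        <vcpupin vcpu='5' cpuset='10'/>
        <!-- emulator threads on a pair the VM itself is not using -->
        <emulatorpin cpuset='5,11'/>
      </cputune>

      Docker containers can likewise be kept off the VM cores with the --cpuset-cpus option, for example docker run --cpuset-cpus=1,7 ... in that same example layout.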
  15. Are you trying to pass through your GTX 1080? That isn't supported in OS X as yet, hence it would just show as a generic adapter.
  16. Yes, if you pass through the disk to an OS X VM, you will need to have the bus set to sata in the XML.
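      For example, a passed-through disk stanza might look something like this; the /dev/disk/by-id path is a placeholder, so use the ID of your actual drive:

      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source dev='/dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL'/>
        <target dev='hdc' bus='sata'/>
        <address type='drive' controller='0' bus='0' target='0' unit='2'/>
      </disk>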
  17. Hi, I have made a guide about disks and vdisks in unRAID VMs: how to pass through physical disks to VMs, how to convert a vdisk to a physical disk, and how to convert a physical machine to a VM. It also covers performance testing disks in VMs. Hope you like it!! How to Passthrough Harddrives, Convert Disks and test Vdisk Performance in unRAID vms
  18. Thanks for the pointers. I made a bunch of changes: removed xvga from the XML, upgraded to Sierra, changed the SMBIOS setting in Clover to match an older Mac, etc. And IT WORKS!!! Finally. It recognized the card, and without any boot arguments or installing web drivers, it worked. Phew, spent so many hours. Now I need to get audio working ;-) EDIT: Got HDMI audio working thanks to another one of your posts http://lime-technology.com/forum/index.php?topic=51915.msg524900;topicseen#msg524900 :-) Thanks so much.

      Hi aptalca, try using this HDMI kext: https://www.dropbox.com/s/1f39m1bew9uhyio/HDMIAudio-1.1.dmg?dl=0

      Mount the dmg, open Terminal, cd into the image, then run the script to install:

      cd /Volumes/HDMIAudio
      ./install.sh

      I found this worked for a few cards that I have tried before.
  19. It sounds like when it starts it is outputting from a different port on the graphics card. Can you try connecting with DisplayPort, DVI or HDMI and see if that makes a difference? Edit: also try removing the HDMI cable after it has booted, then reconnecting it. Could I ask you to upload the ROM for the 960 somewhere (and say exactly which card it is) so I can download it? I am trying to put together a collection of ROMs for passthrough use with unRAID.
  20. You may have to change settings in your BIOS for USB and how the controllers work. Are you trying to pass through to Mac OS X? If so, OS X is very picky about which controllers it supports; unfortunately, it doesn't support many USB 3 controllers natively. I had the same problem: my onboard USB didn't work in OS X. I bought this USB 3.0 controller, https://www.amazon.co.uk/gp/product/B00FPIMJEW, because it is natively supported in OS X out of the box.
  21. Are you sure the info you have posted is correct? In your IOMMU list you have:

      00:14.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB xHCI Controller [8086:8cb1]
      /sys/kernel/iommu_groups/4/devices/0000:00:14.0

      listed in group 4, yet in the script output it is:

      Bus 3 --> 0000:00:14.0 (IOMMU group 3)

      so here the same device is in group 3, which isn't consistent. I am guessing that at some point you enabled the ACS override patch, as your GPU here is in group 1 with other devices, including another GPU:

      /sys/kernel/iommu_groups/1/devices/0000:00:01.0
      /sys/kernel/iommu_groups/1/devices/0000:00:01.1
      /sys/kernel/iommu_groups/1/devices/0000:01:00.0
      /sys/kernel/iommu_groups/1/devices/0000:01:00.1
      /sys/kernel/iommu_groups/1/devices/0000:02:00.0
      /sys/kernel/iommu_groups/1/devices/0000:02:00.1

      so it wouldn't pass through like that. I guess you are passing GPUs to a VM, so since that IOMMU list was taken you have changed things using the ACS override? These are your USB controllers:

      00:14.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB xHCI Controller [8086:8cb1]
      00:1a.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2 [8086:8cad]
      00:1d.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1 [8086:8ca6]
      0e:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]
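      To re-check how the groups currently look on your server, this common shell one-liner (written out here as a small script) prints every device grouped by its IOMMU group, so you can confirm whether the ACS override is active:

      #!/bin/bash
      # print each IOMMU group and the devices it contains
      for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
          echo "  $(lspci -nns "${d##*/}")"
        done
      done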