swallace

Members · 45 posts

Everything posted by swallace

  1. I've run into this issue myself with Docker containers that run wild with the CPU when I did not have CPU pinning configured, such that they had effectively unlimited access to the CPU. I believe your problem may be caused by over-provisioning your VM. Stated differently: you have allocated the vast majority of resources to your VM, and Unraid does not have enough left over to do the work of managing the VM. How many of your cores/threads are you currently passing through to the VM? If you're passing all 4 cores and 8 threads to your VM right now, then that's almost certainly the issue. I'd say you want to leave Unraid with at least a whole core to itself (with both its threads). That still leaves you with 3 cores for the VM, and I would expect it to perform decently. (RAM could also be the issue, but I'm suspecting CPU based on what you've said. 32GB can't hurt, but as long as you aren't passing through all 16GB of your memory to the VM, I'd imagine you should be okay. I'd try giving maybe 12GB to the VM and leaving 4GB for Unraid and see how that runs, but maybe that's what you're doing already.)
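To make that split concrete, here is a minimal cputune sketch for a 4-core/8-thread CPU. It assumes Intel-style thread pairing where core 0's sibling thread is CPU 4 — your pairing may differ, so check the thread pairs shown on Unraid's CPU pinning page before copying anything:

```xml
<!-- Hypothetical sketch: 3 cores / 6 threads pinned to the VM,
     leaving core 0 (threads 0 and 4) free for Unraid.
     Thread pairing varies by CPU; verify against your own system. -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
```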
  2. @Leoyzen I just set up a fresh Catalina VM from scratch using macinabox and configured the settings you recommend, just to be sure. My VM is set up as an iMacPro1,1; Lilu/WEG are installed and updated via Clover Configurator, and I have set audio and gfx to be on the same bus. You can see that the GPU is showing up as passed through to the system in the following image: My GPU is a Sapphire Radeon RX 580 Pulse (Model SKU 11265-05-20G) (Amazon link there, if helpful). Also, I've attached my VM's XML config below:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>MacinaboxCatalina</name>
       <uuid>8a3255ee-994c-46ca-ae12-a903b3eb8839</uuid>
       <description>MacOS Catalina</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="catalina.png" os="Catalina"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>2</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='1'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
         <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'/>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxCatalina/Clover.qcow2'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxCatalina/Catalina-install.img'/>
           <target dev='hdd' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxCatalina/macos_disk.img'/>
           <target dev='hde' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='4'/>
         </disk>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:de:f0:40'/>
           <source bridge='br0'/>
           <model type='e1000-82545em'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc52b'/>
           </source>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <qemu:commandline>
         <qemu:arg value='-usb'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='isa-applesmc,osk='/>
         <qemu:arg value='-smbios'/>
         <qemu:arg value='type=2'/>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
       </qemu:commandline>
     </domain>
  3. @igreulich This may or may not be your whole problem, but part of it certainly is: you are trying to pass through all of your CPU cores and all of your RAM. Unraid needs some portion of your resources in order to host the VM in the first place. @infinitycell asked a very similar question yesterday and had the same problem. Read a few of the posts starting here: @jonathanm had a much more detailed response here: Once you get that issue corrected, it should boot easily, but you may still have sound issues until you put your GPU and GPU audio passthrough on the same virtual bus and slot and configure them as a multifunction device. This video goes into more detail on that:
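For reference, the multifunction change usually looks like the fragment below. The source addresses are from my own card and are just placeholders; the part that matters is that the two guest-side addresses share one bus/slot, use functions 0x0 and 0x1, and set multifunction='on' on the video function:

```xml
<!-- Sketch only: replace the <source> addresses with your card's
     real PCI IDs. Guest-side, video (function 0x0) and audio
     (function 0x1) sit on the same bus/slot as one multifunction device. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/> <!-- GPU video -->
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0e' slot='0x00' function='0x1'/> <!-- GPU audio -->
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
```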
  4. @pingustar That's weird. Try it both with and without the GPU vBIOS. Make sure you're passing through the sound portion of your GPU as well, and also try getting them on the same virtual slot/bus configured as a multifunction device. This video should be helpful with that last part:
  5. I’ve run into this issue before. I think something in the XML gets screwed up when switching between, adding, and removing Graphics passthrough. If you’re using macinabox, I’d recommend deleting the XML file for your VM (the actual file, not its contents) and letting macinabox generate a new one. It ought to boot afterwards into VNC, but you’ll have to go back through to customize your config.
  6. @tjb_altf4 that'd be a slick setup. Given it's a macOS setup, @nlash is likely already utilizing Clover or OpenCore in his macOS VM. Not exactly sure how I'd go about getting a Windows VM to show up there as well, but it should be doable. I'm thinking you'd have to add a second vdisk with the right EFI setup so it shows up in the boot loader?
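A hedged guess at the shape of it: the Windows vdisk would be attached to the macOS VM as one more <disk> entry (the path and target below are hypothetical) so the boot loader can scan its EFI partition. Whether Clover/OpenCore then actually lists the Windows entry is the part I haven't tested:

```xml
<!-- Hypothetical: attach an existing Windows vdisk as a second disk
     on the macOS VM's SATA controller. Path, dev, and unit are examples. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
  <target dev='hdf' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='5'/>
</disk>
```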
  7. @Leoyzen I'm running macinabox and ostensibly have everything working perfectly, but cannot for the life of me get hardware acceleration working for the RX 580 I have passed through, despite the fact that it's outputting a picture fine, shows up in system info fine, and I have the latest versions of Lilu and WEG installed. I'm considering killing the whole VM and building it manually like you mentioned, to see if that helps resolve the issue. Do you have any resources you'd recommend on getting started setting up the VM manually?
  8. @solojazz Did you have similar issues when testing using VNC? If so, you might not have enough resources dedicated to the host, causing the whole system to be slow and glitchy like you're seeing. Seems weird, though. Have you tried a different keyboard and mouse? If it occurs with multiple, that narrows things down a bit potentially.
  9. @pingustar The CPU topology line is preventing the VM from booting. Try removing it from your XML and see if it boots up correctly: <topology sockets='1' cores='2' threads='2'/>
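For what it's worth, if you'd rather fix the topology line than delete it, it has to agree with the vcpu count: sockets × cores × threads must equal the number of vcpus. For example, with 4 vcpus:

```xml
<!-- Consistent example: 4 vcpus = 1 socket x 2 cores x 2 threads -->
<vcpu placement='static'>4</vcpu>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>
```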
  10. @infinitycell unless I'm misinterpreting your XML, it looks to me like you're allocating all 16 of your cores (and all their hyperthreads) to your macOS VM? If that's the case, you're getting poor performance because you need to leave some left over for Unraid as your hypervisor. Try leaving a single whole core (both threads) free for Unraid and I bet the VM performs much better. Also be sure to leave Unraid some RAM too.
  11. @widdefmh glad to hear it! Hoping to get my issues worked out sometime soon. I think it may be related to power draw. I have one PCIe power cable coming out of my PSU with two power headers on it. I believe it's meant to power a single card with an 8-pin connector, not two cards each on their own connector. Both GPUs work and drive screens at full res, but I think it's affecting performance under the hood. Gotta dig up the modular cables for my PSU and give both cards a dedicated cable. Really hoping this is it, because software-wise everything seems perfect.
  12. Double-check your XML. If it's just a black screen, you likely forgot to remove the CPU topology line when you modified the VM in the GUI. Easy mistake. If it's not that, or a failure to manually re-apply the couple of other XML changes you have to make every time you change a setting in the VM manager GUI, I'm not sure.
  13. I've successfully set up a Catalina VM using macinabox and it all works great. My host is running on a 3950X with a GTX 1080 for a Windows VM, and a Sapphire Pulse RX 580 configured for macOS passthrough. I'm having an issue related to GPU passthrough. The GPU functions perfectly, except that hardware acceleration seems to be completely botched. The RX 580 shows up in system info no problem, but does not have Metal acceleration listed. Chrome (and Chromium-based apps like Discord, Spotify, etc.) all exhibit very odd graphical errors unless hardware acceleration is turned off. I used a video editing program called VideoProc, which has a settings page for easily checking hardware acceleration status, and everything showed up as blank. I've rebuilt the VM from scratch several times and the same issue always comes up, even as the rest of the VM works flawlessly. I've tried changing the smbios to iMacPro1,1, and verified that I'm running the latest versions of the Lilu and WEG kexts in my Clover configuration. I'm really not sure what else to try at this point. Any ideas would be appreciated. I don't currently have access to my VM's XML template; I'll be updating this post with it shortly.
  14. @widdefmh it’s definitely easier with AMD cards, though a lot of them suffer from the reset bug which can be a pain.
  15. @phyzical are you asking how to run virtualization software inside your macOS VM? Like Parallels or VMware software? Or are you having difficulty getting the macOS VM working? If the latter, watch this video and follow it step by step; it should be helpful:
  16. @SpaceInvaderOne Thanks a bunch for this! I'm working on getting this up and running on my Unraid box and kept running into an issue when I'd try to add CPU cores. I didn't catch that these lines were being altered when I reset, as the video did not mention them and I was not observant enough on my own. I was just about to run a diff between a copy of the original XML and its altered state after applying in form view when I found this comment. Saved the day! Or, at least, a lot of time.
  17. @qy2009 If you're using the repo I mentioned, you don't even need your own synoboot.img. It's provided by the repository. Just use the template I linked and you'll be good to go. I was never able to get it working with the latest version of DSM, and was faced with effectively rebuilding the project from scratch to get it up and running. I may have been able to get it eventually, but as the only reason I wanted Synology in the first place was for the amazing Cloud Sync application, I decided to give up and use a workaround. For anyone curious, I'm running a lightweight Linux VM with the Insync application to back up my entire Unraid box to my G Suite account, utilizing the Unraid share functionality with my VM. Works great! I think I'd still prefer Synology's Cloud Sync, but getting it working in Docker, and then trying to get the host shares mounted via 9p in Docker, was just too much headache.
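For anyone wanting to replicate the share-into-VM part: Unraid passes shares to a VM over 9p, which in the VM's XML looks roughly like the fragment below (the share name "backup" and its mount tag are examples, not my actual config):

```xml
<!-- Example only: expose the Unraid share "backup" to the guest.
     Inside the Linux VM it mounts with:
     mount -t 9p -o trans=virtio backup /mnt/backup -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/user/backup'/>
  <target dir='backup'/>
</filesystem>
```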
  18. This project does still function, but it unfortunately uses the somewhat old DSM version 6.0.2-8451. I'm hoping to get it running with the latest DSM 6.2.2-24922, but have not quite gotten there yet. In the meantime, for anyone trying to get it up and running, below is the Unraid XML template that gets it all working (I've also attached the file for download). Note: once you are up and running, you can access the DSM VM at hostIP:5000. Make sure to log in with the following default credentials: user: admin / pass: 123456

     <?xml version="1.0"?>
     <Container version="2">
       <Name>xpenology</Name>
       <Repository>segator/xpenology</Repository>
       <Registry/>
       <Network>bridge</Network>
       <MyIP/>
       <Shell>sh</Shell>
       <Privileged>true</Privileged>
       <Support/>
       <Project/>
       <Overview/>
       <Category/>
       <WebUI/>
       <TemplateURL/>
       <Icon/>
       <ExtraParams/>
       <PostArgs/>
       <CPUset/>
       <DateInstalled>1580171785</DateInstalled>
       <DonateText/>
       <DonateLink/>
       <Description/>
       <Networking>
         <Mode>bridge</Mode>
         <Publish>
           <Port>
             <HostPort>5022</HostPort>
             <ContainerPort>22</ContainerPort>
             <Protocol>tcp</Protocol>
           </Port>
           <Port>
             <HostPort>5000</HostPort>
             <ContainerPort>5000</ContainerPort>
             <Protocol>tcp</Protocol>
           </Port>
         </Publish>
       </Networking>
       <Data>
         <Volume>
           <HostDir>/mnt/user/appdata/xpenology/image</HostDir>
           <ContainerDir>/image/</ContainerDir>
           <Mode>rw</Mode>
         </Volume>
       </Data>
       <Environment>
         <Variable>
           <Value>Y</Value>
           <Name>AUTO_ATTACH</Name>
           <Mode/>
         </Variable>
       </Environment>
       <Labels/>
       <Config Name="AUTO_ATTACH" Target="AUTO_ATTACH" Default="" Mode="" Description="This variable is necessary due to the use of segator/kvm. See here: https://github.com/segator/kvm#running" Type="Variable" Display="always" Required="false" Mask="false">Y</Config>
       <Config Name="image" Target="/image/" Default="" Mode="rw" Description="Container Path: /image/" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/appdata/xpenology/image</Config>
       <Config Name="SSH Access" Target="22" Default="" Mode="tcp" Description="This allows SSH access to the Synology DSM VM via host port 5022" Type="Port" Display="always" Required="false" Mask="false">5022</Config>
       <Config Name="Web GUI Access" Target="5000" Default="" Mode="tcp" Description="This allows access to the Web GUI of the Synology DSM VM via host port 5000" Type="Port" Display="always" Required="false" Mask="false">5000</Config>
     </Container>

     my-Xpenology.xml