Posts posted by dmarshman

  1. 18 hours ago, david279 said:

    Have you tried updating clover, WEG and Lilu?

    I've tried multiple versions of [Unraid &] QEMU, Clover, WEG and Lilu over the past couple of months. **EDIT** I just verified I was using Lilu 1.4.0 and WEG 1.3.5. I also use MacPro1,1 as the machine ID.

    I just updated to Lilu 1.4.1 and WEG 1.3.6, but ended up with the same result [Metal/OpenCL not supported]. **/EDIT**

     

     

    Since the RX 580 works with no issues with my current setup, I was hoping someone else had a success story with Unraid and a Radeon VII in order to evaluate if it's worth the continued effort or a waste of time trying to get the Radeon VII working with Metal / OpenCL.

    If nothing else comes up, I'll try and put some time aside this weekend to see if I can make any progress...

     

    As an aside, both the RX 580 and Radeon VII work perfectly as eGPUs using an external Thunderbolt 3 enclosure [Sonnet] with MacOS Catalina [10.15.0->.2] on a real Mac [2018 Macmini].    [Vanilla MacOS - no kexts, or anything].

  2. Wondering if anyone has successfully gotten a Radeon VII working with Metal / OpenCL acceleration in Catalina 10.15.2, or if anyone has any advice?

    I have no graphics acceleration, and it is not recognized for Metal / OpenCL by GeekBench 4 or 5.  

     

    I can successfully pass through the Radeon VII, it's recognized by the MacOS VM, is listed correctly with 16GB RAM in "About this Mac", and shows up as GFX0 using IORegistryExplorer.

    [I can post screenshots later if it helps].

     

    When I swap the Radeon VII for a Radeon RX 580 - same VM, with exactly the same MacOS install and Clover install/settings - I get full Metal acceleration, and it is recognized by GeekBench 4/5 as a compute-capable GPU.

     

     

    For background, I'm using the latest version of SpaceInvaderOne's Macinabox docker to create the VM, and using the versions of Clover, Lilu and WEG that it contains.

     

    System:

    UnRAID 6.8.1   |   ASRock Taichi X570  |  AMD Ryzen 9 3950X  |  64 GB RAM  |   GFX#1: Nvidia RTX 2070  |  GFX#2: XFX Radeon VII

     

     

  3. 16 minutes ago, Eksster said:

    I can't find "/boot/config/vfio-pci.cfg".

    I'm not familiar with this. Below you can see I'm doing a similar thing with "vfio-pci.ids=1106:3483,14e4:43a0".

    using device IDs rather than IOMMU groups, as you say. Though, in the second pic, for one of my cards it shows the IOMMU group #, then the device ID, then the number set you're referring to.

    However it's called, I don't see where I can implement what you say. I have Krusader and can peruse my file system. At the root there is a boot folder, but it is empty. Nowhere else in Unraid do I see boot.

    Also, as I said earlier, I can get my cards to run simultaneously now. What do you think this bind function might do? And is it different from the device ID "leave me alone, Unraid!" command, as shown in my "Unraid OS" section in green below?
    I'm about to reboot after kicking out 2 device IDs that were there, which I don't now see in my device list anymore. My video-off change was done as you say but didn't behave as you said, so the only thing I know to try is to see if knocking off those 2 nonexistent device IDs will change anything.

     

     

    What you are doing above should work OK.  I listed an alternative method [which is helpful if you have identical cards].

     

    If you want to try it - and I'm not sure you need to - then it's a text file that you need to create [in the /boot/config/ directory].

     

    Use the "shell" button in Unraid's UI to open a window, and then type:

    nano /boot/config/vfio-pci.cfg

     

    This will open [or create and open if it doesn't already exist] the file for editing:

    Add [based on your post above]:

    BIND=11:00.0 11:00.1

     

    Then press <control>-X to exit, "y" to save, then <enter>/<return>, and then reboot...
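    The nano steps above can also be done non-interactively; a minimal sketch, writing the same single BIND line [the CFG variable is just for illustration - on the actual Unraid host the real path is /boot/config/vfio-pci.cfg]:

```shell
# Hypothetical one-liner equivalent of the nano edit above.
# CFG defaults to a local demo file; point it at /boot/config/vfio-pci.cfg
# when running on the Unraid host.
CFG="${CFG:-vfio-pci.cfg}"
printf 'BIND=11:00.0 11:00.1\n' > "$CFG"
cat "$CFG"
```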

  4. Quote

    Know anything about maximizing pcie slots and effective gfx card use? I really don't want a gfx card taken up by unraid. I'm uneducatedly nervous about pcie lane limitations on x570 with the 3900x

     

    I didn’t have any issues disabling the primary gfx card from being used by Unraid, and passing through both my Radeon VIIs to VMs.  I'll list exactly how I achieved it with my setup below:

     

    Looks like you are much more advanced with Clover tweaking than I am, and I may need some tips from you. I'll reply in a separate message about that...

     

     

     

    Two steps for identical GPU passthrough [with neither being used by Unraid]:

    - #1 - disable the GPUs for Unraid

    - #2 - make them available to pass through to VMs

     

    #1 - To disable the Unraid video:

     

    From the Main menu, click the name of your Boot Device (flash). Under Syslinux Config -> Unraid OS, add "video=efifb:off" after "append initrd=/bzroot".
    The line should now read "append initrd=/bzroot video=efifb:off".
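    After the edit, the "Unraid OS" section of the flash drive's syslinux configuration should look roughly like this [a sketch based on a stock config; your other boot entries and menu lines may differ - the only addition is "video=efifb:off"]:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot video=efifb:off
```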

     

    When you reboot, you will notice there is no video output when Unraid boots (you will be left with a freeze-frame of the boot menu). Your primary GPU is now ready to pass through.

     

    #2 - Then pass through. I use the following method - binding by PCI address - which works flawlessly for identical graphics cards, but needs to be updated if you add / remove any devices in your system (as the addresses may be reassigned).

     

    In the following file:

    /boot/config/vfio-pci.cfg

    add a single line with "BIND=" followed by the PCI addresses of your graphics cards (or any other PCI devices you want to pass through).

     

    Example - For my system -  with two Radeon VIIs: 

                                                        

    BIND=13:00.0 13:00.1 10:00.0 10:00.1

     

    [technically, I think the 2nd entry for each card - the xx:xx.1 HDMI audio function - is redundant/unnecessary, but I include it anyway as it doesn't hurt either].
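    Since a malformed line here only shows up after a reboot, a quick sanity check can save a round trip; a minimal sketch [valid_bind_line is a hypothetical helper, and the format assumed is the space-separated bus:device.function entries shown above]:

```shell
# Hypothetical sanity check for a vfio-pci.cfg BIND line before rebooting.
# Accepts one or more space-separated bus:device.function entries.
valid_bind_line() {
  echo "$1" | grep -Eq '^BIND=([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])( [0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])*$'
}
valid_bind_line 'BIND=13:00.0 13:00.1 10:00.0 10:00.1' && echo "looks OK"
```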

     

    By doing both the above, I am able to run Windows and Ubuntu* VMs simultaneously with both having a passed through, fully accelerated GPU.

    (*and theoretically MacOS too, I just personally haven't got it working properly yet).

  5. 7 hours ago, Eksster said:

    Hey there nice build!
    I'm feeling a lot of parallels here but where I've got slightly lower spec'd items than you.
    I've got the Auorus x570 pro wifi with 3900X. 2 vega frontier cards and a quadro 4000. 32G 3600MHz CL16.

    Currently I have vega in slots 1 and 2, and the quadro in 3.

    1 vega for MacOS, 1 vega for Windows, quadro for unraid- webui and possibly HW acceleration for plex.

    When I have 1 vega running in a vm, if I turn on the other, it will shut off what's running!
    Do you know why or how to fix this? If macos is running and I start windows, it will shut down mac.

    I'm kind of freaking out about it.

     

    Also if you know how to optimize macos that would be cool. I've tried various smbios tweaks. ram always shows DDR3, and I feel like the benchmarking scores could be higher.

     

    Wow.  I love your repurposed / custom modified Mac case...  was it a PowerMac G5 or a Xeon MacPro?

    {I still have my 2008 Dual Quad Core 2.8 GHz Xeon MacPro… I haven't used it as my daily driver since 2016, and have not even turned it on in a couple of years, but it's still fully intact... maybe a project for another day}

     

    For my X570 computer, I upgraded to 64 GiB of RAM [4 DIMMs running at 2800 MHz], and have two Radeon VIIs with a 1300W PSU; currently it's in a Thermaltake Core P3 [open] case.

     

    Re: the KVM machines and MacOS / gfx cards, I haven't experienced the issues you've seen, but have been mainly using the computer with Windows 10 on bare metal [i.e. without Unraid] over the past couple of weeks.

    I have the Radeon VIIs in PCIE Slot #1 and Slot #3.

    When using Unraid, I've been running a Windows 10 VM alongside an Ubuntu 18.04 LTS VM with no problems using two Radeon VIIs. But to be honest, I gave up on MacOS after four days of experiments in which I constantly failed to get any type of acceleration on the gfx card, and ended up corrupting/wiping the installation every few hours with failed boots.

    I think the [Unraid 6.8 RC stream] reversion back to the 4.19.x kernel [from 5.x] made things less functional for me on MacOS, so I decided to wait for Unraid 6.9 [and the latest versions of QEMU] before doing any more experiments. Looks like the 6.8.1 RC just dropped with updated libvirt and QEMU, but it's still on the 4.x kernel, so I'm still waiting... but maybe I'll try again this weekend...

    [I have my 2018 Macmini for day to day MacOS usage so not in too much of a rush].

     

    What type of Radeon Vega cards do you have - 56, 64 or VII?

    Please bear in mind that there is a known reset bug on the Vega 56 and 64 [I think now fixed in recent kernels], and also on the Radeon VII [which is not yet fixed], that impacts virtual machines and could be affecting your system. It basically stops a VM from being restarted after it [the VM] is fully shut down... [you have to reboot Unraid. SpaceInvaderOne has videos about it...].

  6. PCI Devices and IOMMU Groups
    
    IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 1:	[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 2:	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 3:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 4:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 5:	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 6:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 7:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 8:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 9:	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 10:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 11:	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 12:	[1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 13:	[1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 14:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
    	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
    IOMMU group 15:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
    	[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
    	[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
    	[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
    	[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
    	[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
    	[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
    	[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
    IOMMU group 16:	[1987:5016] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation Device 5016 (rev 01)
    IOMMU group 17:	[1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
    IOMMU group 18:	[1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
    IOMMU group 19:	[1022:57a4] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    	[1022:1485] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    	[1022:149c] 0a:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    	[1022:149c] 0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 20:	[1022:57a4] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    	[1022:7901] 0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 21:	[1022:57a4] 03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    	[1022:7901] 0c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 22:	[1b21:1184] 04:00.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
    IOMMU group 23:	[1b21:1184] 05:01.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
    	[8086:2723] 06:00.0 Network controller: Intel Corporation Device 2723 (rev 1a)
    IOMMU group 24:	[1b21:1184] 05:03.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
    IOMMU group 25:	[1b21:1184] 05:05.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
    	[8086:1539] 08:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
    IOMMU group 26:	[1b21:1184] 05:07.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
    IOMMU group 27:	[1002:14a0] 0d:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 14a0 (rev c1)
    IOMMU group 28:	[1002:14a1] 0e:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 14a1
    IOMMU group 29:	[1002:66af] 0f:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon VII] (rev c1)
    IOMMU group 30:	[1002:ab20] 0f:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 HDMI Audio [Radeon VII]
    IOMMU group 31:	[1022:148a] 10:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
    IOMMU group 32:	[1022:1485] 11:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    IOMMU group 33:	[1022:1486] 11:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
    IOMMU group 34:	[1022:149c] 11:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 35:	[1022:1487] 11:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
    IOMMU group 36:	[1022:7901] 12:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 37:	[1022:7901] 13:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

     

  7. 10 hours ago, righardt.marais said:

    Hi, I have now unsuccessfully tried for a week to get my gigabyte x5700xt card passed through to a win10 pro vm on my trial unraid.

    Hardware: 

    <snip>

    </snip>

    Many thanks

    Righardt

    If you are not already, try using the "Q35-3.1" machine, rather than the "i440fx-3.1" machine to which the "Windows 10" template defaults.

    That fixed it for me - [ASRock X570 Taichi, Ryzen 9 3950X and Radeon Vega VII] - in conjunction with the boot flag "video=efifb:off" outlined above.
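    If you want to confirm which machine type your existing VMs are actually using, something like this sketch works [vm_machine_types is a hypothetical helper; it assumes libvirt domain XML lives under /etc/libvirt/qemu, which is where Unraid keeps it, and the directory argument is just for dry-running elsewhere]:

```shell
# Hypothetical helper: report the machine type declared in each libvirt
# domain XML file found under a directory.
vm_machine_types() {
  dir="${1:-/etc/libvirt/qemu}"
  grep -ho "machine='[^']*'" "$dir"/*.xml 2>/dev/null || true
}
vm_machine_types    # e.g. machine='pc-q35-3.1' for a Q35 VM
```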

  8. Per the title.. 

    Plan to use this as a workstation / high end gaming rig;  running both Windows 10 and MacOS Catalina in UnRAID VMs  [as well as Win 10 natively].

     

    I'll add a new thread in the full build section once I get the machine up and running.

     

    Couple of days playing around so far:

     

    Latest MBoard BIOS - 2.50  [2019/11/13]

    Running in UEFI

    Stock clocks, voltages, etc for 3950X & RAM.

    32 GiB RAM   :   G.Skill Ripjaws V 32GB 2 x 16GB DDR4-3200 PC4-25600 CL16 Dual Channel Desktop Memory Kit F4-3200C16D-32G

     

    Hoping/Planning to go to 64 GiB RAM and [maybe] 2x Vega VIIs

     

     

    Initial testing:

     

    VM#1 :   MacOS Catalina

    - using SpaceInvader's Docker / template, plus some tweaks    

    - 8 cores / 16 threads - 16 GiB RAM

    - Vega VII passed through in PCI-E slot #1 [PCI-E 3.0 x16] - no hardware acceleration / Metal / OpenCL yet

    Cinebench R20 - 4850 [Multicore]


     

     

    Native Win 10:

    - 1 TB PCI-E 4.0 x4 NVMe drive

    - 16 cores / 32 threads - 32 GiB RAM

    - Vega VII  [stock everything]

    Cinebench R20 - 9454 [Multicore]

     
