fortytwo - unRAID X399 build



fortytwo: the answer to the meaning of life, the universe, and everything.

 

OS: unRAID 6.5.3 Plus

CPU: AMD Threadripper 1950x 

Motherboard: ASRock X399 Taichi  

RAM: 64GB G.Skill DDR4 @ 3200MHz - F4-3200C14Q-64GVK

Case: Fractal Define R5 (non-window)  

Power Supply: EVGA SuperNova 1200W P2  

GPU: AMD HD5450 (unRAID/OS), Nvidia GTX 1060

CPU Cooling: Noctua U14S TR4-SP3 (changed to chromax black and additional U14 fan)  

Fans: Fractal 2x 140mm front, 1x 140mm rear, 1x 140mm side

Parity Drive: 4TB WD Black  

Data Drives: 5x 4TB WD Red (array); 2TB WD Blue, 4TB WD Blue (unassigned devices)

Cache Drive: 2x Samsung 960PRO 512GB SSD Cache Pool (RAID 0)  

Total Drive Capacity: 27TB (20TB array storage)

 

[Photo attachment: the completed build]

 

This is a major upgrade from my first-revision unRAID build, an X58 system running a 6-core (+HT) Xeon X5670, which had been running for a year.

That system consolidated hardware from 2-3 cheapo pcs and gave me my first taste of what I could do with unRAID and my data.

It was a great system and has actually been retired to HTPC duties (I'll do a build thread on that once I move it back to unRAID from its current bare-metal LibreELEC install).

 

The purpose of moving to TR was to further consolidate hardware, create a suitable homelab/testing environment for work and uni, and get up and running with VMs dedicated to a single task rather than the current all-in-one gaming/development/study box. It's a bit of a jack of all trades, so it was specced as such, and it hasn't let me down.

 

unRAID 6.4.0 was a big turning point for Zen based processors, as it was the first stable version to include a kernel version with the necessary fixes in place.

Seeing the pieces fall together, I pulled the trigger on this build and spent a fortnight running stability testing etc.

After hearing previous stories of the difficulties faced by TR users, the only system changes I made were:

  • Update to latest BIOS (2.00)
  • Enabled virtualisation settings in BIOS
  • Boot order/devices
  • Fan profiles (nice and quiet.. system runs cool)
  • Enabled XMP 3200MHz profile
  • ACS Override

 

Note: I do not have zenstates enabled in unRAID and have had no stability issues in 3 months and counting, but your mileage may vary.

 

I'm yet to fully migrate my desktop and its components over to the unRAID build, mostly because I've been busy with work and part-time studies kicking back in... so updates are still to come.

 

Eventually I'd like to move to a larger case with support for drive caddies, possibly also (avert your eyes children!) water cooling.
At this stage CaseLabs is the lead contender (price is ouch in Australia), I'm also considering a custom solution.

Link to comment

Very nice build - almost exactly what I'm going to do, but I'm waiting another month or two to see if there is a new TR chipset like there was with the X470. I'd love to get a TR Taichi like the X470 Prime with 10GbE onboard. This would also let me ditch the HBAs, as the board would have enough SATA ports.

 

On the case front, there was a great deal on the PC-D600 in the US a month ago, so I picked that up for my new build... I /love/ the case, I can fit anything I want in it, but it is a little large being double wide :). I have three 3-in-5 trayless iStar cages in it for 15 hot-swap disks; with the TR build I'll move the cache to NVMe along with the VM SSD.

 

One question, can you confirm the ASRock BIOS lets you set the LED color, or at least turn the LED on the MB off?

 

 

Case Link: http://www.lian-li.com/pc-d600/

Link to comment

 

7 hours ago, Tybio said:

One question, can you confirm the ASRock BIOS lets you set the LED color, or at least turn the LED on the MB off?

Yes, you can change the colour or use patterns such as pulsing, breathing, etc. - I played around with it for a while.

I chose to leave it on white to help with visibility when troubleshooting in the case, 99% sure you can turn it off completely if desired.

Link to comment
  • 2 months later...

So I was seeing some stability issues with a Server 2016 VM I imported from VMware that looked like a c-state issue, so I've now enabled zenstates and the problem seems to have gone.

That said, my other Server 2016 VM, which I created from scratch, never had issues, so I think it can be dialled out in the Windows power settings; none of my other Win10 or Linux VMs have had issues either.
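For anyone hitting the same thing and wanting to try zenstates: on unRAID the usual home for a boot-time tweak like this is the go file on the flash drive. A minimal sketch only - the script path, and python being available (e.g. via the NerdPack plugin), are assumptions, not a copy of my actual config:

```shell
#!/bin/bash
# /boot/config/go - unRAID runs this script at every boot.
# Assumption: the ZenStates-Linux script has been copied to the flash drive
# and python is installed (e.g. via NerdPack).
python /boot/config/zenstates.py --c6-disable   # disable the C6 package state
# then start the unRAID management interface as normal
/usr/local/sbin/emhttp &
```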

 

Anyway, I had a request for my IOMMU groupings, so I've dumped them here.

Note that ACS override is enabled (set via the VM settings in unRAID).

 

IOMMU group 0:	[1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 1:	[1022:1453] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 2:	[1022:1453] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 3:	[1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 4:	[1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 5:	[1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 6:	[1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 7:	[1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 8:	[1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 9:	[1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 10:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 11:	[1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
[1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
[1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
[1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
[1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
[1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
[1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
[1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
IOMMU group 12:	[1022:1460] 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
[1022:1461] 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
[1022:1462] 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
[1022:1463] 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
[1022:1464] 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
[1022:1465] 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
[1022:1466] 00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
[1022:1467] 00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
IOMMU group 13:	[1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)
[1022:43b6] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller (rev 02)
[1022:43b1] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge (rev 02)
[1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[8086:1539] 04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
[8086:24fb] 05:00.0 Network controller: Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak] (rev 10)
[8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 14:	[144d:a804] 08:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
IOMMU group 15:	[1022:145a] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
[1022:1456] 09:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
[1022:145c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 16:	[1022:1455] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
[1022:7901] 0a:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
[1022:1457] 0a:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
IOMMU group 17:	[1022:1452] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 18:	[1022:1453] 40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 19:	[1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 20:	[1022:1452] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 21:	[1022:1452] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 22:	[1022:1453] 40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 23:	[1022:1452] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 24:	[1022:1452] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 25:	[1022:1454] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 26:	[1022:1452] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 27:	[1022:1454] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 28:	[144d:a804] 41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
IOMMU group 29:	[10de:1c03] 42:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)
[10de:10f1] 42:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
IOMMU group 30:	[1002:68f9] 43:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series]
[1002:aa68] 43:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series]
IOMMU group 31:	[1022:145a] 44:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
[1022:1456] 44:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
[1022:145c] 44:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 32:	[1022:1455] 45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
[1022:7901] 45:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
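For reference, a dump like the one above can be generated from the unRAID console. A sketch of the usual approach - each group is a directory under sysfs, and piping each PCI address through `lspci -nns` gives the descriptions shown above (the function takes an optional base path purely so it can be exercised against a fake tree):

```shell
#!/bin/sh
# Print each IOMMU group and the PCI addresses it contains.
# Pass an alternate base directory for testing; defaults to the real sysfs path.
list_iommu_groups() {
  base="${1:-/sys/kernel/iommu_groups}"
  for g in "$base"/*/; do
    [ -d "$g" ] || continue
    grp="${g%/}"
    echo "IOMMU group ${grp##*/}:"
    for d in "$grp"/devices/*; do
      # each entry is a PCI address, e.g. 0000:42:00.0; feed it to `lspci -nns`
      # to get the human-readable lines shown in the dump above
      [ -e "$d" ] && echo "    ${d##*/}"
    done
  done
}

list_iommu_groups "$@"
```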

 

Link to comment

Cheers mate. Looks like I should go the Taichi direction with my build too.

 

  1. For your 2 GPUs, did you have 1 for unRAID and the 1060 for VM? or both passed to VM?
  2. The Samsung B-die RAM thingie, that only relates to running overclocked right? I plan to run my RAM at stock but just want to make sure.
  3. Did you add multifunction option to your syslinux acs override?
Link to comment
34 minutes ago, testdasi said:
  1. For your 2 GPUs, did you have 1 for unRAID and the 1060 for VM? or both passed to VM?
  2. The Samsung B-die RAM thingie, that only relates to running overclocked right? I plan to run my RAM at stock but just want to make sure.
  3. Did you add multifunction option to your syslinux acs override?
  1. I have run various configs, all of which worked, including a single card that was then passed through to the VM.
    Current config is one GPU for unRAID and one for the VM, as it's a little simpler to implement and I have the spare slot + GPU at the moment.
  2. Just stick to vendor-recommended RAM and you'll be fine.
    However, higher-speed RAM gives a noticeable performance improvement on Ryzen/TR, and B-die is known to be more stable at those higher speeds (for which it is rated).
  3. No, it wasn't needed for my purposes, but it may be needed if you want to pass through some of the onboard devices.
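On point 3, for context: the ACS override ends up as a kernel parameter on the append line in syslinux.cfg, and the multifunction variant is just an extra token on it. A sketch of roughly what the stanza looks like - set it through the unRAID VM settings page rather than hand-editing, and the exact line contents will vary by install:

```
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot
```

With the multifunction option the parameter becomes `pcie_acs_override=downstream,multifunction`.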
Link to comment

Do you have the 2GPU the same way you have in the pic?

i.e. GTX 1060 in first (primary) slot and HD5450 in 4th (secondary) slot?

 

I have never had 2 different GPUs in the same machine, so I've never managed to figure it out.

Does unRAID only use the GPU in the primary slot? If the primary GPU is passed to a VM, does unRAID automatically switch to using the secondary GPU?

 

Otherwise the secondary GPU's only purpose is basically to extract the ROM of the primary one.

I'm kinda reluctant to waste the primary PCIe slot on a cheapo GPU just for unRAID - especially when there are PCIe x1 GPUs out there.

It would be nice to mount a 1070 in a true x16 slot plus a cheapo GPU in the x1 slot, leaving both x8 slots + 1 true x16 slot for other purposes.

Link to comment

No, that picture was a single-GPU setup from when I first set up the server after upgrading; the bottom card was an M.2 adapter card holding my cache drive from my previous setup (I'm now using the 960 Pros).

unRAID uses whatever is set as the primary GPU in the BIOS; if you pass that GPU through to a VM, unRAID loses the card and I'm not aware of a way to reinitialise it (that would be handy!).

However, if you're not using the GUI boot mode, it's probably not a major issue.

Note: some BIOSes have the ability to allocate a different primary GPU, but it's not common and I've not seen it on the ASRock BIOS.

 

You don't need to extract the ROM; there are ways around that, and they have worked for me (SpaceInvader One has a good YouTube video on the topic).

Even if you do extract it, it only needs to be done once, and then you can remove the other card.
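For completeness, when you do want a ROM dump, the sysfs method is the usual route. A sketch - the address 0000:42:00.0 matches the GTX 1060's slot in the IOMMU dump above, but substitute your own (check with lspci), and run it from the console while the card is not in use. The base path is parametrised only so the mechanics can be exercised against a fake tree:

```shell
#!/bin/sh
# Dump a GPU's vBIOS via the sysfs "rom" attribute.
rom_dump() {
  dev="$1"; out="$2"
  base="${3:-/sys/bus/pci/devices}"
  echo 1 > "$base/$dev/rom"       # enable reads of the ROM attribute
  cat "$base/$dev/rom" > "$out"   # copy the ROM image out
  echo 0 > "$base/$dev/rom"       # disable reads again
}

# Example (use your own card's address):
# rom_dump 0000:42:00.0 /boot/vbios_gtx1060.rom
```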

 

Link to comment
  • 2 weeks later...

I like the build - mine is pretty much built as a 4-gamers-1-PC setup, but it will also have several machines for my home lab that will get spun up on demand. Maybe you could give me some feedback - or warn me of the gotchas you ran into.

 

 

 

The plan:

Will run 6 machines if you include the host. 

1x unRAID (host) - 2 cores/4 threads, 2GB RAM

1x Linux game server multi-host w/ steamcache - 2 cores/4 threads, 8-12GB RAM

4x Windows 10 gamers - 2 cores/4 threads each, 12GB RAM each

6x Windows Server test bench (only loaded on demand)

 

The hardware:

Corsair Air 740 case

ASRock Taichi X399

Threadripper 1920X

Corsair 64GB (8x8GB) 2933MHz

EVGA CLC 280

EVGA 1kW Gold PSU

4x EVGA GTX 960 SSC 4GB (had them already - probably upgrade to the 2XXX series eventually)

Allegro Pro 4x USB 3.0

2x Plextor 512GB SSD

2x WD Purple 3TB

Other random drives: 2x 250GB SSD, 2x 2TB HDD

 

We aren't a finicky bunch - 1080p gaming with audio coming through our monitors is usually enough for us. We play tons of different titles, old and new, but usually avoid most of the AAA stuff until it's hit a Steam sale.

Favorites:

7 Days to Die, Dying Light, TF2, No Man's Sky, Ark, the Borderlands games, L4D2, Warhammer: Vermintide, Shadow Warrior 2, Far Cry X, etc.

 

Thoughts and advice appreciated- I have all my hardware except my CPU- (comes Wednesday)

 

 

 

Link to comment
On 8/29/2018 at 7:37 PM, jordanmw said:

The hardware:

Corsair 740 Air case

Asrock Taichi X399

4x Evga GTX 960 SSC 4Gb (had them already- probably upgrade to 2XXX series eventually)

Allegro pro 4x USB 3.0

Problems:

  • The case has 8 expansion slots; your 4 GPUs will occupy all 8. You won't have space for the USB card. You either need a new case or some creative modding.
  • The 2nd GPU will cover the middle PCIe slot so you will need creative use of PCIe extender(s) to make it work.
  • The Taichi X399 middle slot is PCIe x1 (albeit with open end) so I'm not sure your 4-controller PCIe x4 USB card is going to work in that slot. Theoretically it will just be slower.

Theoretically, you need a case with at least 10 expansion slots: GPU1 (x2) - extender in - USB - GPU2 (x2) - GPU3 (x2) - GPU4 via extender out (x2).

It's still not going to be easy: (a) getting an extender to stretch over 5+ slots across 2 big GPUs, and (b) GPUs 3 + 4 will completely cover all the ports at the bottom of the board, so access is going to be a massive pain.

 

Also, perhaps consider a Gigabyte board, since it has a full-length middle slot. I would also recommend opting for compact GPUs, but it looks like you guys are reusing your existing stuff, so that's not an option.

 

 

 

You might want to think simpler.

  • There's no need for a separate USB controller if hot-plugging isn't a requirement: you can pass through individual USB devices in unRAID. If you all use exactly the same model of peripherals it's going to be a massive pain to identify things and edit the XML, but it should still work.
  • The motherboard has 2 separate USB 3.0 controllers that can be passed through to VMs (in addition to a shared USB 3.1 controller that can be used for individual USB device pass-through). So if you can live with just 2 VMs having dedicated controllers and 2 VMs with no hot-plug (preferably the 2 with distinctly different peripherals), that simplifies things.

In short, take the USB controller out of the question and the build might just work.
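To illustrate the individual-device option: passing a single USB device is one hostdev entry in the VM's XML, matched by the vendor:product ID shown by `lsusb`. A sketch only - the IDs below are a common Logitech receiver used purely as an example, not anyone's actual hardware:

```xml
<!-- Pass one USB device through by vendor:product ID (placeholder IDs) -->
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x046d'/>   <!-- first half of the lsusb ID, e.g. 046d:c52b -->
    <product id='0xc52b'/>
  </source>
</hostdev>
```

This is what makes identical peripherals painful: with the same model twice, the vendor:product pair no longer uniquely identifies the device and you have to disambiguate by bus/port address instead.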

Link to comment

I did get a PCIe extension cable and plan on mounting it on the PSU side of the case. I have test-fitted it and it looks like it will stretch just far enough. Not really worried about speed - just hot-plug for the VMs. I guess if it doesn't work, then I'll go with your suggestion of getting rid of it and just using the onboard controllers and assigning devices. I didn't even think about that slot only providing PCIe x1 - I thought it was x4. I'm not terribly concerned with access to the board. Thanks for the input - I'll let you know how it goes.

Link to comment
  • 4 weeks later...

So decided to get down to a bit of maintenance over the weekend, as is typical I had a couple of small headaches.

 

Upgraded the BIOS from v2.00 > v2.30 (bridging version) > v3.30.

For some reason v3.30 stopped my back-panel USB3 ports from being detected at boot, so the unRAID flash drive was moved to a front-panel USB2 port... grrrr, I'll fix that up later.

 

Got into unRAID and updated from v6.5.3 to v6.6.3.

Other than some unrelated DNS adventures, the upgrade went off without a hitch, and some issues I was having with VNC have gone away - happy days.

VM performance seems to be better in 6.6.x, so that's a win :)

Link to comment
  • 3 weeks later...

Hi there very nice builds guys. 

 

I pulled the trigger on a new rig a month ago and am currently waiting for all the parts to arrive (I live in New Caledonia, so delivery is quite a long process).

 

I went with :

 

CPU: 2950X

Motherboard: MSI MEG X399 Creation

RAM: 64GB G.Skill

GPUs: RTX 2070 / GTX 1070 / RX 580

PSU: Corsair RX1000w

Case: Level 20 GT

SSDs: M.2 970 Evo + 1x 500GB 850 Evo + 2x 480GB Kingston

HDD: 4x 2TB Barracudas

 

I'm wondering what the best configuration is for the cache pool and drives to maximise VM speed and efficiency. The principal purpose of this build is to run 1 W10 gaming VM, a bunch of Docker containers for a Kodi media centre, 1 Fedora VM, 1 Mojave VM, and a bunch of other Linux VMs for testing, work and school. How I should configure everything so my VMs each run from an SSD and store their files and media on the HDDs is not really clear to me yet.

 

Link to comment
  • 3 months later...
On 11/16/2018 at 10:50 AM, jordanmw said:

Finally got my issues solved - since I had no slots for an extra USB card, I used a U.2 to PLX PCIe x4 slot adapter and got an Allegro 4-controller card. Now I just need to pass one controller through to each machine. Some BIOS changes allowed me to enable it with unRAID, and I am off to the races!

Can you link the U.2 to PCIEX4 riser or whatever you purchased? I am about to work on a build almost exactly like yours. Just needed to solve the 4 separate controllers for the 4 gaming VMs. And did your case have enough expansion slots for the USB controllers?

Link to comment
10 hours ago, ehftwelve said:

Can you link the U.2 to PCIEX4 riser or whatever you purchased? I am about to work on a build almost exactly like yours. Just needed to solve the 4 separate controllers for the 4 gaming VMs. And did your case have enough expansion slots for the USB controllers?

Answered privately, but here for the forum:

https://www.microsatacables.com/u2-sff8639-to-pcie-4-lane-adapter-sff-993-u2-4l

I had a Corsair Air 740, which does have a small front 2.5" bay location that I adapted my USB card to fit into.

Link to comment
  • 2 months later...
3 hours ago, trl002 said:

Sorry to bother you, but where in the ASRock BIOS did you find "ACS Override"? I've looked all over - or did you mean in the VM manager in unRAID?

Only asking as I'm not getting any sound from my onboard audio.

Thanks

Within unRAID itself, in the VM settings; in the new GUI it's set to the Downstream option.

I haven't used onboard sound on this setup, so I can't confirm what's needed, but maybe double-check what your default audio device is set to in your VM's OS.

Link to comment
