TOoSmOotH

X399 and Threadripper

94 posts in this topic


Update: The issues with launching my Steam games appear to be purely due to software configuration, with my new Windows 10 install trying to use my already-installed Steam games. Nothing to do with Threadripper or the ugly patched kernel.

 

I installed the old DirectX runtime from http://www.microsoft.com/en-gb/download/details.aspx?id=8109 because when trying to manually launch Doom outside of Steam I got an error saying that xinput1_3.dll was missing. Doom now works in both normal and Vulkan modes (and looks oh-so-pretty at 4K60). Bioshock Infinite and Descent: Underground were also working after the DirectX install.

 

When trying to launch Sonic Adventure 2 though, same behavior as the others. I launched it outside of Steam and got a generic error, but Windows helpfully popped up a message saying that I needed to install .NET Framework 3.5. I'm guessing that'll fix some of the other Sonic games I haven't tried yet as well.

 

FWIW, I did first try uninstalling and re-installing the games through Steam with no luck. I probably should have just started from scratch on the installs. :-/

17 hours ago, gilahacker said:

 

I got mine on sale for $700. Haven't seen it go on sale again since.

 

Yeah, memory prices are up. Too bad, but it probably means computer purchases are up, so demand is up, since AMD and Intel are at it again and actually raising the bar on performance. Spectre may dampen that a little, but I expect Intel and AMD will do VERY well selling their next-generation chips, which I'm sure they'll claim have been redesigned to be free of the Meltdown and Spectre vulnerabilities. Anyone with a computer in their basement from the 1990s/2000s still using it for keeping their books or doing email is going to want an upgrade.


Made a write-up over in the 6.4.1 stable thread about the issue below, but thought a recap would be better here since this thread seems more in line with the issue.

 

So I built my Threadripper rig and initially had issues with GPU passthrough not working at all on 6.4. Luckily for me, 6.4.1 came out literally the same night. My lower-slot GPU, regardless of which one I use (960 or 1060), works 100%. I can play games and stream over Steam In-Home Streaming with no issues on a Win10 VM. However, the upper slot (slot 1 or 2, I haven't tried 3 for this) defaults back to the same address regardless of which physical slot the card is in. When you try to pass through that GPU (again, it does not matter which one) you get the error below.

"

Feb  4 02:01:02 BEAST kernel: vfio-pci 0000:42:00.0: BAR 3: can't reserve [mem 0xc0000000-0xc1ffffff 64bit pref]

"

Sometimes the VM will come up but be locked into 640x480 or 800x600 resolution. Or it might just lock up the entire server.

 

 

The only differences between this rig and the Home Server listed in my sig are:

unRAID 6.4.1 / MB: MSI X399 SLI Plus - Latest BIOS update / CPU: 1920x TR4 / RAM: 24GB Kingston Non-ECC DDR4 (1 dimm out for preclearing a different system) / and the 1060 listed above.

 

Is anyone else running into this?

 

beast-diagnostics-20180204-0233.zip

Edited yesterday at 03:15 AM by ryoko227 
Adding diags and some more info

2 hours ago, ryoko227 said:

However, the upper slot (slot 1 or 2, I haven't tried 3 for this) defaults back to the same address regardless of which physical slot it is in.

Out of curiosity, have you tried, for testing, removing your NVMe and then testing the video in slots 1 and 2? I'll admit it looks separated, and your motherboard user manual doesn't mention anything. However, I've seen M.2 and PCIe slots linked like this on other boards, and given your issue, this is my two cents of diagnostics.

 

I must admit I haven't tried moving my graphics cards around yet; still need to find a second card anyhow.

On 2018/2/5 at 7:08 PM, Jcloud said:

Out of curiosity, have you tried, for testing, removing your NVMe and then testing the video in slots 1 and 2? I'll admit it looks separated, and your motherboard user manual doesn't mention anything. However, I've seen M.2 and PCIe slots linked like this on other boards, and given your issue, this is my two cents of diagnostics.

 

I must admit I haven't tried moving my graphics cards around yet; still need to find a second card anyhow.

 

It’s interesting that you mention that. I took your idea and jumped down the rabbit hole, with some interesting, albeit unfruitful, results. I started looking into NVMe/PCIe conflicts and such, which led me to try to update the firmware of the NVMe drive. When I tried to load the USB, it would kick out this error and stop the flash program.

 

[screenshot: error from the NVMe flash utility]

 

So just to sanity-check it, I put the NVMe back into my X99 rig to try and update it there. No issues; it took 14.6 seconds and updated successfully. Also note that I tried the X399 without the M.2 card in, and still got the same errors starting a VM as previously.

  

After updating the NVMe I decided to put it back into the X399, trying a different M.2 slot. Note that both times, the UEFI BIOS didn't recognize anything being in either M.2 slot 1 or 2. I know older BIOSes would only display the NVMe if it was in fact a SATA M.2, so it's unclear at this point whether this is normal behavior or not. But I found it odd that it's stating "Not Present". (EDIT/Update - MSI contacted me and notified me that this is normal behavior.)

 

[screenshot: UEFI BIOS showing both M.2 slots as "Not Present"]

 

When trying to load the VM that passes through the upper-slot GPU, I still get the original error, but I noticed there is also a PCIe bus error above it (which may or may not be in my previous diags). I saved the current log and diags from last night, but they are at home; I will attach them when I get home tonight. I also tried plugging in the additional PCIe bus power 6-pin connector for giggles, but that was also ineffective.

 

When I get home tonight, I will try the system with no secondary GPU, starting by leaving the card in PCIe slot 3 since that currently passes through no problem. If that works, I'll try using both cards, but in slots 3 and 4 (I'm not counting the small slots as they are unused). If that works, then a single card in slots 1 or 2. I also haven't tried out the 3rd M.2 slot, so I may try the NVMe down there as well. The system is also running with only 3 DIMMs in, so I'll put in the 4th and make sure there isn't something screwy going on with that. I will probably also write MSI today and ask them about this. Am I correct in assuming that most people can now successfully pass through an Nvidia GPU in the primary slot, using Threadripper and unRAID 6.4.1?

Edited by ryoko227
added note about 3 DIMMS in, MSI update

1 hour ago, ryoko227 said:

Am I correct in assuming that most people can now successfully pass through an Nvidia GPU in the primary slot, using Threadripper and unRAID 6.4.1?

I am. I think Threadripper is about ironed out, and Ryzen is fast approaching. I'm on stable, just haven't updated the page. However, as I mentioned before, I haven't tried GPU #2 yet.

 

Sorry my suggestion was of no help. Is your M.2 a PCIe one? If so, perhaps it's listed under the PCI devices and/or storage. If it is a SATA M.2, that is definitely odd behavior, considering its listed entry on that SATA page.

 

Good luck sir.

 

Edited by Jcloud

6 hours ago, Jcloud said:

I am. I think Threadripper is about ironed out, and Ryzen is fast approaching. I'm on stable, just haven't updated the page. However, as I mentioned before, I haven't tried GPU #2 yet.

 

Sorry my suggestion was of no help. Is your M.2 a PCIe one? If so, perhaps it's listed under the PCI devices and/or storage. If it is a SATA M.2, that is definitely odd behavior, considering its listed entry on that SATA page.

 

Good luck sir.

 

 

Oh, well that is both good and bad news then. Good in that it is confirmed to work for people. Bad in that it seems I have other problems to deal with, lol. :)

 

No no no, I don't think your suggestion wasn't helpful, quite the contrary. The fact that the M.2 couldn't be updated before any of the virtualization stuff leads me to believe there is probably a deeper hardware issue at hand. It's speculative at the moment, as I still have some card-swapping tests to do, but I'm thinking there might be an issue with the upper slots. It might also imply that the M.2 slots are tied to those upper slots, as you suggested. My current thinking is that the MB might have an issue, though tonight's testing should help show or rule that out (I hope). I will also check whether the NVMe is listed under PCIe devices, and whether it's selectable as a boot device, when I get home tonight.

 

EDIT - Put in the 4th DIMM, no change. Current syslog and diags attached. Gonna try pulling the top-slot PCIe card and putting it into slot 4 next, see what happens. There is no PCIe devices list or storage beyond the image I showed. Also, the NVMe drive doesn't show up as a boot option in either UEFI or LEGACY+UEFI boot modes, which leads me to believe the BIOS doesn't even see it.

beast-syslog-20180206-1920.zip

beast-diagnostics-20180206-1921.zip

 

 

EDIT2 - With a GPU in PCI_E1 (slot 1) and PCI_E4 (slot 3), I get the error message below when starting the associated VM.

 

Feb  6 19:20:00 BEAST kernel: vfio-pci 0000:42:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1: AER: Multiple Corrected error received: id=0000
Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=4019(Transmitter ID)
Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1:   device [1022:1453] error status/mask=00001100/00006000
Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1:    [ 8] RELAY_NUM Rollover    
Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1:    [12] Replay Timer Timeout  

 

After I pulled the uppermost GPU (960) from the machine, I still get full POST and the unRAID startup log displayed to the screen even though the only other card is in slot 3. I also no longer get the above error at VM startup, but the vfio-pci 0000:0a:00.0: BAR 3: can't reserve [mem 0xe0000000-0xe1ffffff 64bit pref] error spams across the syslog when starting the associated VM.

 

Edited by ryoko227
Added information, current diags, further test info....


<SOLVED> 

vfio-pci 0000:0a:00.0: BAR 3: can't reserve [mem 0xe0000000-0xe1ffffff 64bit pref] 

vfio-pci 0000:42:00.0: BAR 3: can't reserve [mem 0xc0000000-0xc1ffffff 64bit pref]

 

TL;DR version:

efifb is being loaded into the area of memory that the GPU is trying to reserve. Having the BIOS and unRAID boot in Legacy mode rather than UEFI keeps efifb from reserving that memory location and allows the GPU to pass through correctly.

 

 

Long Version:

After trying everything listed above in the previous posts, I decided to pull the whole system apart and re-seat everything. This had the benefit of correcting the "AER: Multiple Corrected error received" and PCIe Bus Errors. I think I had also over-tightened some screws (namely the AIO cooler and MB). After this I tried using the different paired DIMMs to see if there was some sort of conflict there. No joy.

 

After having kind of given up and just searching randomly for "BAR can't reserve", I came across this post: [Qemu-devel] vfio-pci: Report on a hack to successfully pass through a b. The author, Robert Ou, described how the BOOTFB framebuffer was being placed in the memory the GPU was also trying to reserve during pass-through. I ran cat /proc/iomem, and sure enough there was efifb sitting right where my GPU was trying to reserve. I figured efifb probably meant EFI framebuffer, and decided I would try running unRAID in Legacy mode to see if it would keep the efifb from loading into that same piece of memory, and yay me, it worked!! The VM loaded up with no issues, and immediately jumped out of that crabtastic 800x600 into glorious 4K!
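For anyone else chasing this, the check boils down to one grep against the standard /proc/iomem interface (the fallback message here is just illustrative, not something the kernel prints):

```shell
# See whether a boot framebuffer (efifb/BOOTFB) has claimed the region the
# GPU's BAR wants, e.g. the 0xe0000000-0xe1ffffff range from the error above.
grep -i -e efifb -e BOOTFB /proc/iomem || echo "no EFI framebuffer claimed"
```

If the printed range overlaps the address range in the vfio-pci "can't reserve" error, the framebuffer is the culprit.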

 

Also a nice piece of info to note: after having loosened the over-tightened screws on everything, the NVMe drive shows up in some of the boot options now, yay!

Edited by ryoko227
detail correction

On 2/5/2018 at 8:47 PM, ryoko227 said:

Feb  6 19:20:00 BEAST kernel: pcieport 0000:40:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=4019(Transmitter ID)

Last night I tried adding an HD6970 (my only spare presently) and I think I got a similar error; I didn't record it, and later removed the card since it didn't work. But I think I had that error, so I'll try to check it after work (I'm getting a CRC error on parity after moving cables anyhow, so that needs to be swapped).

 

Way to stick with it. VMs are a tinkerer's dream-boat.

 

Edit:  (2/8 or 9)

I do in fact have the same error. BIOS CSM was configured for EFI with legacy support. When I set my boot to EFI only, with my security keys cleared to allow non-secured EFI boot, I got an error from the syslinux boot loader. Switched to legacy mode: unRAID booted nominally and the error is gone in my second-GPU VM test, but still no video. I haven't used this card in a VM before, so it isn't known-good hardware for virtualization; not a great test case. Also, based on my results, this makes me wonder whether I have always been booting on the legacy side of the boot loader, or whether it doesn't matter from the boot loader's point of view.

 

Attached are logs on error case.

hydra-diagnostics-20180209-0020.zip

 

Edit2 (2/10):

Switched back to EFI with legacy support in the BIOS. I'm finding my Windows 10 guest crashes after being left idle for long periods (in the 5-6+ hour range); it happened twice after changing the BIOS to legacy-only mode, whereas the guest was solid during the ugly-patch era. I'm guessing there's some component or variable Windows wants from an EFI BIOS that it just isn't getting in straight legacy mode. This makes sense to me: for Windows on bare metal, the only true-BIOS Windows 10 systems I see at work are ones which have been upgraded from W7 and some W8. Based on my observations, though inconclusive, the BIOS firmware still seems a bit half-baked (or is it the microcode? That's phasing out of my realm of expertise).

 

Follow up:

Changed my BIOS to EFI only; changed /boot/EFI- to /boot/EFI and I was able to boot into unRAID. In EFI only, my primary GPU was 800x600 in the VM. Windows Device Manager reported the GTX1070 with error 43 (problem with device), driver not loaded, which accounts for the resolution symptom. In this state I checked the log for the running VM, and qemu had an entire page of the same warning. In haste, I forgot to make a screen capture of the qemu warnings. I think @ryoko227 and I are seeing the same thing. The interesting bit is that ryoko227 saw this on his secondary GPU and I saw it on my primary (with EFI only); so I'm thinking this is rooted in the current TR BIOS or microcode for EFI and UEFI, as Legacy seems to be OK.

Edited by Jcloud
Follow up, also have reported error. Neet-O


I have been watching the updates to X399 virtualisation and unRAID, and felt it was time to pull the trigger.

Built my Threadripper-based unRAID system over the weekend after planning for quite some time; it's still sitting on my test bench for long-term stability testing.

Not used it in anger yet; however, running the latest unRAID build & latest BIOS (ASRock Taichi), I'm up and running.

So far I've tested an old HD5450, which could pass through (host/primary card) with no issues (Win10), and then swapped it out for a GTX1060 (host/primary card), which worked using the ROM file pass-through method.

 

 

 


Running the following setup with 6.4.1:

MB: ASUS ROG Zenith Extreme
CPU: AMD Threadripper 1950X
RAM: 64 GB HyperX Predator
HDD: 2 x 6 TB & 1 x 8 TB WD Red for RAID
SSD: 2 x 512GB M.2 Samsung 960 Pro for Cache

GPU Slot 1: ASUS Radeon R5 230 for unRAID Host

GPU Slot 2: GTX 950 (passthrough)

GPU Slot 3: GTX 1080 (passthrough)

 

Running:

2 x Windows 10 with GPU passthrough

5 x Windows Server 2016

Docker: Plex, Let's Encrypt, Deluge, ...

 

This is my setup and it runs more or less stable with 6.4.1. I get an instant crash when I try to test CPU performance with PassMark PerformanceTest, and I had one crash while the VM was idle (I experienced strange graphics errors afterwards, so I think it may have had something to do with the GPU; maybe it was related to screen power save, which I disabled afterwards). I've uninstalled PassMark PerformanceTest since.

The RAM is running at 2000 MHz, as I wasn't able to start up at 3333, but I haven't tried changing this for a while or with the new BIOS version.

 

I'm not able to pass through any USB controller, as they are all paired with other devices which won't work together. And there seems to be no driver for the mainboard network adapter; it is only recognized as a 100 Mb device. But the PCI card (running in slot 4) that comes with the MB works at 1 Gb.

 

Happy to help anyone who has questions as far as I can, as I'm still a noob with unRaid :) .

 

On 2/11/2018 at 8:56 PM, tjb_altf4 said:

I have been watching the updates to X399 virtualisation and unRAID, and felt it was time to pull the trigger.

Built my Threadripper-based unRAID system over the weekend after planning for quite some time; it's still sitting on my test bench for long-term stability testing.

Not used it in anger yet; however, running the latest unRAID build & latest BIOS (ASRock Taichi), I'm up and running.

So far I've tested an old HD5450, which could pass through (host/primary card) with no issues (Win10), and then swapped it out for a GTX1060 (host/primary card), which worked using the ROM file pass-through method.

 

So far so good, but I had been running at stock memory speeds up until this point.
Have now started testing my memory kit at its rated speed of 3200MHz, and so far so good with memtest overnight and the Valley benchmark (Win10 + 1060) running throughout the day.

Will continue testing tonight.

 

 

Edited by tjb_altf4


Welp, just got back from vacation and started to get the AER error again.

 

Feb 18 12:15:16 BEAST kernel: pcieport 0000:00:01.1: AER: Corrected error received: id=0000
Feb 18 12:15:16 BEAST kernel: pcieport 0000:00:01.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0009(Receiver ID)
Feb 18 12:15:16 BEAST kernel: pcieport 0000:00:01.1:   device [1022:1453] error status/mask=00000040/00006000
Feb 18 12:15:16 BEAST kernel: pcieport 0000:00:01.1:    [ 6] Bad TLP 

Did some quick searching and found this post "Threadripper & PCIe Bus Errors"

 

It seems (as I haven't gotten the error since) that a current workaround is to toss pcie_aspm=off into your syslinux configuration.

For reference, here is a portion of my current syslinux config. I have certain cores blocked off from unRAID using isolcpus, am using rcu_nocbs based on one of @gridrunner's videos or posts (I forget which), and have the pcie_aspm flag from the post above.

 

label unRAID OS
  menu default
  kernel /bzimage
  append pcie_aspm=off rcu_nocbs=0-23 isolcpus=2-11,14-23 initrd=/bzroot
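After rebooting, the flags can be confirmed from the kernel's own view of its boot parameters (the fallback message here is just illustrative):

```shell
# /proc/cmdline shows exactly what the kernel was booted with.
grep -o -e 'pcie_aspm=off' -e 'rcu_nocbs=[0-9,-]*' -e 'isolcpus=[0-9,-]*' /proc/cmdline \
  || echo "flags not found in /proc/cmdline"
```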

 


Any of you Threadripper owners managed to pass through an onboard USB controller? If so would appreciate an IOMMU listing and note of which one you managed to pass through?!

10 hours ago, methanoid said:

Any of you Threadripper owners managed to pass through an onboard USB controller? If so would appreciate an IOMMU listing and note of which one you managed to pass through?!

Yup. Only tried one so far (lazy), but I will try others and edit this post with a report on the other ports. I have my IOMMU groups in my UCD thread (the link in my footer).

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
      </source>
    </hostdev>

Just passed a second one to another vm.

root@HYDRA:~# lspci | grep -i usb
01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)
08:00.0 USB controller: ASMedia Technology Inc. Device 2142
0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
42:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller

The last two controllers in the above list have been passed through; bus 01 is already populated with devices passed through via the USB device function in the webUI. Haven't traced 08 yet. Very painless.
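To match controllers like these to their IOMMU groups, a small script along these lines (assuming the usual /sys/kernel/iommu_groups layout) prints every group with its devices, so you can spot which USB controllers sit in a group of their own:

```shell
#!/bin/bash
# List every IOMMU group and the devices in it. USB controllers that sit in
# a group by themselves are the safe candidates for passthrough.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # Pretty-print via lspci when available; fall back to the raw address.
    out=$(lspci -nns "${d##*/}" 2>/dev/null) || out="${d##*/}"
    echo "    $out"
  done
done
```

On a box with no IOMMU enabled it simply prints nothing.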

Edited by Jcloud
Edited XML, VM was running.


@Jcloud  thanks, very good thread and useful info. I think the reason you are getting such good results is that the Asus BIOS is giving great separation for IOMMU groups. Is that with or without any ACS patch?

 

The board seems to have (lucky git!) FOUR separate USB controllers onboard... all in separate IOMMU groups too. Can you pass any or all of these to VMs?

 

IOMMU group 14: [1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)

IOMMU group 24: [1b21:2142] 08:00.0 USB controller: ASMedia Technology Inc. Device 2142
IOMMU group 30: [1022:145c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 47: [1022:145c] 42:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller

 

Have you tried the exercise of mapping which ports physically connect to which controllers? That obviously helps you work out the best place to stick the unRAID stick and which controllers would have the most ports free for use with VMs. If so, I would love to see your findings in some thread.

 

EDIT: I see not. You don't need to pass them through to see this, I think; just plug a USB device into each port and work out which physical locations belong to which controllers. As per this post: https://lime-technology.com/forums/topic/35112-guide-passthrough-entire-pci-usb-controller/
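One way to start that mapping (assuming the standard sysfs layout): print which PCI controller each USB root hub hangs off, then plug a flash drive into one port at a time and watch lsusb to see which bus it lands on.

```shell
#!/bin/bash
# Each root hub usbN in sysfs is a child of its PCI controller, so the
# parent directory's name is the controller's PCI address (e.g. 0000:0b:00.3).
shopt -s nullglob
for b in /sys/bus/usb/devices/usb*; do
  pci=$(basename "$(dirname "$(readlink -f "$b")")")
  echo "USB bus ${b##*usb} -> controller $pci"
done
```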

 

 

Not a huge fan of the Asus board cos it's only a 6-slot design with only 6 SATA, but if it really has 4 usable passable controllers then it's ideal for a 3-GPU VM setup with each VM having its own USB3 controller.

 

Q: That lonely PCIEx1 slot... could you chuck something in it (temporarily) and LMK if it comes up in its own grouping? If it does I will have no choice but to buy the board :-D

 

Edited by methanoid

4 hours ago, methanoid said:

Is that with or without any ACS patch?

ACS override is enabled in the BIOS and unRAID, yes.

 

4 hours ago, methanoid said:

Have you tried the exercise of mapping which ports physically connect to which controllers? That obviously helps you work out the best place to stick the unRAID stick and which controllers would have the most ports free for use with VMs. If so, I would love to see your findings in some thread.

Nope, not yet; hadn't had the need for it, but since you're interested and it's useful data, I'll do it this weekend and post it to my UCD.

 

4 hours ago, methanoid said:

EDIT: I see not. You don't need to pass them through to see this, I think; just plug a USB device into each port and work out which physical locations belong to which controllers. As per this post

Yup yup :)

4 hours ago, methanoid said:

Q: That lonely PCIEx1 slot... could you chuck something in it (temporarily) and LMK if it comes up in its own grouping? If it does I will have no choice but to buy the board :-D

 

That's going to be a hard one for me; I don't have a device to toss into the slot to test for you. Maybe I can find something inexpensive at my place of work to throw onto my charge account (trying to save $$ for the upcoming VIVE Pro, which is going on my main VM ;) ). I'll also look around my workbench at work and see if I can find something open to test with.

 

4 hours ago, methanoid said:

Not a huge fan of the Asus board cos it's only a 6-slot design with only 6 SATA, but if it really has 4 usable passable controllers then it's ideal for a 3-GPU VM setup with each VM having its own USB3 controller.

When I first bought the board, I had the same opinion; however, given how well everything has just worked, the "Pro's list" has outweighed the "Con's list." Only six SATA ports didn't bother me, because my last board had ten BUT one controller was a Marvell chip, so I had to disable it anyway for VMs, leaving six functional ports.

 

For a while I thought I had the same kind of issue ryoko227 has with the secondary GPU, but after running with the 1050 card I think I was looking at UEFI incompatibility with the old graphics card I tried.


So is this the official threadripper thread? Would be better if we had one in "motherboard and CPUs" but oh well.

 

Just thought I'd share my experience here.

 

1920x

X399 taichi P1.80

32gb (4x8gb) trident z rgb ryzen version

Zotac gtx960 mini

 

unRAID was still on 6.3.5 when I changed platform from my old Intel. The BIOS was at total defaults after upgrading from 1.30 to 1.80, and I stayed with just normal unRAID functionality for over a day. No freezes or anything.

 

On the second day, I went into the BIOS and enabled SVM; VNC works well for my existing Win 10 VM. Upgraded to 6.5.0; passing through the ROM file to this VM just doesn't work. I guess it's due to SeaBIOS.

 

Sure enough, created a new win 10 VM with OVMF, and it worked easily. Ran a game or two on CEMU and it worked well. 

 

Gotta try converting my SeaBIOS VM to OVMF and also try audio passthrough. Gonna have to try both onboard audio and a Xonar DX.

 

All in all it was a smooth n stable experience. Pays to not be an early adopter :P

 

Gotta monitor to see whether the C-state issue happens to me or not. Seems like a Ryzen issue and not Threadripper. Have been running without disabling C6 and with the VM turned off for the day; so far so good.

 

Btw, tried encoding a video which previously took over 5 minutes on my i5 4460... it took less than 40 seconds on this thing using the HandBrake Docker.

 

6 hours ago, ars92 said:

Pays to not be an early adopter :P

Haha I LOL'ed!

6 hours ago, ars92 said:

Seems like a Ryzen issue and not Threadripper. Have been running without disabling C6 and with the VM turned off for the day; so far so good.

I admit to the same observation, had c-states on and somewhere along the way I turned them off but I didn't see much, if any, difference on my TR.

 

6 hours ago, ars92 said:

Btw, tried encoding a video which previously took over 5 minutes on my i5 4460... it took less than 40 seconds on this thing using the HandBrake Docker.

That's my favorite CPU speed test too. 

 

I too found myself rebuilding VMs on the Intel-to-TR crossover; I wasn't that shocked about it really (when I had to do it). I found myself making the reinstall exercise another pseudo-benchmark.


OK, tried out audio passthrough today. HDMI audio worked well from the start, but I wanted to try onboard audio.

 

Unfortunately, for this I had to enable ACS override with multifunction. How I wish ASRock could make the default IOMMU grouping look like it does with these syslinux settings.

 

I have a Xonar DX in my old rig; I'll definitely move it to the Threadripper and test. I believe it should work well without needing ACS override. I don't think I want to keep this setting enabled all the time.

 

Apart from that, I was able to successfully convert my existing SeaBIOS VM to OVMF using the guide from one of the unRAID threads; I just had to reinstall my GPU drivers. All good after that, even after restarts of the VM.

 

My temps stay between 40 and 45 for idle and normal gaming load. Not sure if 4.14 reports Threadripper temps properly, as I think official support only came in 4.15.

 

Side note: onboard audio has really come a LONG way. Definitely usable now. Front-panel audio still sucks tho, lol.

On 2/6/2018 at 11:28 PM, ryoko227 said:

<SOLVED> 

vfio-pci 0000:0a:00.0: BAR 3: can't reserve [mem 0xe0000000-0xe1ffffff 64bit pref] 

vfio-pci 0000:42:00.0: BAR 3: can't reserve [mem 0xc0000000-0xc1ffffff 64bit pref]

 

TL;DR version:

efifb is being loaded into the area of memory that the GPU is trying to reserve. Having the BIOS and unRAID boot in Legacy mode rather than UEFI keeps efifb from reserving that memory location and allows the GPU to pass through correctly.

 

@Jcloud I'm not sure if you are still struggling with this, but maybe this might be helpful?

 

Follow-up on this, mainly to keep my notes together for anyone else who runs across this issue/post. Originally posted by Bryan Jacobs at Re: [vfio-users] "Device or Resource Busy", later referenced by unRAID users @brando56894 and @realies, and brought to my attention by @Rhynri. You can in fact enable UEFI even with the issue noted above! Simply run the commands listed below prior to starting a VM.

 

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

 

What I ended up doing to automate this was to use the User Scripts plugin by @Squid.

Something akin to this:

 

Add New Script
Name

efi-framebuffer_unbind

Description (optional)
Startup script which unbinds the efi-framebuffer after array startup to allow proper GPU passthrough and driver loading in VMs.


Edit script (required)
#!/bin/bash

 

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind


Scheduled At Startup of Array

 

Then just start up your server in UEFI mode, with UEFI enabled in unRAID, and you'll be back in business. :D
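If you want the script to be safe on a box that sometimes boots Legacy, a slightly more defensive variant (my own tweak, using the same sysfs paths as above) skips the unbind quietly when efi-framebuffer.0 doesn't exist:

```shell
#!/bin/bash
# Same unbind as above, but skip quietly when booted in Legacy mode
# (no efi-framebuffer.0 device) or when run without the needed permissions.
shopt -s nullglob
for vt in /sys/class/vtconsole/vtcon*/bind; do
  { echo 0 > "$vt"; } 2>/dev/null || true
done
fb=/sys/bus/platform/drivers/efi-framebuffer/unbind
if [ -e /sys/bus/platform/devices/efi-framebuffer.0 ] && [ -w "$fb" ]; then
  echo efi-framebuffer.0 > "$fb"
fi
```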

 

 

Many thanks to everyone listed above for their work and efforts!!!

Edited by ryoko227
Minor changes and additions

5 hours ago, ryoko227 said:

@Jcloud I'm not sure if you are still struggling with this, but maybe this might be helpful?

Very cool fix! Glad you're up and running. My issue ended up being, I think, the graphics card being too old for UEFI. I'm definitely going to squirrel away your above notes in my own notebook for reference.

 

And thanks for flagging me, and checking up on me.

Have a good weekend.


Is anyone running TR without the Zenstates C6 thing? That nerfs the power saving so I'd prefer to not have to do that!

14 minutes ago, methanoid said:

Is anyone running TR without the Zenstates C6 thing? That nerfs the power saving so I'd prefer to not have to do that!

 

I'm not running it on my Asrock Taichi, 2 months 24/7 on and no issues.

Not sure how well it's been implemented on other companies' TR mobos; worth testing if you can afford the downtime.

1 hour ago, methanoid said:

Is anyone running TR without the Zenstates C6 thing? That nerfs the power saving so I'd prefer to not have to do that!

I've never done the zenstates thing. No problems. Asus ROG Zenith Extreme mobo.

