Anybody planning a Ryzen build?


Recommended Posts

9 hours ago, phbigred said:

The big question is whether you're doing GPU passthrough for the VMs. If so, that would likely be the only sticking point, as Ryzen doesn't have an integrated GPU on the die. You'll need to follow Gridrunner's video on passthrough if using an Nvidia card in the first slot. Not a huge deal, but also look at isolating the VMs from the OS, leaving a few cores for the OS. My VMs run decently with 8GB of RAM, but figure Plex needs about 4GB alongside the OS to run decently. Keep the Plex cores separate from the VM cores, using at least --cpuset-cpus=0,1 for the Plex container. Otherwise, Godspeed and good luck!
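For anyone unfamiliar with that flag, here is a minimal sketch of pinning a Plex container to the first two cores; the image name and paths are placeholders rather than anything from the post above:

  # hypothetical example: pin the Plex container to cores 0 and 1
  docker run -d --name plex \
    --cpuset-cpus=0,1 \
    --net=host \
    -v /mnt/user/appdata/plex:/config \
    -v /mnt/user/Media:/media \
    plexinc/pms-docker

On unRAID the same flag can usually be added through the container template's Extra Parameters field instead of a raw docker run.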

 

 

I won't be passing through any GPUs; it'll mainly be a Plex server. Thank you for all the advice.

Link to comment
9 hours ago, Dazing said:

 

You are asking whether hardware that is not (yet) supported by the software will work perfectly. While there is a chance that it will indeed work free of issues, the risk that it won't is much greater. Reading through the thread will inform you of some of the issues with Ryzen and unRAID at the current time.

 

Thanks for the reply.

 

I appreciate it isn't supported YET; I just wanted to know if it works for what I need. If it does, then it's a potential upgrade now instead of waiting.

Link to comment

About to get the G.Skill Flare X 3200MHz 16GB (2x8GB) kit today. Hopefully this will give the best stability for my Ryzen build. I've been having constant issues with the board throwing RAM status codes. BTW, that was with a Corsair LPX 3000 16GB kit.

 

Maybe we should start a new thread for general Ryzen discussion, or an FAQ sticky perhaps? So tempted to just give up on Ryzen for unRAID and go get an E5-2660 v2.

Link to comment

Well, I've had two crashes so far and I'm only trying to preclear a disk.

First @ 6 hours - Memory was running at 2666

Second @ 8 hours - Memory was running at 2133

 

I'll leave it running a memtest for now; I looked through the BIOS but couldn't really find anything that looked worth changing.

 

Link to comment
7 minutes ago, HellDiverUK said:

I had problems with TeamGroup Vulcan DDR4-3000 RAM, even at 2400MHz (that RAM worked fine at 3000 on my Kaby Lake machine).  Corsair Vengeance LPX DDR4-3200 works great at 2933 on Ryzen.

 

I don't have other memory to try at the moment; it's all or nothing. I bought the DDR4-2666 memory.

 

Well, I wanted to be good and do a preclear on all the drives, but so far it's crashed at 65% of the pre-read, and on the second attempt (where I skipped the pre-read) it crashed at 96% of the preclear (zero stage).

I think I'll have to start it without the preclear for now and maybe reset my config at a later stage. My main data is on my old drives, which I've not touched yet. Although I expect it'll just crash while creating parity.

 

If I get further crashes I may have to try Windows for a day or two just to check stability. I've only managed one pass of memtest, and still haven't got my keyboard working in memtest to test multi-threaded mode.

Edited by Tuftuf
Link to comment
8 hours ago, Tuftuf said:

Well, I've had two crashes so far and I'm only trying to preclear a disk.

First @ 6 hours - Memory was running at 2666

Second @ 8 hours - Memory was running at 2133

 

I'll leave it running a memtest for now; I looked through the BIOS but couldn't really find anything that looked worth changing.

 

 

Have you disabled "Global C-state Control" yet?  If you haven't read through this long forum thread, you may have missed that Ryzen has major stability issues on unRAID, and disabling "Global C-state Control" seems to resolve the issue.

 

It's also been noted that the problems seem to happen more frequently the more "idle" your server is.  So a server sitting there doing nothing (and going into lower power C-states) is more likely to crash on unRAID.  On the flip side, running lots of VM's and keeping the server busy seems to make it less likely to crash (but it will still crash eventually).  It's backwards from what you would expect.
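If you want to confirm whether the box really is dropping into deeper C-states while idle, here's a rough sketch to run from the unRAID console (assuming a cpuidle driver is active; the state names vary by platform):

  # list each C-state and the time spent in it (microseconds) for core 0
  for s in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
    echo "$(cat $s/name): $(cat $s/time) us"
  done

If the deeper states show large and growing times on an idle box, that lines up with the observation that idle servers crash sooner.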

 

On my Ryzen server, I went from just a few hours of up-time before constant crashes, to now 38 days of up-time and counting with it disabled.

 

The only drawbacks seem to be that, by disabling C-states, idle power consumption and heat increase.

 

I also have a link in my signature below back to the original posts regarding this issue.

 

-Paul

Link to comment

You were indeed right (I think), but honestly I could not find the option in my BIOS at first, and I did look for it; I only found it a few hours ago and I'm still testing. Reading this thread pretty much pushed me into buying the system, even though I'm away all weekend and next week.

 

But I found the option today, hidden away; time will tell regarding stability. Now I'm stuck due to having an Nvidia 670 card and needing a second GPU to dump the BIOS prior to passthrough. Unfortunately the only other GPU in the house is a dead 580 that won't even power on. Looks like I may need to place another order.

 

I have tried GPU passthrough even though it won't work yet based on what I've read; at the moment, if I start the VM with the GPU it just causes the system to hang.

 

EDIT: Got hold of a 7950, so now I have two Windows VMs, one using the 670 and the other the 7950. Plus the 670 works by itself in the primary slot with the ROM file edit. So far so good.

Edited by Tuftuf
updates..
Link to comment
6 hours ago, Tuftuf said:

You were indeed right (I think), but honestly I could not find the option in my BIOS at first, and I did look for it; I only found it a few hours ago and I'm still testing. Reading this thread pretty much pushed me into buying the system, even though I'm away all weekend and next week.

 

But I found the option today, hidden away; time will tell regarding stability. Now I'm stuck due to having an Nvidia 670 card and needing a second GPU to dump the BIOS prior to passthrough. Unfortunately the only other GPU in the house is a dead 580 that won't even power on. Looks like I may need to place another order.

 

I have tried GPU passthrough even though it won't work yet based on what I've read; at the moment, if I start the VM with the GPU it just causes the system to hang.

 

EDIT: Got hold of a 7950, so now I have two Windows VMs, one using the 670 and the other the 7950. Plus the 670 works by itself in the primary slot with the ROM file edit. So far so good.

I think if you are running an AMD GPU, the BIOS ROM file isn't needed for the first PCIe x16 slot. Good to keep a copy of your ROM file stashed somewhere too; it can come in handy if you part out the box and move the card to another rig.
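For reference, one common way to grab the ROM is straight from sysfs while booted into unRAID. A rough sketch; the PCI address 0000:01:00.0 and the output path are just examples (check yours with lspci), and this usually only works when the card is not the active console GPU:

  # enable reads of the card's expansion ROM, dump it, then lock it again
  echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
  cat /sys/bus/pci/devices/0000:01:00.0/rom > /boot/gpu.rom
  echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom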

Link to comment
On 2017-5-12 at 0:59 AM, phbigred said:

I think if you are running an AMD GPU, the BIOS ROM file isn't needed for the first PCIe x16 slot. Good to keep a copy of your ROM file stashed somewhere too; it can come in handy if you part out the box and move the card to another rig.

 

Hi!

He needs the BIOS ROM because he put the Nvidia card in the first PCIe slot. Only if you put an AMD card in the first slot do you not need the ROM.

Also, sometimes that's not an option, because if you are trying to pass a graphics card through to a gaming VM, you want the full x16 lanes. On many boards, the second PCIe x16 slot only runs at x4.

 

 

Sérgio. 

 

Edited by serguey bubka
Link to comment

Maybe this is just a small part of the major issues everyone is having with this bleeding-edge CPU?

 

http://www.os2museum.com/wp/vme-broken-on-amd-ryzen/

 

Almost immediately since the Ryzen CPUs became available in March 2017, there have been various complaints about problems with Windows XP in a VM and with running 16-bit applications in DOS boxes in Windows VMs.

After analyzing the problem, it’s now clear what’s happening. As incredible as it is, Ryzen has buggy VME implementation; specifically, the INT instruction is known to misbehave in V86 mode with VME enabled when the given vector is redirected (i.e. it should use standard real-mode IVT and execute in V86 mode without faulting). The INT instruction simply doesn’t go where it’s supposed to go which leads to more or less immediate crashes or hangs.

Link to comment
1 hour ago, serguey bubka said:

 

Hi!

He needs the BIOS ROM because he put the Nvidia card in the first PCIe slot. Only if you put an AMD card in the first slot do you not need the ROM.

Also, sometimes that's not an option, because if you are trying to pass a graphics card through to a gaming VM, you want the full x16 lanes. On many boards, the second PCIe x16 slot only runs at x4.

 

 

Sérgio. 

 

Depends on the AM4 board. Typically the second PCIe 3.0 x16 slot, when the first slot is also populated, runs at x8, not x4. I was aware he did that in the first slot; I was talking about whether he was planning to keep the card in that rig. Almost all X370 and some B350 boards have this option with two PCIe cards. I have yet to see a gaming card that requires full x16 PCIe 3.0 throughput; hell, PCIe 2.0 x16 is only just beginning to be a limiting factor. With this in mind, PCIe 2.0 x16 is a theoretical 16GB/s and PCIe 3.0 x16 a theoretical 32GB/s (counting both directions), so halving the lanes to x8 on PCIe 3.0 still gives ~16GB/s. The performance difference between the two should be negligible.

 

Link to comment

Can anyone confirm which X370 boards have their sensors detected in unRAID? I believe the ASRock Fatal1ty Pro works with the sensors plugin but the Asus Prime does not. How about MSI, Gigabyte, etc.? Please list your board here if sensors work. Thanks.
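As a rough way to check from the console whether a board's sensor chip is even recognised, something like this should work, assuming the lm-sensors tools are present (on unRAID they typically come in via the system temperature plugin or NerdPack):

  # probe for supported sensor chips, then read whatever was found
  sensors-detect --auto
  sensors

If sensors comes back empty, the board's Super I/O chip probably has no driver yet, which seems to be the case for several early X370 boards.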

Edited by mikeyosm
Link to comment

Can someone please test something for me on their Ryzen setup?

 

While logged in to your VM, copy a 4GB+ file from a SAMBA share on UNRAID directly to the C:\ drive of your VM.

Compare the transfer speed of this with copying the same file from the same SAMBA share to another SAMBA share (i.e. not the VM volume).

I get a 40-50% drop in transfer speed when copying to the VM but speeds are great directly between the shares, very odd.

The VM is Windows 10 or Windows Server 2012 R2, using a 10Gb vNIC and 9000 MTU, and the C:\ drive sits on my NVMe 950 Pro.
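If it helps isolate things, a raw network test between the VM and the host would separate the vNIC from the virtual disk; a sketch assuming iperf3 is installed on both ends (it isn't bundled with unRAID by default) and that 192.168.1.10 is the unRAID box:

  # on the unRAID host: start an iperf3 server
  iperf3 -s
  # in the Windows VM: test upload, then download (-R reverses direction)
  iperf3 -c 192.168.1.10 -t 30
  iperf3 -c 192.168.1.10 -t 30 -R

If the vNIC tests clean at multiple gigabits per second, the slowdown is more likely in SMB or the virtual disk layer than in the network path.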

Edited by mikeyosm
Link to comment
On 5/15/2017 at 11:46 AM, mikeyosm said:

Can someone please test something for me on their Ryzen setup?

 

While logged in to your VM, copy a 4GB+ file from a SAMBA share on UNRAID directly to the C:\ drive of your VM.

Compare the transfer speed of this with copying the same file from the same SAMBA share to another SAMBA share (i.e. not the VM volume).

I get a 40-50% drop in transfer speed when copying to the VM but speeds are great directly between the shares, very odd.

The VM is Windows 10 or Windows Server 2012 R2, using a 10Gb vNIC and 9000 MTU, and the C:\ drive sits on my NVMe 950 Pro.

 

Okay, so here are my test parameters:

 

The source file comes from a SAMBA share on UNRAID (\\cortex\disk2) and is 4.37 GB (4,695,246,094 bytes).

The first destination is C:\TEMP on my Windows 10 VM, which lives in \\cortex\cache\vDisk

The second destination is also a SAMBA share on UNRAID (\\cortex\cache\inbox)

 

In my opinion, the two destinations should behave similarly.  All tests were performed using "TeraCopy" running on the Windows 10 VM.

 

Test 1: Copying the file from \\cortex\disk2 to C:\TEMP took 70.50 seconds

Test 2: Copying the file from \\cortex\disk2 to \\cortex\cache\inbox took 32.81 seconds

Test 3: Copying the file from \\cortex\disk2 to C:\TEMP (again) now takes 35.38 seconds

 

My UNRAID system has 64GB installed, but 16GB is reserved for the Windows 10 VM.

 

My guess is Test 1 takes longer because the file is read from a HDD.  Test 2 & 3 run faster because the file is now cached in RAM (either in UNRAID or the Windows 10 VM).
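If anyone wants to make these runs more repeatable, the page cache on the UNRAID side can be dropped between tests; a quick sketch from the console (the Windows VM keeps its own cache, so only rebooting the VM clears that side):

  # flush dirty data, then drop the page cache, dentries and inodes
  sync
  echo 3 > /proc/sys/vm/drop_caches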

 

The difference between Test 2 and 3 is not wholly insignificant, but hardly seems worth worrying about.

 

Finally, as I understand it, file copies here do not really involve the NIC since none of the data needs to go over the network?  My VM is running the Red Hat VirtIO Ethernet Adapter.  The hardware NIC is the Intel I211AT Gigabit LAN, integrated into the motherboard.  I did not change the MTU settings.

 

The \\cortex\cache drive runs on dual SAMSUNG 850 EVO 2.5" 1TB SATA SSDs, configured in a cache pool.

 

The \\cortex\disk2 drive is a Seagate ST4000DM000 4TB SATA HDD.

 

Let me know if you'd like me to run additional tests.

 

- Bill

Link to comment
16 hours ago, ufopinball said:

 

Okay, so here are my test parameters:

 

The source file comes from a SAMBA share on UNRAID (\\cortex\disk2) and is 4.37 GB (4,695,246,094 bytes).

The first destination is C:\TEMP on my Windows 10 VM, which lives in \\cortex\cache\vDisk

The second destination is also a SAMBA share on UNRAID (\\cortex\cache\inbox)

 

In my opinion, the two destinations should behave similarly.  All tests were performed using "TeraCopy" running on the Windows 10 VM.

 

Test 1: Copying the file from \\cortex\disk2 to C:\TEMP took 70.50 seconds

Test 2: Copying the file from \\cortex\disk2 to \\cortex\cache\inbox took 32.81 seconds

Test 3: Copying the file from \\cortex\disk2 to C:\TEMP (again) now takes 35.38 seconds

 

My UNRAID system has 64GB installed, but 16GB is reserved for the Windows 10 VM.

 

My guess is Test 1 takes longer because the file is read from a HDD.  Test 2 & 3 run faster because the file is now cached in RAM (either in UNRAID or the Windows 10 VM).

 

The difference between Test 2 and 3 is not wholly insignificant, but hardly seems worth worrying about.

 

Finally, as I understand it, file copies here do not really involve the NIC since none of the data needs to go over the network?  My VM is running the Red Hat VirtIO Ethernet Adapter.  The hardware NIC is the Intel I211AT Gigabit LAN, integrated into the motherboard.  I did not change the MTU settings.

 

The \\cortex\cache drive runs on dual SAMSUNG 850 EVO 2.5" 1TB SATA SSDs, configured in a cache pool.

 

The \\cortex\disk2 drive is a Seagate ST4000DM000 4TB SATA HDD.

 

Let me know if you'd like me to run additional tests.

 

- Bill

 

Thanks Bill. Test 1 interests me. I'll follow your test parameters so that my test results become a little clearer...

My VM is Windows 10 on a 950 Pro NVMe. The NVMe drive tested fine with Samsung Magician and the speeds are great.

 

Test 1: Copying 4GB file from \\unraid\7200RPM to C:\Temp = approx 50-60MB/s - I expect approx 110MB/s, so way too slow.

Test 2: Copying 4GB file from \\unraid\7200RPM to \\unraid\SSD = approx 110MB/s - Exactly what I expect, maxing out the 7200RPM spindle speed.

Test 3: Copying 4GB file from \\unraid\SSD to C:\Temp = approx 150MB/s - Not what I expect. 250-400MB/s seems reasonable. 

Test 4: Copying 4GB file from C:\Temp to \\UNRAID\7200RPM = 110MB/s - What gives? Why is the speed better copying to the share and not from it?

 

I am at a loss as to why transfers to the VM from any SAMBA share are approx 50% slower. As previously stated, if I copy between SAMBA shares using Krusader or mc, I get full drive speed. Really not sure what's going on here.

 

Having said all of that, I remember only getting 600-700MB/s on my 950 Pro NVMe when it was used as an 'unassigned disk' in UNRAID. I should have got 1-1.2GB/s, so again, 40-50% off the expected speeds. None of these issues were present on my X99 platform; it's only Ryzen and my Asus Prime X370 that show less performance.
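To rule the drive itself in or out, a raw read straight off the device from the UNRAID console might help; a sketch assuming the 950 Pro shows up as /dev/nvme0n1 (check with lsblk), using direct I/O so the page cache doesn't flatter the result:

  # read 4GiB straight off the NVMe device, bypassing the cache
  dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct

If that also lands well under the drive's rated sequential read, the shortfall sits below the filesystem/SMB layers rather than in the VM.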

 

UPDATE - See Test 4 above: strange that copying to the SAMBA share from the VM is faster than copying from the SAMBA share to the VM.

 

 

Edited by mikeyosm
Link to comment
7 hours ago, mikeyosm said:

Test 1: Copying 4GB file from \\unraid\7200RPM to C:\Temp = approx 50-60MB/s - I expect approx 110MB/s, so way too slow.

Test 2: Copying 4GB file from \\unraid\7200RPM to \\unraid\SSD = approx 110MB/s - Exactly what I expect, maxing out the 7200RPM spindle speed.

Test 3: Copying 4GB file from \\unraid\SSD to C:\Temp = approx 150MB/s - Not what I expect. 250-400MB/s seems reasonable. 

Test 4: Copying 4GB file from C:\Temp to \\UNRAID\7200RPM = 110MB/s - What gives? Why is the speed better copying to the share and not from it?

 

Interesting, though I'd argue that Test 3 should be a repeat of Test 1.  I'm still expecting that the file is being cached after the first copy.

 

Also, for Test 4 ... I do not use my cache drive to buffer writes to the array; all writes to the array are done directly to the array drives (and of course, parity). I'm not sure if Test 4 is writing directly to your 7200RPM drive, or is being cached by your SSD to be moved to the array at a later time.

 

If I did my math correctly, my Test 1 achieved 63.5MB/s, so maybe that's not out of line ... though my drives are all 5400RPM.  Your numbers *should* be faster.
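(For what it's worth, the arithmetic holds up: 4,695,246,094 bytes ÷ 70.50 s ≈ 66.6 MB/s, which is about 63.5 MiB/s in binary units.)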

 

Can we try this?  Pick a different 4GB+ file and run the test again, but this time run Test 2 before Test 1?  Can probably skip 3 & 4 for now.

 

- Bill

 

Link to comment
