Replacing my FX-8350-based System



My system is getting pegged all the time and can no longer keep up. Time to upgrade, and I'd like a little help.

 

I have:

16GB Memory

IBM M1015 - 4 drives

Parity and 3 data drives on the MB

1 cache drive on MB

1 drive not in the array on MB

 

My parity checks are now taking about 14 hours, so they can no longer complete overnight

My backups are no longer finishing overnight

Plex media scan is taking too long

 

All this is taxing the machine; it is often at 100% CPU and gets quite slow to respond.

 

I'd like to add hardware transcoding for Plex via an Nvidia GPU using the Nvidia plugin, as well as get better performance across the board.
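
My understanding of the plugin route is that, once the Nvidia build of Unraid is running, it mostly comes down to a few extra settings on the Plex container. A rough sketch only, with a placeholder GPU UUID and example paths (not my actual template):

```
# Sketch of the container settings the Nvidia plugin route needs.
# The GPU UUID comes from `nvidia-smi -L`; the paths are just examples.
docker run -d --name=plex \
  --net=host \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-00000000-0000-0000-0000-000000000000 \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  plexinc/pms-docker
```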

 

What I'm looking at: https://pcpartpicker.com/list/6JpVr6 (if you just want to see)

Ryzen 5 3600

ASRock X570 Taichi ATX AM4 (I have a case large enough for this)

G.Skill Trident Z 32 GB DDR4-3200 Memory

GeForce GTX 1660 6GB SC Ultra Video Card

 

Comments?

 

Any other recommendations?


Been looking more and put together a Xeon-based solution.

 

The CPUs are used, but the other bits are new.

 

2x E5-2680V3

Supermicro X10DRD-iT https://www.supermicro.com/en/products/motherboard/X10DRD-iT

4x 8GB registered DDR4 ECC memory from the Supermicro recommended list

 

Each Xeon scores about 10% lower than the Ryzen in Passmark

This system is far more extensible if needed in the future: I could add a GPU, upgrade the CPUs, add more memory, etc.

 

I'm not familiar with Supermicro, but it seems to have a good name around here.

 

Anything special I should be watching out for?

 

thanks

david


I went with option 3 :)

 

Option 1: the system is not extensible; I'm already maxed out, so there's nowhere to go as my load grows

Option 2: I was concerned about being able to add a GPU later for transcoding

Option 3:

  2x E5-2680V3 - I can easily upgrade these in the future as needed to get more Passmark headroom

  Supermicro X10DAC - dual CPU, plenty of PCIe 3.0 x16 and memory slots, IPMI, an onboard SAS controller, and, as it is a workstation board, GPU support. I've seen posts on the forum where someone else has used this board.

  4x 16GB registered ECC memory

 

This will go in my existing Cooler Master tower case (it supports E-ATX).

 

I'll post pictures when it all shows up in a week or so.

 

david


Got most of my items today. The two CPU coolers, despite the Amazon comparison table saying they were LGA 2011-v3 compatible, weren't. I'm sending those back and have ordered two different ones, this time double-checking the manufacturer's site to ensure they are compatible. The power supply (Corsair TX850M) showed up, but this board needs two CPU ATX 4+4 power plugs and only one cable comes with the power supply, so I ordered another.

 

Wednesday is the new arrival day for the new parts.

 

 


That is my goal. I would love to virtualize my pfSense box and Unraid, probably on Proxmox, as that is what I'm most familiar with. It would help fix some networking bottlenecks I'm experiencing without having to figure out how to get LAGG working between my switch and pfSense.

 

Beware: if you buy a refurb Corsair power supply from Amazon, it may come with only half of the advertised modular cables. Buying the missing ones separately makes it cost almost as much as a new power supply, which would come with a limited warranty.

 

So I returned the Corsair and bought a new EVGA G2 850.

 

david


Got my parts in and installed both CPUs and heatsinks, the memory, the graphics card, and the front-panel connectors (power/reset).

 

Powered on: the power-on LED lights and the SAS heartbeat LED blinks, but there are no beeps like you would typically get after POST. No POST.

 

Took the memory out and I get the beep code that says no memory is installed, so something is alive.

 

Hooked it all back up again: still no beeps, no POST.

 

Ideas?

 

david


Try with only one stick of RAM. If it doesn't POST, swap that stick for another one; this will rule out a single bad stick holding things up. Also ensure the memory is in the correct slots/bank for the number of sticks you are using.

 

Beyond that, it may be a poorly seated CPU, insufficient power, or any number of other problems.

 

Does the MB support booting with a single CPU? If so, try each one in turn to rule out a bad processor. I'm sure you have already, but RTFM to ensure there are no hardware switches or jumpers you need to configure.

 

Do you have a different PSU in another machine you can substitute in? 

 

Disassemble everything and rebuild with absolutely bare-bones components outside the case to troubleshoot some more.

 

Good luck!


Update: the machine works well. I'm just trying to get Proxmox to pass the GPU through to Unraid for Plex.

 

I ran two preclears at the same time and sustained 150MB/s on both of them.  I was happy with that.

 

Here is a picture of it on my workbench. I didn't have a front panel lying around, so I grabbed a spare tower and used it for testing.

 

I'm still not 100% sure on the memory slot positioning. I know that for dual-channel systems they recommend a DIMM in each channel to improve memory bandwidth, but the Supermicro board manual doesn't provide any details. So do I put them both in blue, or one in blue and one in black?

IMG_20190818_163130607.jpg


Flashed the onboard SAS 3008 to IT mode.

Got my mini-SAS HD cables.
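
For anyone doing the same, the flash was roughly the standard sas3flash sequence. The firmware and boot ROM file names below are placeholders; double-check the exact steps and images for your board's onboard 3008 before erasing anything:

```
sas3flash -list                   # note the current firmware type and SAS address
sas3flash -o -e 6                 # erase the existing flash region
sas3flash -o -f SAS3008_IT.bin    # write the IT-mode firmware (placeholder file name)
sas3flash -o -b mptsas3.rom       # optional boot ROM, only needed to boot from the HBA
sas3flash -list                   # verify it now reports IT firmware
```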

 

Everything is working on bare metal: SAS, SATA, and the GPU shared to the Plex Docker container. I lose GPU sharing when running under Proxmox; I can see the GPU in Unraid, but something still isn't quite right because it fails to load.

 

I'll play for a couple more days trying to get GPU sharing working, but after that I'll just deploy the bare-metal build. I may skip Proxmox even if I get GPU sharing to work, as I don't have a lot of confidence in Proxmox for GPU sharing right now.
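
For reference, what I'm working through on the Proxmox side is roughly the usual VFIO checklist; the PCI address and VM ID below are placeholders, not my exact setup:

```
# 1. Enable the IOMMU: add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in
#    /etc/default/grub, run update-grub, and reboot.
# 2. Load the VFIO modules at boot:
echo -e "vfio\nvfio_iommu_type1\nvfio_pci\nvfio_virqfd" >> /etc/modules
# 3. Find the GPU and hand it to the VM (the VM needs the q35 machine type for pcie=1):
lspci -nn | grep -i nvidia
qm set 100 -hostpci0 01:00,pcie=1
```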

 

Maybe time to look at Hyper-V?

 

 

12 hours ago, uldise said:

Choose VMware ESXi then. I have no experience with GPU passthrough, but other than that everything works just fine; I have two Unraid VMs, each with its own HBA passed through.

OK, I took a few hours today, created an account, downloaded and installed ESXi, and created a VM. There was a nice walkthrough here.

 

As my goal for a VM solution was stability and ease of use, and both have failed to deliver, I'll now just deploy the system bare metal as is, like I've had running since Unraid was first released so many years ago.

 


Before giving up on virtualization, I tried Hyper-V Server 2016. It took a bit to get it set up so that I could remotely manage it from my desktop. I finally found an article that steps you through what needs to be done for authentication if you don't have AD running; I'd imagine most home users don't have AD set up. Here it is in case anyone else chooses to go down this route:

 

https://blog.ropnop.com/remotely-managing-hyper-v-in-a-workgroup-environment/

 

With Hyper-V I found that they no longer support passing the GPU through to a guest. Not sure if there was a way, but as that was my sticking point, I bailed on Hyper-V as well. I also read that it doesn't pass through the USB drive; I was hoping to maybe use Plop to work around that, but I didn't explore it at all.

 

I could go with Proxmox and run Plex in its own VM, but I really like the integrated nature of Unraid. Can you create a virtual switch in Unraid to have VMs talk to each other?

 

This AM I started to pull my old system apart and replace it with the new hardware. I pulled the old power supply and MB out; all I left were the drives. I then had to reconfigure the standoffs, as I was going from an m-ATX to an E-ATX board. Yep, that one little letter means a whole lot!! My case supports E-ATX, but there were two standoffs that didn't align. The upper-left one (by the memory and I/O plate) was off by half the bolt thickness; I could push and prod to see half the thread, but couldn't get the screw in. I was able to leave the standoff in place so it supports the board, but there's no grounding there. The other is the top center, where for whatever reason there isn't a standoff anywhere close. I don't know if the standard changed or if my case just came out early (it is my original Cooler Master, bought when I first built my Unraid box back in 2002?).

 

I proceeded to install the power supply and cables and the graphics cards, and took the time to swap out some 4-in-3 cages for others I had that all match and have a Cooler Master face plate. I put in both my new GTX 1060 and my old card, in the hope that I could use the old card for Unraid and pass the GTX 1060 through to Plex. However, after booting I heard 8 beeps and nothing happened. I looked up the beep code in the MB documentation and it isn't there. I hooked my monitor up to the Nvidia card and the screen lit up with an AMI error: out of PCI-E resources. I'd never heard of that, so I swapped the slots of the two cards and rebooted. 8 beeps again. For some reason it doesn't like both cards at the same time, so I ditched the older card.

 

I had a 4TB drive I wanted to put into the system to swap out a 2TB drive, so I installed that as well. It is now preclearing (53% done at 153MB/s) while the array is doing a parity check at 150MB/s at the same time. I'm happy with that; my parity checks previously ran at 90MB/s.
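
Back-of-the-envelope, assuming a 4TB parity drive: 4,000,000 MB at 150 MB/s works out to about 7.4 hours per check, versus roughly 12+ hours at the old 90 MB/s, which is in the ballpark of the 14-hour checks I was seeing before.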

 

Here is a picture after I was all done. There is a hole in the bottom, as that is where a second power supply can go; I only use a single supply now that they are powerful enough. I wish I had a plate to block it off.

 

I've ordered a couple more case fans to replace the old ones. The new ones are PWM controllable, and there are headers on the MB to plug them into. The current fans are really old ones that plug into the old-style power plug; I'll be able to pull that power cable out when I get the new fans.
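
Assuming the Supermicro BMC picks the new fans up on the motherboard headers like the other sensors, I should be able to sanity-check them from the OS with something like:

```
# Quick check that the BMC sees the new PWM fans and reports their RPM
ipmitool sensor | grep -i fan
```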

IMG_20190824_083305887.jpg

 

Now the front with matching 4-in-3 bezels. Yep, those are my 'old man' glasses, as I can't see the front-panel connections anymore.

IMG_20190824_131139922.jpg

 

And the best shot of all, the Dashboard:

Dashboard.jpg

 

 

On 8/21/2019 at 4:56 AM, lovingHDTV said:

I may skip Proxmox even if I get GPU sharing to work, as I don't have a lot of confidence in Proxmox for GPU sharing right now.

 

7 hours ago, lovingHDTV said:

I could go with Proxmox and run Plex in its own VM, but I really like the integrated nature of Unraid.

Then why not virtualize directly under Unraid instead of trying different hypervisors?

 

IMG_20190824_083305887.jpg

 

CPU airflow in pull direction?

 


I tried every hypervisor I could find. None of them shared the GPU in a manner that the unraid-nvidia plugin would recognize.

 

Good question on the airflow. I did my best to draw it up. There are 3x 120mm fans in the 4-in-3 adapters, one 80mm fan in the top, the power supply's fan, and a 120mm in the back of the case. The final fan is an 80mm in the side panel that brings in fresh air immediately below the CPU coolers. The CPU coolers move air upwards.

 

Airflow.jpg

2 hours ago, lovingHDTV said:

Good question on the airflow.  I did my best to draw it up.

If the hole doesn't have a fan forcing air out, it needs to be blocked off. Otherwise your exhaust fans will just be getting make up air from the open holes instead of pulling air over the drives. The only place that should allow incoming air is through the drives.

 

Clear packing tape is good for sealing up holes.

34 minutes ago, lovingHDTV said:

I suspect that air exits the hole in the bottom.  I'll test that out and see.

I doubt that very much. Air follows the path of least resistance. If the intake fans are being properly ducted so none of the air can bypass direct contact with the drives, the number of fans doesn't much matter. It takes a lot of force to push air through the small passages beside the drives, so if there is an easier path bypassing the drives, that's where the air will go. It's your job to constrain that air and make it go where you need it to move the heat. In a server with a load of drives, the first priority is the drive wall, then the CPU, hard drive controller, memory, motherboard, and GPU.

 

If you don't duct the flow where you need it, you will end up with a lot of flow that bypasses all the critical components, and pockets of heat that don't escape.

