New to Unraid - Repurposing an old 3770k rig, help choosing parts and which ports they should go in?


VisualHudson


In 2012 I built a rig that was about as good as you could get at the time, and I'm now looking to repurpose as much of it as possible in a new Unraid build.

 

I will mainly be using the new Unraid build as a Plex server, as well as backup storage (i.e. a NAS) for computers around the house and my camera SD cards. Up to now I have been using the rig as a Windows 10 machine to host the Plex server, but I've recently built a new main rig, so I'm finally looking to make the switch to Unraid after watching many people on YouTube recommend it so highly over the last few years.

 

My current rig has the following specs:

CPU - Intel i7-3770k 

RAM - 32GB Corsair Dominator DDR3 1866MHz

GPU - ASUS GTX 680 2GB

Mobo - ASUS P8Z68-V PRO/GEN3

PSU - Corsair 850AX (80Plus Gold) 

SSD - Samsung 860 Evo 2TB SATA3 (I think I plan to use this as a cache drive) 

HDD - various WD Reds, Blacks, White Label shucked Reds totalling about 30TB (I have half a dozen more 12TB White Label shucked Reds ready and waiting for the new Unraid server to be built)

 

I have been using a CoolerMaster HAF-X case, but have bought a Fractal Design Define 7 XL for the new Unraid build.

 

I have today purchased an Nvidia Quadro P2000 5GB off eBay.

 

I am also looking at buying a 9207-8i HBA card flashed to IT mode. I was actually going to buy two of the cards, but then I realised that I only need one plus a SAS expander. I mentioned this to the eBay seller and he recommended the IBM 46M0997 SAS expander card, although he doesn't sell them himself and couldn't vouch for it, as his enclosure has a built-in expander backplane.
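Once the HBA arrives, I'm assuming I can verify from the Unraid terminal that it really is in IT mode with something like the below (sas2flash is LSI's flashing utility, which I gather needs to be downloaded separately if Unraid doesn't ship it):

# Check that the controller enumerates - a 9207-8i should show up as an
# LSI SAS2308 Fusion-MPT device
lspci | grep -i lsi

# List the controller's firmware details; an IT-mode flash reports
# "IT" as the firmware product type
sas2flash -listall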

 

As I'm going to need to transfer the 30TB back into the new Unraid server, and for future benefits of backups and transfer speed, I'm looking to get add 10GbE adapters into both my rig and the new Unraid server. This is currently where I'm a bit stuck. I see people recommend Mellonox ConnectX cards, but there seem to be so many different ConnectX cards not to mention all of the other manufacturers / brands I'm really lost as to which card/s I should be trying to purchase. I was hoping to use SFP+ given the speed and latency benefits, but I'm happy to listen to recommendations. 
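Whichever cards I end up with, my plan is to sanity-check the link with iperf3 before trusting it for the big transfer (assuming iperf3 is installed on both ends; the IP address below is just a placeholder for the server's):

# On the Unraid server, start iperf3 listening:
iperf3 -s

# On the desktop, run a 30-second test with 4 parallel streams:
iperf3 -c 192.168.1.50 -P 4 -t 30
# Roughly 9.4 Gbit/s total would indicate a healthy 10GbE link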

 

I was planning on taking the GTX 680 out to use in my new rig for the time being, due to the ridiculous difficulty of trying to get hold of an RTX 3080 right now. But I was hoping to be able to put it back in down the road, so that I can use that GPU for a VM or something like that.

 

However, this also brings me on to my next problem: I don't think this motherboard has enough PCI Express slots for all these cards, and I'm not sure which order I should be installing them in.

 

The motherboard manual lists the expansion slots as:

Quote

2 x PCI Express 3.0 / 2.0 x16 slots (single at x16 or dual at x8 / x8 mode)

1 x PCI Express 2.0 x16 slot [black] (max. at x4 mode, compatible with PCIe x1 and x4 devices)

2 x PCI Express 2.0 x1 slots

2 x PCI slots

* The PCIe x16_3 slot shares bandwidth with PCIe x1_1 slot, PCIe x1_2 slot, USB34 and eSATA. The PCIe x16_3 default setting is in x1 mode. 

** Actual PCIe speed depends on installed CPU type. 
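(From what I've read, once the cards are installed you can check what width each slot actually negotiated from the Unraid console - the slot address below is just an example, found via plain lspci:)

# Show what a given card supports vs. what the slot actually trained at:
lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
# LnkCap = the card's maximum, LnkSta = the negotiated link,
# e.g. "Speed 5GT/s, Width x4" means PCIe 2.0 x4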

 

My motherboard looks like this.

 

So excluding the GTX 680, my plan was to install the cards as follows:

  • Install the Quadro P2000 into the top / blue PCI Express 3.0 slot
  • Install the 9207-8i into the middle / white PCI Express 3.0 slot
  • Install the IBM 46M0997 SAS Expander into the bottom / black PCI Express slot.

 

However, it looks like the 10GbE cards all need a PCI Express slot too, which I would have run out of at that point. I also would not be able to use my GTX 680 down the road as a separate GPU for a VM.

 

So a few questions:

  1. Can anyone suggest a better way to order my cards in the various slots? Maybe to get better use of the PCI Express lanes and speed, or to free up a slot for the 10GbE NIC and/or the GTX 680.
  2. Are there any other SAS Expanders I should look at getting outside of the IBM 46M0997? 
  3. I understand the IBM SAS Expander only uses the PCI Express slot for power, so could that power not come from elsewhere, maybe one of the PCI Express 2.0 x1 slots if I took the risk and Dremelled out the right-hand end of the slot? Or, on the less risky side of things, is there a different SAS expander powered by SATA or Molex that I could use instead?
  4. If the Quadro, the 9207-8i and the SAS expander will all require the PCI Express 3.0 / 2.0 x16 slots (i.e. the blue, white and black slots), are there any 10GbE NICs that would only require a PCI Express 2.0 x1 slot or, and I believe this is a long shot, maybe even one of the basic PCI slots?
  5. Am I correct in thinking that the 9207-8i and the SAS expander should both be basically plug and play, as long as the HBA card is flashed to IT mode, or will there be extra work needed to get these working and for my drives to show up?

 

I'm sure that I will have many more questions as I progress through this new build, but I think that about covers my uncertainties at the moment.

 

Any help would be greatly appreciated! 


The CPU will be the limiting factor here:

No VT-d means no pass-through (this is the downside of the early K processors)

PCIe is limited to 1 x16, or 2x8, or 1x8 and 2x4

 

If you source a different, non-K variant, your setup should be fine - and get an Intel expander (RES2SV240), no slot needed as it can be powered by a Molex connector.

 

 


Thanks for the response!

 

I had a quick Google and read up on VT-d now that you've mentioned it, but it's not something I've come across before and I don't really understand it. Could you give a brief explanation of what it is and why it's important?

 

I had seen the RES2SV240 recommended before, but it looks to be incredibly expensive on eBay unless you're willing to order one in from China, in which case they're hundreds cheaper. Should I be dubious about doing that? This was the main reason I was leaning towards the IBM 46M0997, as it can be found at a much more reasonable price locally (on eBay).

 

But let's say I did get the RES2SV240 and I stayed with my 3770k, would I be correct in thinking that I would have a few options?

  1. Put the Quadro P2000 and the 9207-8i in, both at PCIe 3.0 x8. I then have the option to power the RES2SV240 from the third PCIe slot and forget about using the GTX 680 at all in this build. However, this would still not give me a 10GbE NIC.
  2. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4 (or would they both run at x8 again?), power the RES2SV240 with Molex and then put the GTX 680 in the last slot, where it would run at PCIe 2.0 x4 (or would it be x16?). Either way it wouldn't be a massive concern, as it would rarely be used, and when it is, it won't be for "critical" gaming that needs the world's best framerates. However, this would also still not give me a 10GbE NIC.
  3. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4, power the RES2SV240 with Molex and put a 10GbE NIC in the third slot, where it would run at PCIe 2.0 x4 (or would it be x16?). Would it be better swapping the order of these? Either way, this would mean forgetting about using the GTX 680 at all in this build.
  4. Use the Quadro in the top slot, use the GTX 680 in my new rig until I can finally get my hands on an RTX 3080 (which I'm planning on doing anyway), then attempt to sell the Quadro and put the GTX 680 back into the top slot. As with option 3, the SAS card would go in the second slot, the expander would be powered by Molex, and the 10GbE NIC would go in the third. The drawback being that the GTX 680 isn't as good for Plex, and I would lose the ability to run a VM with a dedicated GPU.
  5. Sell the Quadro straight away, keep the GTX 680 in the top slot and just lose out on having a GPU in my new rig until I can get an RTX 3080. Everything else would be the same as options 3/4.

At the moment, if I'm honest, I'm leaning towards either option 3 or 4, but it all depends on whether I can source a RES2SV240 without waiting months on end.

 

Also, as per SpaceInvaderOne's demonstrations on YouTube, I was looking at getting a couple of Mellanox ConnectX-2s, but are there any 10GbE NICs you'd recommend?


Yeah, they have gotten expensive, I paid $85 new back in 2018!

 

In a nutshell, VT-d allows for directed I/O, which allows hardware to be assigned to VMs
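If you want to confirm whether a given CPU/board combo actually has it enabled, a rough check from any Linux console (Unraid included) would be:

# VT-x (basic virtualisation) shows up as the 'vmx' CPU flag; this
# counts how many logical cores report it
grep -c vmx /proc/cpuinfo

# VT-d leaves DMAR / IOMMU messages in the kernel log when it's active
dmesg | grep -i -e DMAR -e IOMMU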

 

The Mellanox cards work fine, but you'll need at least an x4 PCIe slot free

 

Depending on the number of drives, you may need to keep the controller on an x8 link or your parity checks will suffer - and I'd stay away from the Marvell controller on the board.

 

 


A couple of thoughts.

 

6 of the 8 SATA ports on the motherboard are good; I wouldn't use the 2 Marvell-based ones.

With the 8 ports on the SAS card you should be good for hard drives for a while; by the time you outgrow that, you may be thinking of hardware upgrades anyway.

ASMedia 2-port controllers also work fine in a PCIe x1 slot, so you have 14-16 drives to get you started.

 

The SAS card would be fine with 8 drives in a PCIe 2.0 x4 slot.

 

10G isn't that useful in a streaming / low-user-count space, other than for quick flash-to-flash transfers.

With how Unraid is designed, the read/write speed is limited to that of a single disk, and slower for writes due to parity calcs. Typically you're capped at around Gbit speeds during writes anyway because of the parity calculations. Teamed Gbit is usually fine for reads, which run at normal disk speed. A dual-port Intel LAN card in a PCIe x1 slot would be an option. If slots are at a premium, I'd compromise on network transfer rate, since most of the traffic/downloads etc. are internal to the server.
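If you want to see where you actually land, a crude timed write from the console gives a ballpark figure (writing to a disk share here rather than /mnt/user; conv=fdatasync makes dd wait for the data to actually hit the disk, and delete the test file afterwards):

# Write 4GB of zeros to disk1 and report the effective write speed
dd if=/dev/zero of=/mnt/disk1/testfile bs=1M count=4096 conv=fdatasync
rm /mnt/disk1/testfile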

You can always mount drives using 'unassigned devices' outside of the array via SATA or USB for much quicker transfer.

 

The encoder in the iGPU of old chips isn't great, as it doesn't support many of the modern formats. If you pass the P2000 through to Plex, it can serve double duty as the Unraid display and the Plex encoder/decoder (if you have Plex Pass). Same issue with reverting to the GTX 680: you would need to be picky about formats or do quite a bit of offline CPU transcoding for your library.

 

You don't need a second GPU for a VM unless you either need a display output or intend to 'stream' a game from the server. With the older, core-limited CPU, streaming options will be fairly limited.

 

With the board you have, IO options are limited.

 

Either now or in the future, you could sell off the Quadro and CPU/mainboard/memory and replace them with a B365 board and a quad-core or 6-core CPU... or whatever the modern equivalent is. The modern iGPU would be fine for Plex etc. and you would have reasonable I/O for expansion. The B365 boards expose more PCIe lanes.

Probably only a bit of beer money in it.

 

Good luck

 

 

9 hours ago, Michael_P said:

Yeah, they have gotten expensive, I paid $85 new back in 2018! [...]
See, the RES2SV240 is now more like £250 on eBay from a UK seller, whereas from a US or Chinese seller it's about £80-100. Are they all just selling the exact same thing? Should I consider the cheaper sellers and wait the weeks it might take to get here?

 

Why would you stay away from the Marvell controller on this board? I've been using those ports for years and they've never caused any problems. Why might that be any different with Unraid?

5 hours ago, Decto said:

A couple of thoughts. [...]

Thanks for taking the time to write out a long, thought-out response!

 

You actually bring up some very good points. I've been spending all this time thinking about the SAS expander when, for the time being at least, I probably don't even need it.

 

As I asked above, why would you recommend avoiding the Marvell controller? If I were to follow your suggestion, how would you feel about me using the Marvell SATA ports for my Samsung 860 EVO SSD, which I plan to use as a cache drive, and only using the SAS card / Intel SATA ports for all of the HDDs in the array?

 

I wouldn't expect the 10GbE to give me any benefit for streaming from Plex; it would literally just be to have the fastest transfer speeds possible between my new PC and the Unraid server. As I have 32GB of RAM here and know I won't need all of it for Plex, and I have 128GB of RAM in my new PC, I might do RAM disk to RAM disk transfers, or at the very least go directly to the SSD cache. I may also add a second SSD down the road as an unassigned drive, just to have a faster bit of storage on the server than the main array.

 

Your idea of mounting drives as unassigned devices is actually something I've recently been thinking about as a much quicker way to get the 30TB of content I currently have back into Plex within the new Unraid environment. Whilst I've never used Unraid, cutting and pasting from within the same system must be quicker than transferring it over a network, even at 10GbE.

 

I mean, for the last few years I have been getting by with my GTX 680 and my CPU, so I suppose there's no reason why I couldn't continue with that until I eventually hit the limit of whatever the two can do together within Unraid. But I don't know if I would revert back to the GTX 680 or just stick with the Quadro.

 

Using a VM isn't the utmost priority of this build. But I'm not sure I understand what you mean by "need a display output or intend to 'stream' a game from the server"?

 

The more I think about it, maybe I will use what I've got / planned for Plex for now, then instead of running a VM on this machine I'll purchase a new CPU, mobo and RAM sometime down the line and transfer the server over to that new hardware. I could then use my current hardware either for another Unraid server solely for a VM, or simply for a traditional Windows install, seeing as my current specs still run everything perfectly fine for the most part. That would also allow me to put the 680 back to use. But that's an idea for a distant time; I can't really afford to buy essentially an entire new computer right now.

 

 


The Marvell controllers are known to have some issues, so they're best avoided for the array.

You don't want the cache dropping offline either, though it may be OK to use the Marvell ports for unassigned devices while you are copying data, as dropping a disk there won't take the array offline, trigger rebuilds, etc.

 

 

Transfer into the array is mostly limited to the speed of the slowest disk; there is also a parity calculation overhead.

Transfer speed isn't usually more than Gbit speeds even if you add the disk as unassigned, though it will be as fast as possible and you aren't tying up your local machine for 100 hours! You can always set up without the 10G cards, load the data via unassigned devices, and then add in a 10G card later if you find network speed is an issue. A supported card will just start working on boot if it has the network cable in.
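(If you do add one later, a quick way to confirm the link actually came up at 10G rather than falling back to 1G - eth0 below is just an example, use whatever interface name Unraid assigns:)

# Report the negotiated link speed of the interface
ethtool eth0 | grep -i speed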

 

One common tip is not to enable parity until you have copied all the initial data across. As it's a copy, you have a backup. This will give you full disk speed early on and the slower speed later. 
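For the copy itself, rsync from the unassigned disk into the array is the usual approach; the paths below are only examples, and /mnt/disks/... is where the Unassigned Devices plugin normally mounts things:

# Copy everything from the old disk into a share, preserving attributes
# and showing per-file progress
rsync -avh --progress /mnt/disks/old_disk/ /mnt/user/media/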

 

It depends what you are doing with a VM. You don't need a graphics card at all if you plan to access it via RDP or VNC etc.; a display will be emulated. This falls down if you try to play video in the VM, but for configuration, web browsing etc. it's fine, especially over a LAN.

 

If you want to use the VM as a second PC with a monitor, keyboard and mouse you then need a discrete GPU, though you may be able to use the iGPU.

 

The hybrid is 'streaming'. This uses clients such as Moonlight, Parsec or Steam to encode the desktop view as a 60fps+ video stream while also connecting your keyboard and mouse back to the machine. This would allow you to play a game, watch video etc. on the VM remotely. The GTX 6xx cards are the earliest GPUs supported for this, however later GPUs have better encoders so will give you better stream quality. You can game to an extent on the Quadro, and the P2000 is probably similar to the GTX 680 as it's based on the GTX 1060; however, if you have linked it to Plex (via the Unraid Nvidia build), then it isn't available for passthrough to the VM, hence the need for the second GPU.

 

If your new PC is already running, jump in and give it a go. The 30-day trial can be extended twice by 15 days, and if you are still experimenting, you just need another $5 flash drive to start over. All the files on the array will still be there, and the array can be mounted on a new install, so the time-consuming copy isn't wasted.

 

Good luck

 

