
Planning Future Upgrade | Need advice


Pstark


Currently I have a Prime Z590-V mobo with a 10700K and 64 GB of non-ECC memory.

PCIe: Intel 10Gb NIC, LSI card, 2x 1TB NVMe SSDs

 

Usage: light to moderate VM use with ~20 Docker containers running constantly, plus fairly heavy NAS use

 

Currently I have to choose between using the graphics card (a 3060) and the NIC.

 

I want to be able to run both the graphics card and the NIC, but I’m limited on PCIe lanes.

 

I’m currently researching my options for accomplishing this. I’m also thinking of using ECC memory for better SMB performance. I’m unsure which CPU will have enough PCIe lanes, and which mobo to pair it with. Not looking to buy new; I’d prefer something used but still decently modern.

Link to comment
On 3/3/2023 at 9:34 AM, JorgeB said:

I have one of those boards and am quite happy with it for NAS-only use, but the IOMMU groups are not great, so if you want VMs with device pass-through I would recommend the equivalent Intel platform, like the X11SPL-F, which has much better IOMMU groups.

The equivalent Intel chips seem to be considerably more expensive. Can you expand on your comment about IOMMU groups?

Link to comment
10 hours ago, Pstark said:

The equivalent Intel chips seem to be considerably more expensive.

 

I bought mine used from eBay at around the same price: the X11SPL-F cost me 229€ (about a year and a half ago) and the H11SSL-i rev 2.00 195€ (about a month ago). The CPUs were also about the same, 200€ for a 10-core Xeon and the same for an 8-core Epyc Rome, plus 220€ for 128GB REG ECC DDR4-2400 for the Intel and 400€ for 128GB REG ECC DDR4-3200 for the AMD (the latter I bought new, since I couldn't find any good deals on 3200MT/s RAM).

 

Regarding the IOMMU groups: it's known that AMD is usually not as good as Intel for device pass-through, with both desktop and server hardware, so we generally recommend Intel for that. That doesn't mean AMD won't work for what you want, but it might not, or it may be more difficult to get working. With this particular board model, I meant compared with the equivalent Intel model, where most devices are in their own IOMMU group, including bifurcated NVMe devices installed in the same PCIe slot:

 

[screenshot: Intel X11SPL-F IOMMU groups]

 

For the AMD system, multiple add-on SATA controllers and the onboard USB controller share the same IOMMU group, and the bifurcated NVMe devices are also all in the same group:

[screenshot: AMD H11SSL-i IOMMU groups]

 

Even the dual onboard NICs on the AMD share the same group (together with more stuff); that also doesn't happen with the Intel:

[screenshot: AMD onboard NICs sharing an IOMMU group]

[screenshot: Intel onboard NICs in separate groups]

 

Now, for my use it doesn't really matter: I bought the AMD for the number of PCIe lanes (in that respect it is much superior to the Intel), and my VMs are on the Intel server.
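If anyone wants to check the IOMMU groups on their own hardware before buying, the Linux kernel exposes them under /sys/kernel/iommu_groups. A minimal Python sketch that walks that tree (assuming a Linux host with the IOMMU enabled; the lspci lookup is optional and needs pciutils installed):

```python
#!/usr/bin/env python3
# List IOMMU groups and the PCI devices in each, read from sysfs.
# Assumes Linux with the IOMMU enabled (intel_iommu=on / amd_iommu=on);
# prints nothing if /sys/kernel/iommu_groups is empty.
import os
import subprocess

GROUPS = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS), key=int):
    devices = os.listdir(os.path.join(GROUPS, group, "devices"))
    print(f"IOMMU group {group}:")
    for addr in sorted(devices):
        # Look up a human-readable name with lspci; fall back to the
        # bare PCI address if pciutils isn't installed.
        try:
            name = subprocess.check_output(
                ["lspci", "-s", addr], text=True).strip()
        except (FileNotFoundError, subprocess.CalledProcessError):
            name = addr
        print(f"  {name}")
```

Devices that share a group generally have to be passed through to a VM together, which is why the shared groups in the AMD screenshots above matter for pass-through.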

 

 

 

Link to comment

I recently got the ASUS Pro WS W680-ACE IPMI. The new W680 chipset boards might work well for you without moving to a more server-oriented platform: they support ECC on 13th Gen Core processors, have enough PCIe lanes for most home server needs, and there are options with IPMI, which is a nice luxury for a home server.

 

Link to comment

A few comments:

 

1) Do you really need 16 lanes for your GPU? On a 4090 (which would be the worst-case scenario) there is only a few percent (1-5%) difference between Gen 4 x16 and Gen 3 x16, and Gen 4 x8 is equivalent to Gen 3 x16 (see the bandwidth sketch at the end of this list). See this video for reference:

 

 

2) Which Intel 10G NIC do you have, PCIe Gen 3 or Gen 2? I assume it is physically 8 lanes, but even if it is a single-port PCIe Gen 2 x8 card, you can get the full 10G with only 4 lanes. I have done this and verified it with iperf3.

 

3) The LSI card may also get by just fine with 4 lanes, depending on its PCIe generation, how many disks you have, and your workload.

 

4) Do you want two M.2 drives for redundancy or for capacity? There are several good 4 TB M.2 drives on the market, and even a few 8 TB ones. There are also high-capacity U.2 2.5" drives that can be used with an M.2 adapter.

 

5) There are adapters to go from an M.2 slot to a PCIe slot, and adapters to split a x16 PCIe slot into multiple slots. Your Z590-V motherboard supports bifurcating the x16 slot (according to the manual), so you might already have enough PCIe lanes with the motherboard you have, if you are willing to get creative.
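To put rough numbers on points 1-3, here's a back-of-the-envelope PCIe throughput sketch in Python (the per-lane rates and encoding overheads are the standard published figures; real-world throughput will be somewhat lower due to protocol overhead):

```python
# Rough usable PCIe bandwidth per link, after line-encoding overhead.
# Per-lane raw rate in GT/s and encoding efficiency for each generation:
# Gen 1/2 use 8b/10b (80% efficient), Gen 3+ use 128b/130b (~98.5%).
PCIE_GENS = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def usable_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in Gbit/s for a PCIe link."""
    rate, efficiency = PCIE_GENS[gen]
    return rate * lanes * efficiency

# A 10G NIC in a Gen 2 x4 link: ~16 Gbit/s usable, plenty for 10 Gbit/s.
print(f"Gen2 x4:  {usable_gbps(2, 4):6.1f} Gbit/s")
# Gen 4 x8 and Gen 3 x16 really are equivalent:
print(f"Gen4 x8:  {usable_gbps(4, 8):6.1f} Gbit/s")
print(f"Gen3 x16: {usable_gbps(3, 16):6.1f} Gbit/s")
```

It prints ~16 Gbit/s for a Gen 2 x4 link, which is why a single-port 10G NIC is fine with 4 lanes, and identical ~126 Gbit/s figures for Gen 4 x8 and Gen 3 x16. To confirm real NIC throughput end to end, run `iperf3 -s` on one machine and `iperf3 -c <server-ip>` on another, as mentioned in point 2.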

Link to comment
  • 3 weeks later...
On 3/6/2023 at 2:23 PM, C4RBON said:

5) There are adapters to go from an M.2 slot to a PCIe slot, and adapters to split a x16 PCIe slot into multiple slots. [...] You might already have enough PCIe lanes with the motherboard you have, if you are willing to get creative.

My new system is already up and running, but I can confirm you are correct.

I had to cut the slot open, and surprisingly my HBA works in a x1 slot.

Link to comment

I just know that here at work it's all a Windows shop, but the dev group insisted on an AMD server for the data warehouse group's SQL workload, and they claim AMD is much better for SQL Server databases.

I run a mix of stuff at home, and mostly I depend heavily on my Unraid servers. I tried using AMD for my backup Unraid box and just found it not to be nearly as fast. Now, I admit I'm not running server-grade gear at home; it's a mix of leftovers from upgrades here and there, so I might not be the best person to reply here. But out of all my rebuilds of Unraid boxes from recycled parts, the AMD ones generally did OK running VMs and Docker containers on cache drives, but were terrible at file sharing and spinning disks. It didn't matter whether I used an HBA controller or not: just terrible speeds. Maybe I could have reworked a few things and found a better way to use some of the drives, but without really fine-tuning a lot of things I didn't have much luck getting anywhere close to good speed.

Intel, on the other hand, I consider a better all-around option: I can run Docker containers and shares well. VMs do OK; I feel the AMDs run them a little better, though Intel does do pass-through a bit better for VMs, even if I still feel the performance is not as good. So each has its own strengths. I still need help figuring out how to set up my current hardware to be more effective.

Link to comment
