I have a Norco 4224 with a Supermicro X9SCM-F, Intel Xeon E3-1270, 32GB RAM - running ESXi 7.0. I run a couple of VMs: two Windows machines (one for downloading) and an unRAID server. I have all 24 drive bays populated, and as I need space I have been pulling smaller drives out and replacing them with bigger ones. The 24 drives are attached via SAS 3008 controllers (each supporting 8 drives). I use a couple of the onboard SATA ports for the ESXi OS and datastores.
I recently got an Intel NUC 12 Extreme i9 to run my Plex server - and to handle any 4K transcoding. It came with a 10GbE port. I have a UniFi aggregation switch, and I added an Intel X540 dual-port 10GbE NIC to my server (in the 4th PCIe slot). I am not getting speeds from my VMs any faster than 1Gb/s. I passed through one of the 10GbE ports to my unRAID VM, and I am getting 3-4Gb/s from my Intel NUC to unRAID - but sustained read speeds are barely over 1Gb/s when transferring a file from unRAID to the NUC.
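For what it's worth, here's the back-of-envelope math I've been staring at. Assuming unRAID's usual non-striped array (a file read comes off a single data disk), even a healthy 7200rpm drive tops out well below what the 10GbE link can carry - the numbers below are assumed typical figures, not measurements from my box:

```python
# Rough unit math: why a single-disk read can look like "barely over 1Gb/s"
# on a 10GbE link. Assumes unRAID's non-striped array, where one file is
# read from one data disk (typical 7200rpm HDD: roughly 150-250 MB/s).

def mbytes_to_gbits(mb_per_s: float) -> float:
    """Convert MB/s (decimal) to Gb/s."""
    return mb_per_s * 8 / 1000

hdd_read = 200                    # MB/s, assumed sustained speed of one HDD
tengig_line = 1250                # MB/s, raw line rate of a 10GbE link

print(f"One HDD: ~{mbytes_to_gbits(hdd_read):.1f} Gb/s")   # ~1.6 Gb/s
print(f"10GbE:   ~{mbytes_to_gbits(tengig_line):.1f} Gb/s")
```

So a single-file transfer off one spinning disk would land in the 1-2Gb/s range no matter how fast the network is - which is suspiciously close to what I'm seeing.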
I think the issue is that my motherboard/CPU combo has become outdated and its PCIe slots can't handle the bandwidth.
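To sanity-check that theory, here's a rough PCIe budget. I'm assuming the X9SCM-F's published slot layout (two PCIe 3.0 x8 slots and two PCIe 2.0 x4-in-x8 slots - worth double-checking in the manual), and the usual usable per-lane throughput after encoding overhead (~500 MB/s for PCIe 2.0, ~985 MB/s for PCIe 3.0):

```python
# Sketch of the PCIe slot budget. Slot layout and per-lane figures are
# assumptions based on the X9SCM-F spec sheet, not measured values.

def slot_gbps(lanes: int, mb_per_lane: float) -> float:
    """Approximate usable slot bandwidth in Gb/s."""
    return lanes * mb_per_lane * 8 / 1000

pcie2_x4 = slot_gbps(4, 500)   # PCIe 2.0 x4 slot
pcie3_x8 = slot_gbps(8, 985)   # PCIe 3.0 x8 slot

print(f"PCIe 2.0 x4: ~{pcie2_x4:.0f} Gb/s")   # ~16 Gb/s
print(f"PCIe 3.0 x8: ~{pcie3_x8:.0f} Gb/s")   # ~63 Gb/s
```

If those assumptions hold, even the slower x4 slot has headroom for a single 10GbE port, so I'm not 100% sure the slot alone explains the 1Gb/s ceiling - but the board is old enough that I'd rather upgrade than keep chasing it.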
Does anyone have any good recommendations for an upgrade? I don't need to do any transcoding on the server - at least that's not part of my plan.
I would like a motherboard that supports 10GbE networking, can run ESXi 7.0 with a few VMs, and can sustain the full throughput of my SAS 3008 cards. As for the case, I am open to upgrading the whole thing to something more robust.
Appreciate any suggestions.