Checking viability of new build


newunraiduser5


NOTE - I have posted the final configuration in post 16 of this thread. I will leave my original topic below unedited. However, I will also update this thread once I get the hardware and post any issues as a matter of record. Hopefully this can benefit other users.

 

Hi All,

 

I'm thinking of getting a new server to serve as a NAS, a Windows gaming desktop, a Linux workstation, and a general homelab box.

 

I'm looking at a refurbished Supermicro server with the following specs/configuration:

  • System/Chassis: SuperServer 6028U-TRTP+ - https://www.supermicro.com/products/system/2u/6028/sys-6028u-trtp_.cfm
  • Processors: 2x Xeon E5-2620 v3 2.4GHz 6-core CPUs (PassMark: 15,256)
  • Memory: 32GB (4x 8GB) DDR4 registered memory
  • Storage Controller: LSI 9311-8i (IT mode)
  • Storage: Up to 12 SAS 3.5" drives
  • Graphics Card: MSI GeForce GTX 1650 Low Profile
  • Power Supply: 2x 1000W 80+ Platinum power supplies

 

I have a few questions:

  1. I have 4 x 4TB WD Reds (SATA / 64MB cache / WD40EFRX / 5400 RPM / manufactured in Aug 2018). They are sitting in an Orico 4-bay external USB DAS enclosure. Is it possible to first purchase the server with 8 hard drives (out of the available 12 3.5" bays), set up Unraid, migrate the data from the existing 4-bay DAS onto the 8 drives in the server, then reformat my existing 4 drives and move them into the server for a total of 12 drives?
  2. The storage controller should already be flashed to IT mode. Will that be sufficient? I understand it should then be able to pass the drives to Unraid directly with all the SMART data intact, etc.
  3. While the seller does not have any available, is it possible for me to buy a PCIe-to-NVMe adapter (e.g. https://www.amazon.com/StarTech-com-PEX4M2E1-M-2-Adapter-Profile/dp/B01FU9JS94?th=1), install an M.2 SSD, and use it as the cache drive to host the Windows and Linux VMs?
  4. Last question, on performance. Currently, the Orico 4-bay DAS is configured using Storage Spaces with 1-drive parity. It runs over USB 3.0. It is, however, extremely slow, especially for writes (about 3 MB/s write, 20 MB/s read). I don't know if that is because of Storage Spaces parity or the USB 3.0 link. Do you think my proposed setup will be faster?

 

Thanks in advance for your help.

Edited by newunraiduser5
New build in later post
Link to comment

(1)(2)(3) Yes

 

(3) A SAS2 controller would already be fine and costs a lot less. The backplane should be the direct-attach (non-expander) type, so you need 3 ports for 12 drives; if you only have one 8i card (2 ports), then 4 disks need to connect to the onboard or another controller.

 

(4) The USB enclosure seems to be wrongly plugged into a USB 2.0 port (~25 MB/s max, and half-duplex). During writes, several sessions share that bandwidth, so slow speeds are to be expected. Practical USB 3.0 should be around 300 MB/s, full duplex. You can verify this: over USB 2.0, a parity check won't exceed roughly 20 MB/s ÷ 4 drives ≈ 5 MB/s per drive.
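
A rough sanity check of those figures (the effective link speeds below are assumed round numbers, not measurements):

    # Back-of-the-envelope: per-drive speed during a parity check when all
    # four drives in the DAS share one USB link.
    usb2_effective = 20    # MB/s, realistic shared USB 2.0 throughput (assumed)
    usb3_effective = 300   # MB/s, realistic USB 3.0 throughput (assumed)
    drives = 4
    print(f"USB 2.0: ~{usb2_effective / drives:.0f} MB/s per drive")   # ~5 MB/s
    print(f"USB 3.0: ~{usb3_effective / drives:.0f} MB/s per drive")   # ~75 MB/s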

 

Please don't try to troubleshoot with disks holding live data; it is easy to get into big trouble over USB.

14 hours ago, newunraiduser5 said:

Do you think my proposed setup will be faster?

Yes, but it will be noisy.

Edited by Benson
Link to comment

Is there a reason you are buying a dual-CPU server with a pair of low-spec CPUs?

 

A single E5-2660 v3 is 10C/20T and really cheap these days, around $80-100.

Dual CPUs are often less efficient than a single CPU and compromise the memory and PCIe configuration, etc., plus you get the space and noise of a 2U server.

 

Personally, if you need/want a Xeon, I'd look at the CPU above in a standard ATX workstation board such as the Supermicro X10SRA-F, using a standard tower case and a standard ATX power supply.

 

You can make it nearly silent and still have a good range of expansion options.

 

 

 

Link to comment


 

All, I've got a quote from a cheaper vendor and wanted to check the configuration below. Thanks to @Decto and @Benson for your input.

 

  • Chassis: Supermicro SuperChassis 826BE16-R920LPB, 12x 3.5" + 2x 2.5" bays, expander backplane, dual 920W PSU
  • Motherboard: Supermicro MBD-X10SRH-CF, single socket 2011 v3/v4, 12Gbps HBA
  • Backplane: 6Gbps due to budget
  • CPU: Intel Xeon E5-2678 v3, 12-core, 2.5GHz
  • Memory: 4 x Micron 8GB PC4-2133P-R DDR4 registered ECC 1Rx4 DIMM
  • Cache drive: 1 x Samsung 480GB PM863a MZ7LM480HMHQ enterprise SATA SSD
  • Main pool drives: 8 x Hitachi 4TB 7K4000 7200RPM 3.5" SAS 6Gb/s hard drives, with 4 x WD Reds to be installed after I migrate the data off them
  • Controller: Not specified by the vendor, but they understand it's for an Unraid install and the need for pass-through. I think this may just be the onboard controller.
  • GPU: MSI NVIDIA GeForce GTX 1650 4GT LP OC graphics card, 4GB GDDR5

My questions are:

  1. For the GPU: if it is passed through to a Windows VM but Plex is installed via Docker on the main Unraid install, will there be conflicts? I.e. will Unraid/Plex be unable to use it for GPU transcoding so long as the Windows VM is powered on?
  2. If a passed-through GPU can't be used by the main system, is a workable solution to install 2 instances of Plex, one in Docker and one on the Windows VM, and stream from whichever server has the GPU?
  3. The vendor mentioned that they need to test whether the onboard GPU has to be disabled for the discrete GPU to work, and that there may then be issues with IPMI because it relies on the onboard GPU. In any case, this isn't a huge problem because it will be a home server and I will have physical access, but I wanted to see if anyone else here has had any experience with this.
  4. What do I need to communicate to the vendor about the drive setup? I.e. should the backplane be connected directly to the motherboard's onboard storage controller? I am not sure the motherboard by itself has enough onboard SATA ports, so is there an advantage to connecting to the motherboard directly vs the controller? I am also aware that if anything is connected via a controller, it needs to be passed through to Unraid.
  5. Similarly, for the cache drive, should it go through the controller or connect directly to the onboard motherboard controller?
  6. Is the 480GB SSD sufficient? I plan on having a Windows VM (with a few games, but obviously all media storage will be in the main pool), Docker, and obviously its original purpose as a cache drive.
  7. If I can get a second 480GB SSD, what is the best way to set this up? Just two separate drives, i.e. one for cache and one for VMs/Docker? Or is it better to put them into a pool via BTRFS RAID 0?

 

On 9/8/2020 at 7:19 AM, Decto said:

Is there a reason you are buying a dual-CPU server with a pair of low-spec CPUs?

Yep, moving to a single-CPU setup!

 

On 9/7/2020 at 7:40 AM, Benson said:

Yes, but it will be noisy.

Understand, my SO will thank you for bringing this up!

Edited by newunraiduser5
New build in later post
Link to comment

A single CPU and a standard mainboard give you more options if your needs change.

A 2U server will still be quite noisy; have a search for 'quiet down Supermicro 2U server' for tips.

People seem to change the center fans, fit a 2U active cooler to the CPU etc.

 

1/2) I don't think sharing the card will work well, and having two instances of Plex is likely fraught with issues. The card will only do 2-3 streams due to Nvidia's 'segmentation' unless you get into patching files. For now I would let the CPU handle the transcoding and find your way around Unraid. If you need to, you can add a second GPU later; a cheap 2GB GTX 1050 would be fine for some GPU transcode offload. Keep in mind this chassis only supports half-height cards, though, so you need to buy carefully.

3) You want Unraid to boot to the inbuilt VGA. Unraid presents a text-only console, and for most people the configuration is then done through the web interface from another machine. Booting to the onboard VGA keeps the 1650 free for VMs etc. without needing VGA BIOS passthrough.

4) The motherboard you spec has a built-in LSI SAS controller. I would expect this is connected to the SAS backplane; you need to check with the vendor. It needs to be in pass-through/HBA mode and not RAID mode. This may require it to be reflashed to IT mode.

5) VMs etc. should run off either the cache or a dedicated SSD for performance. Will 480GB give you enough space? You can add more later. I'd connect these to the motherboard directly and find somewhere inside the case to mount them. This avoids the issue that TRIM doesn't work with some LSI cards (not sure about the integrated version) and keeps your main slots free for drives. You could even use a half-height PCIe card that takes M.2 drives. It's easy to upgrade later, so get whatever you need to get started. Not every directory needs caching. E.g. my documents directory has infrequent small writes, so I cache it. When copying lots of media files into the Movies folder, I don't; they go straight to the array.

6) There are options; I like it simple, so I have the VMs on a separate drive. I will probably add a second VM drive later.

 

A couple of other points.

For a homelab I'd be quite happy with used drives; nothing critical that can't be rebuilt or recovered. However, for mass storage of data in an array, setting up with a lot of drives that could be 3-5 years old is asking for failures. Drives wear out, and they don't get an easy life in a data center. If you are going with the old drives, I'd configure double parity, but keep in mind that if you need to rebuild a drive, all drives in the array will need to be 100% reliable for 12+ hours of full-speed reads to rebuild a 4TB drive.
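
Roughly where that 12+ hour figure comes from (the 100 MB/s average sustained read is an assumption; real drives vary and slow down towards the end of the platter):

    # Back-of-the-envelope rebuild time for a 4TB drive.
    drive_bytes = 4e12        # 4 TB
    avg_read = 100e6          # 100 MB/s average over the whole rebuild (assumed)
    hours = drive_bytes / avg_read / 3600
    print(f"~{hours:.0f} hours")   # ~11 hours, so "12+ hours" is a fair estimate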

Regardless of the above, I would suggest you configure now with at least an 8TB parity drive (or drives); then, as your needs grow, you can drop in bigger data drives without having to replace parity. Over the next few years, 4TB drives will likely become more expensive per TB as they are less commonly used.

You also don't need to add all 12 drives to the array at once. You can set up the array with half a dozen drives and then add drives at your leisure. I prefer not to have spindles racking up hours for no good reason, so I aim for 20-30% free space.

You can pre-clear the drives, and then it only takes a couple of minutes to stop the array, add the cleared drive and restart.

 

Link to comment
4 hours ago, newunraiduser5 said:

4. What do I need to communicate to the vendor about the drive setup? I.e. should the backplane be connected directly to the motherboard's onboard storage controller? I am not sure the motherboard by itself has enough onboard SATA ports, so is there an advantage to connecting to the motherboard directly vs the controller?

- The mainboard already has a SAS 3008 controller; there is no reason to add an extra controller.

- A SAS backplane can only connect to a SAS controller; it won't work with any SATA controller.

- Disks on the SAS backplane can't be divided into different sets; all 12 disks will sit under the same controller via one or two cables.

- You quoted SAS disks; be aware that Unraid does not support spinning them down.

4 hours ago, newunraiduser5 said:

I am also aware that if anything is connected via a controller, it needs to be passed through to Unraid

- Just the opposite: if Unraid is the host OS, no pass-through is needed.

 

5 hours ago, newunraiduser5 said:

Understand, my SO will thank you for bringing this up!

If you are concerned about this, you need to check the noise level of the hardware carefully; most people like to change to the SQ-series (super-quiet) PSUs.

Link to comment
  • 2 weeks later...


 

Hi All, I'm close to the final iteration of this server. I asked for a few changes to optimise the array and RAM.

 

Chassis: Supermicro SuperChassis 826BE16-R920LPB, 12x 3.5" + 2x 2.5" bays, expander backplane, dual 920W PSU
Motherboard: Supermicro MBD-X10SRH-CF, single socket 2011 v3/v4, 12Gbps HBA

CPU: Intel Xeon E5-2678 v3, 12-core, 2.5GHz
Additional HDD kit: Supermicro rear-side dual 2.5" HDD kit for 6Gb chassis
RAM: Micron 16GB PC4-17000R 2133MHz ECC DDR4 (edit: there should be 2 of these)
Parity drives: WD Ultrastar 10TB 3.5" SATA HDD
Other array disks: HGST Ultrastar 7K6000 4TB 3.5" SATA 6Gb/s, plus a few existing 4TB WD Reds which I will put into the array once I move the data over
GPU: MSI NVIDIA GeForce GTX 1650 4GT LP OC graphics card, 4GB GDDR5
Other: Supermicro 85cm 1-Port Ext Ipass TO Int Ipass LP

 

Outstanding questions

  1. Getting power to the GPU plus any PCIe cards which may need more power - I understand the GPU chosen is just under 75W, so it should be OK, but if I ever wanted to put something more powerful in there, what kind of connector do I need, and where does the power normally come from in a workstation or PC setup? My understanding from the vendor is that there needs to be a "custom cabling solution", but I am not too sure what that means.
  2. Low-profile PCIe USB card - I have asked this in a separate thread but am including it here for completeness. Does anyone know of or have a working low-profile USB 3.0 or 3.1 card to pass through to a Windows 10 VM?
  3. Power to the PCIe USB card - similar to question 1 above, on quite a few of these cards I can see a separate power connector. Does anyone know what cable/connector this is and where I connect it?

 

Otherwise, does anyone foresee any issues with this build / setup?

 

Thanks all in advance for reading, especially @Benson and @Decto.

Edited by newunraiduser5
New build in later post
Link to comment
11 hours ago, newunraiduser5 said:

what kind of connector do I need

The display card's design doesn't need an extra power input; you can't make it more powerful by connecting more wires to it. If your concern is the power draw burning out the slot, you can ignore it; such cases are seldom reported, and a card seated properly in the slot should be fine.

 

 

11 hours ago, newunraiduser5 said:

Does anyone know what cable/connector this is and where I connect it?

Usually it is a SATA or Molex power plug.

 

 

11 hours ago, newunraiduser5 said:

RAM: Micron 16GB PC4-17000R 2133MHz ECC DDR4

Only one module? Your CPU has 4 memory channels; it would be better to have two or four.

 

 

11 hours ago, newunraiduser5 said:

Other: Supermicro 85cm 1-Port Ext Ipass TO Int Ipass LP

Where does this connect to? It should be SFF-8087 to SFF-8088, but your mainboard uses SFF-8643.

Edited by Benson
Link to comment
31 minutes ago, Benson said:

The display card's design doesn't need an extra power input; you can't make it more powerful by connecting more wires to it. If your concern is the power draw burning out the slot, you can ignore it; such cases are seldom reported, and a card seated properly in the slot should be fine.

 

Yes, I've had a look around and it appears the card does not need extra power. The whole power point was something that the vendor, being reasonably responsible, wanted to flag for me, i.e. the chassis won't be able to support very high-power GPUs in the event that I want to upgrade.

 

32 minutes ago, Benson said:

Usually it is a SATA or Molex power plug.

I have been told there probably won't be any SATA power plugs left, so I will need to use the hack in the following thread to potentially deliver an additional 40 watts (2 x 20 watts) from Molex.

 

34 minutes ago, Benson said:

Only one module? Your CPU has 4 memory channels; it would be better to have two or four.

Yes, there are 2; I forgot to put that in. If I put 4 in, would that be even better as an upgrade later down the track?

 

 

34 minutes ago, Benson said:

Where does this connect to? It should be SFF-8087 to SFF-8088, but your mainboard uses SFF-8643.

Sorry, I don't understand this. Would you explain a bit further? This was included so that I can attach actual JBOD enclosures down the track. I could then have a similar Supermicro 85cm 1-Port Ext Ipass TO Int Ipass LP in the JBOD enclosure and connect a whole heap of drives as well. Is there a misunderstanding here?

Link to comment
18 minutes ago, newunraiduser5 said:

Yes, there are 2; I forgot to put that in. If I put 4 in, would that be even better as an upgrade later down the track?

OK, four would be better, but in most cases you won't notice the benefit. You have 8 slots, so you have plenty of room to expand.

 

18 minutes ago, newunraiduser5 said:

Sorry, I don't understand this. Would you explain a bit further? This was included so that I can attach actual JBOD enclosures down the track. I could then have a similar Supermicro 85cm 1-Port Ext Ipass TO Int Ipass LP in the JBOD enclosure and connect a whole heap of drives as well. Is there a misunderstanding here?

The internal connector is SFF-8087, but I can't find any SFF-8087 socket in your setup (excluding the backplane), so I'm asking where you would connect this. (Doesn't your mainboard use SFF-8643?)

 

The straightforward topology would be two ports from the mainboard: one to the backplane, and the other converted to an external port.

Edited by Benson
Link to comment
12 hours ago, Vr2Io said:

OK, four would be better, but in most cases you won't notice the benefit. You have 8 slots, so you have plenty of room to expand.

Yes, originally the vendor wanted to do 4 x 8GB, but I changed to 2 x 16GB for expandability.

 

12 hours ago, Vr2Io said:

The internal connector is SFF-8087, but I can't find any SFF-8087 socket in your setup (excluding the backplane), so I'm asking where you would connect this. (Doesn't your mainboard use SFF-8643?)

The straightforward topology would be two ports from the mainboard: one to the backplane, and the other converted to an external port.

Thanks, I will check with the vendor. I think they will connect it to the backplane. Does that make sense?

Link to comment
6 hours ago, newunraiduser5 said:

Does that make sense?

No; then all internal and external devices will sit under the backplane expander and share the same uplink bandwidth. The mainboard is 12Gb/s, but the backplane expander is likely only 6Gb/s.

 

A single 6Gb/s uplink (one 4-lane cable) has roughly 2GB/s of bandwidth. With 12 drives accessed at the same time, each disk gets about 167MB/s, so if you connect even more devices, the per-device bandwidth drops even further.
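
A quick check of that arithmetic (rounding the 2400 MB/s raw figure down to ~2GB/s practical is an assumption about protocol overhead):

    # Shared uplink bandwidth behind a 6Gb/s SAS expander backplane.
    lanes = 4                     # one SFF-8087/8643 wide-port cable carries 4 lanes
    lane_gbps = 6                 # SAS2 lane speed
    raw_mb_s = lanes * lane_gbps * 1000 / 10   # 8b/10b encoding: 600 MB/s per lane -> 2400 MB/s
    practical_mb_s = 2000         # rounded-down figure used above, allowing for overhead (assumed)
    disks = 12
    print(f"raw {raw_mb_s:.0f} MB/s, ~{practical_mb_s / disks:.0f} MB/s per disk with {disks} active")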

 

For the external adapter, a detachable type is also recommended, i.e. 8088-to-8087 or 8643-to-8644; it depends on which type you want, and 80xx cables are usually a lot cheaper.

 

[Attached images: examples of external SAS adapter brackets (SFF-8088 to SFF-8087 / SFF-8644 to SFF-8643)]

Edited by Vr2Io
Link to comment
3 hours ago, Vr2Io said:

No; then all internal and external devices will sit under the backplane expander and share the same uplink bandwidth. The mainboard is 12Gb/s, but the backplane expander is likely only 6Gb/s.

A single 6Gb/s uplink (one 4-lane cable) has roughly 2GB/s of bandwidth. With 12 drives accessed at the same time, each disk gets about 167MB/s, so if you connect even more devices, the per-device bandwidth drops even further.

 

Gotcha. The backplane has 4 x 6Gbps lanes, so that will be shared by the 12 hard drives and any further external JBOD. Not absolutely optimal, but the alternative is to use a separate controller, which would increase the cost.

 

Given it's 4 lanes, it should have 4x the bandwidth calculated above. And given that Unraid isn't striping and only spins up drives as needed, this should be sufficient, right?

Link to comment

Hi All, there were a few minor issues with some parts not being available, and I also purchased a few parts separately. This is the final build, which has been shipped to me. I will post any configuration issues I have once it arrives, so that it may hopefully benefit other users. I also have a few additional questions, so I hope someone might be able to help me with any issues that come up.

 

Chassis (Unchanged): Supermicro SuperChassis 826BE16-R920LPB, 12x 3.5" + 2x 2.5" bays, expander backplane, dual 920W PSU
Motherboard (Unchanged): Supermicro MBD-X10SRH-CF, single socket 2011 v3/v4, 12Gbps HBA

Backplane (Changed): Previously the vendor was going to supply a 6Gbps backplane to keep costs down. It was also a Supermicro unit, so in theory it should have worked; however, it didn't work with the onboard controller, so the vendor provided a 12Gbps backplane at the same price.

CPU (Unchanged): Intel Xeon E5-2678 v3, 12-core, 2.5GHz
Additional HDD kit (Unchanged): Supermicro rear-side dual 2.5" HDD kit for 6Gb chassis. This is for my SSD cache drives.
RAM (Unchanged): 2 x Micron 16GB PC4-17000R 2133MHz ECC DDR4
Parity drives (Unchanged): 2 x WD Ultrastar 10TB 3.5" SATA HDD
Other array disks (Unchanged): 4 x HGST Ultrastar 7K6000 4TB 3.5" SATA 6Gb/s

Existing hard drives (Unchanged): 4 x 4TB WD Reds I already have, to be put into the server once I transfer the data across

Cache drives (Changed and purchased separately): 2 x Crucial MX500 500GB
GPU (Unchanged): MSI NVIDIA GeForce GTX 1650 4GT LP OC graphics card, 4GB GDDR5
Other (Changed): The Supermicro 85cm 1-Port Ext Ipass TO Int Ipass LP is no longer available from the vendor. They will provide this part and install instructions if and when I purchase another JBOD enclosure.

USB cards (Changed): I purchased 2 low-profile PCIe USB cards as these were reasonably cheap, hoping that at least one of them works. I will update once I put it together. The parts are the Ableconn PEX-UB132 and the FebSmart FS-U4L-Pro.

 

Questions:

  1. The motherboard has one PCIe x8 slot in an x16 physical slot and two PCIe x8 slots in x8 physical slots. Is it at all possible to put 2 GPUs inside? I ask because I want one available to a Windows VM and one available for a Plex Docker container. I don't see any x8 GPUs, and the motherboard only has one x16-sized slot. What can I do here?
  2. Something I didn't think about was IOMMU. I understand that the motherboard needs to support properly separated IOMMU groups. Do Supermicro motherboards typically support this? Does anyone have any specific experience with the X10SRH-CF? (A sketch of how I plan to check the groups once it boots is below this list.)
  3. I was also told that it's actually difficult to pass NVIDIA GTX GPUs through to a Windows VM. I will watch the video from Spaceinvader One, but I'm just checking if anyone has specific experience with the MSI NVIDIA GeForce GTX 1650 4GT LP OC (4GB GDDR5).
  4. Does anyone have experience passing through either the Ableconn PEX-UB132 or the FebSmart FS-U4L-Pro?
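
For question 2, here is a minimal sketch of how I plan to check the IOMMU groups from the console once the board arrives (as far as I know, Unraid also lists them under Tools > System Devices); it simply walks /sys/kernel/iommu_groups:

    #!/usr/bin/env python3
    # List each IOMMU group and the PCI devices it contains by walking sysfs.
    # Requires VT-d to be enabled in the BIOS.
    import os

    root = "/sys/kernel/iommu_groups"
    if not os.path.isdir(root):
        raise SystemExit("No IOMMU groups found - is VT-d enabled?")

    for group in sorted(os.listdir(root), key=int):
        devices = sorted(os.listdir(os.path.join(root, group, "devices")))
        print(f"IOMMU group {group}: {', '.join(devices)}")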

 

Thanks to everyone for reading / helping.

Link to comment
