ML350p Gen8 Barebones


M0zza


All,

 

I hope that you can help me...

 

I have acquired a barebones ML350p Gen8 for a pittance. Unfortunately there are no CPUs, RAM, PSUs, or even the HDD cage.

 

I should have done a little more research before acquiring it, but I didn't; I have tried now and am still struggling a little. PSUs can be had from several places online for not too much money, memory is out there in abundance, and I can get some nice multi-core CPUs for not much either. I have even read that the RAID card has an HBA option.

 

But HP can be arseholes when it comes to making money (I had an HP printer in the UK, took it to Oz and couldn't buy the ink there as it was region-specific!). So...

 

1. Cooling for the CPUs... Do I need to use HP-specific cooling, or can I make use of other LGA2011 cooling options (Noctua/be quiet!)?

 

2. Hard Drive cages... These don't seem to be a regular size.

a) For HP-specific spares, does it have to be a Gen8 cage or will others fit (say a Gen6)?

b) Can other, non-HP enclosures be made to fit?

 

3. The fan rack... I have heard that these can be loud. Can the fans be replaced with non-HP fans? Or bypassed altogether for a different cooling option?

 

My needs for the server are not substantial (half a dozen Docker containers, plus a Linux VM and a Win7 VM not running at the same time), and I really want this to replace my current Unraid server (Ryzen 5 1600 in a cheap, nasty case). But I don't want to end up spending an absolute fortune on HP spares to get it up and running.

 

Any help or ideas would be greatly appreciated.

 

M0zz

 

 


So... my server is an ML350p Gen8.

 

CPU: 2x E5-2667 v2 (originally E5-2670)

MEM: 128GB 1600MHz HPE RDIMMs

HD controllers: P420i 2GB FBWC in RAID with 8x 2TB;

LSI SAS 9211-8i HBA with 4x 12TB spinners and 3x 480GB SSDs

NIC: HPE Ethernet 10Gb 2-port 561T (onboard 4-port 1Gb not in use)

 

Now, as to why I started with "So...":

 

Getting hard drive cages was a pain. They are super overpriced and hard to get working together. With Gen8, HP stopped using SAS expander add-on cards and integrated the expanders into the cage backplanes. I, however, found decent deals on cages that did NOT have the expander backplane but the regular backplane. The onboard P420i cannot handle more than one regular-backplane cage... I did not know this. After hours of research I found that the P830 controller can do it with 68-pin "wide" SAS to dual-SAS connectors, but even then you can only run two regular-backplane drive cages in that setup.

If you can get the expander-style cages, you can use the onboard P420i for up to three cages, in either 3x 8-bay 2.5" SFF or 3x 6-bay 3.5" LFF setups. Some places want $500 USD or more for these cages. Gen8 is Gen8... you can, however, use ML350e cages, which have no SAS backplane and are great for HBA add-ons. You have to use the trays as well; I use hot-swap trays for all my drives as they were cheaper than non-hot-swap trays. ML350e cages are cheaper and they fit, but again, no SAS backplane.

 

Everything is "enterprise" and as proprietary as can be. Your add-on cards will make the system react weirdly because they are not HP-branded cards.

 

https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=c04128239 — if the part number isn't listed in here, it's going to make your system funky, but it will still work. Mine just has elevated fan speeds; they won't go under 20%. If I remove all the unbranded add-on cards it'll idle at around 6-10% fan speed.
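If you want to keep an eye on what the fans are actually doing without digging through the iLO web UI, IPMI works for reading sensors on these boxes. This is a rough sketch, assuming IPMI/DCMI over LAN is enabled in iLO; the hostname and credentials below are placeholders:

```
# Read the fan sensor readings through iLO 4's IPMI interface
# (host, user and password are placeholders for your own iLO details)
ipmitool -I lanplus -H ilo-hostname.lan -U admin -P 'password' sdr type Fan

# Or in-band from the host OS, if the IPMI kernel modules are loaded
modprobe ipmi_si ipmi_devintf
ipmitool sdr type Fan
```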

 

Fans are proprietary and MUST be installed, especially if you have dual CPUs. The system will run them at full speed if you take one fan out or if one fails, and at full speed it sounds like a jet.

 

You can probably rig up other coolers, but why? The box has a plastic air diverter that fits it perfectly, and it works great at cooling.

 

I cannot for the life of me get VM GPU passthrough to work properly on this system in Unraid. It forces reboots to reset the cards, and yes, I have read the community threads extensively and watched SpaceInvaderOne's great vids. The same GPU on an Intel B250 chipset system works flawlessly for VMs...
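For anyone trying the same thing, the generic Unraid-side step is binding the card to vfio-pci at boot so the VM can claim it cleanly. This is just the standard recipe, not a fix for the reset problem, and the IDs shown are examples only:

```
# Find the GPU and its HDMI audio function; the vendor:device IDs are in brackets
lspci -nn | grep -i nvidia

# On older Unraid releases, add those IDs to the append line in
# /boot/syslinux/syslinux.cfg (example IDs only, substitute your own):
#   append vfio-pci.ids=10de:1f82,10de:10fa initrd=/bzroot
# Newer releases can do the same binding from Tools > System Devices instead.
```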

 

Right now I run ESXi 6.5 with Unraid as a guest. I use ESXi for all my guests and it is great. I passed through the LSI HBA and one of the 10G NIC ports, and away it goes. No hiccups. In ESXi 6.5 at least, you can boot via USB like normal; just add the USB device. In 6.0 and 5.5 it doesn't seem to work right, but I can get it going; I just have to manually hit Enter in the guest BIOS.
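If it helps anyone, here's a quick sanity check from inside the Unraid guest to confirm the passed-through hardware really landed (names and IDs will differ on your system):

```
# The passed-through HBA should show up as an LSI/Broadcom SAS controller
lspci -nn | grep -iE 'lsi|sas'

# The passed-through 10G port should appear as a normal network interface
ip -br link

# And the drives behind the HBA should enumerate with their real serial numbers
ls -l /dev/disk/by-id/ | grep -v part
```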

 

You're in the enterprise HPE world, where drivers and support are pay-to-play. Gen8 is out of warranty, and unless you have a paid support contract you can't even download drivers without a headache...

 

Processors can be had for 40 bucks. RAM is kind of expensive but can be had for about $20-40 per 16GB. SAS drives can be had for $10-20 per 2TB; I think I paid $150 shipped for 10x 2TB.

 

My system's a beast, but it wasn't cheap. For the same money I'd probably build something else next time and get more. I mean, no native USB 3.0 and DDR3 RAM? It's old. However, I'm happy with my setup. Enterprise-level equipment has so many awesome features and so much pluggability, and once your drive cages are mounted it's a tool-less system.

 

I can answer more questions. I'm sorry if I jumped all over; I just went through this Gen8 setup in the last couple of months. It's a headache I'd recommend skipping if you're impatient.


 

Right, so I think this is going to be a slow-progress build... I am not really in a position to shell out a lot on this gear, so I think I will have to keep an eye out for bargains.

 

The main issues with this build are going to be the CPU heatsinks, which I am not happy to pay 60 quid for, and the drive cage(s). I may try to get hold of a backplane and fabricate my own non-hot-swap cage. In true Top Gear fashion: 'How hard can it be?'

 

You mentioned the ML350e drive cages. Are these not 4-drive cages, and so too small for the ML350p?

 

Cheers,

M0zz


The only reason I run it as a guest on ESXi is because of VM issues, the main one being GPU passthrough.

 

I run ESXi via the onboard SD card and Unraid via a USB add-on controller card. The USB add-on allowed me to pass it through without messing with the built-in USB hosts. I do still run VMs on Unraid; I have a Win10 VM on Unraid and it runs flawlessly. I have recently started playing with SpaceInvaderOne's Macinabox as well on the nested Unraid. I just don't pass through a GPU.

 

I have passed a GPU through ESXi to Unraid for the linuxserver.io Nvidia build of Unraid, and that does work for transcoding. I did not try a double passthrough, though. Now that I think of it, I'll probably try it for fun.


 

I pick up my server on Friday, so hopefully I can get a good look at the beast and work out what's what. I have found a place for cheap CPU kits (and cheap CPUs if heatsinks are included), cheap memory, and cheap PSUs, so I should have it up and running in a short amount of time, just not with Unraid to begin with. I am going to play with ESXi and look into setting up a couple of VMs.

 

Once I acquire a (some) cage(s) I will probably have a lot more questions, but I have several now, more to do with your setup (and ensuring I know which hardware HP is happy with), if you will indulge me... 

 

- Which USB add-on card are you using?

- I have just noticed that in your setup you are mixing SFF and LFF drives. Is this only possible due to the second SAS controller?

- Also, which brand of LSI controller did you get? I thought non-HP cards would make the fans go crazy?

- How are your Unraid LFF HDDs and SSDs installed? I assume you acquired an ML350e drive cage?

 

PS. I read somewhere that DL380 Gen8 drive cages may be similar enough but require some physical alteration; unfortunately I cannot find any information on the dimensions of any drive cages. It is a shame, because the 8-bay SFF DL380 cages can be had for peanuts in comparison to the ML350 ones.

 


So, kinda late, but I've since switched to bare-metal Unraid. The free-version ESXi limitations didn't allow me to utilize my 32 cores... I had ESXi working great, it just limited me to 8 virtual cores for Unraid, which surprisingly wasn't enough for my usage. The ESXi SD card is still there, but I have USB set to boot first.

 

- USB: FebSmart 4-Port PCI Express (PCIe) SuperSpeed USB 3.0 card adapter, 2 dedicated 5Gbps channels, 10Gbps total bandwidth, built-in self-powered technology, no additional power supply needed (FS-2C-U4-Pro).

 

If you are virtualizing and passing through the USB controller, get a USB card that has multiple channels/controllers; the host will see them separately and you can make better use of them. That's not limited to ESXi, it applies to Unraid as well. There are tons of controllers out there.
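On the Linux/Unraid side you can check whether a card's controllers really enumerate separately; each one sitting in its own IOMMU group is what you want for clean passthrough. This is a generic sketch, nothing HP- or FebSmart-specific:

```
#!/bin/bash
# List every PCI device grouped by IOMMU group; a multi-controller USB card
# that can be split between VMs should show each controller in its own group.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo "    $(lspci -nns "${dev##*/}")"
    done
done
```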

 

- I don't mix LFF and SFF. If I said that, I apologize. The only SFF drives I have are the SSDs.

 

- The card I use is the LSI SAS 9211-8i 8-port 6Gb/s internal HBA (IT mode for JBOD/ZFS, IR mode for RAID).
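If you pick one of these up and aren't sure which firmware it's running, the LSI flash utility will tell you. A sketch, assuming you can run sas2flash from the Unraid console or a bootable shell:

```
# List every LSI SAS2 adapter the utility can see
sas2flash -listall

# Detailed info for the first controller; the firmware product line
# indicates whether the card is flashed to IT or IR mode
sas2flash -c 0 -list
```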

 

- My Unraid drives are on the LSI controller. I pass the entire controller through and Unraid sees them just fine. I have this cage: https://www.ebay.com/itm/HP-677433-001-ML350E-NHP-Drive-Cage/264486665179?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m1438.l2649

 

It is the non-hot-plug version, where the far-right slots are just blocked off by a panel you can easily remove with a screw. I used an adapter to mount the SSDs to an LFF tray.

 

ML350 SFF cages go for around 50-75 USD; I see DL380 ones go for about 25 USD. In hindsight, I'd honestly probably go all ML350e non-hot-plug cages, which opens doors for more customization. I use a P830 controller with "wide SAS" connectors that are SUPER hard to find; I finally got them from China. Also, if you ever wanted to add a backplane, you can just adapt the ML350 NHP cage with a couple of screws and mount the backplane; the cage itself is identical.

 

The main reason I used ESXi was my graphics card issues. I haven't been needing that recently, so I run bare-metal Unraid. One GREAT part of Unraid is that it doesn't freak out if you change systems as long as the USB drive stays the same. I can boot into ESXi and virtualize Unraid with no issues; it just takes away some CPU cores and changes the amount of available RAM, and since I have 384GB to distribute it's not a big deal. I will also say that while figuring everything out, being in an ESXi host is much easier to work with. The boot process with 384GB of RAM and multiple drive controllers isn't fast; it feels like 5-10 minutes. Restarting a VM takes seconds.

 

I also have two NVMe drives installed now. The PCIe slots cannot do bifurcation (I think that's how you spell it); the chipset claims the ability, but the motherboard doesn't support it. I had dual NVMe drives on one carrier card and it would only see one, so now I have them in two slots.
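An easy way to see what actually enumerated after shuffling slots, straight from the console (nothing HP-specific here):

```
# Every PCIe NVMe controller the system can see
lspci -nn | grep -i 'non-volatile'

# And the drives/namespaces the kernel created for them (needs nvme-cli)
nvme list
```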

 

I do not have the crazy fan issues. They do hover at 20% all the time, but I'm fine with that. I might pull the "unsupported" cards out one day to see if the speeds go down, but right now I actually use those cards, so it would simply be to test.


Old post, but yes, it works fine: 32 threads, 384GB of RAM, a GTX 1650 for encoding. VMs can be a pain for passthrough. Also, the PCIe slots can't be "shared" (no bifurcation), so multi-NVMe host cards only see one NVMe device; I wanted to run 4x NVMe drives on one slot, but no go, it's too old for that. I have 19 storage devices attached, a mixture of SSDs and SAS HDDs: a ZFS pool, an SSD cache pool, and then the XFS array.
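For anyone curious what a mixed layout like that looks like from the command line, here's a quick health pass; the mount points below are the usual Unraid defaults and may differ on your box:

```
# ZFS pool health and capacity
zpool status -x
zpool list

# Cache pool and array usage (Unraid keeps pools and the array under /mnt)
df -h /mnt/cache /mnt/user
```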

 

I actually retired it a year or so ago. However, my replacement started having lockups I couldn't figure out, so it got brought back to life. It uses a good 300W at idle...
