Yet Another ESXi Upgrade


techie.trumpet


I am considering converting my Unraid server into an ESXi host and moving Unraid into a VM.

 

I have been reading through everything I can find on the forum about converting to ESXi, and it looks like my hardware should convert pretty easily, especially since the SASLP-MV8 pass-through configuration has been sorted out.

 

Today, if I only pass through my two MV8 cards, I will have access to 16 drives in Unraid (14 data drives, parity, and cache). Given that I am only running 7 drives, that leaves me a fair bit of room for expansion.

 

In my Norco case I have 4 of the backplanes connected to the MV8 cards and the 5th connected to the motherboard's SATA ports with a breakout cable.

 

My plan for converting over to ESXi would be to add a couple of 2.5" SSDs or 7200 RPM drives for my datastore. From what I understand, I can mount them above the backplane inside the case, which would leave all of my hot-swap bays free for Unraid.

 

All of this is a long-winded way of getting around to my actual question.

 

My Question:

 

After I hit the 16-drive limit of the two MV8 cards, what direction should I take in order to add the last 4 drives my case is capable of holding?

 

From my reading, I could Raw Device Map (RDM) the drives; however, there are drawbacks in maintaining the RDM mappings.

 

I could replace the MV8 cards with a single 20+ port SAS expander, but that might not be the most cost-efficient method of adding the 4 drives.

 

What I would like to achieve by virtualizing:

 

I recently built an ESXi server that is running all of my primary service VMs. Virtualizing Unraid would free up system resources to allow me to run secondary service VMs (such as a backup domain controller / secondary DNS) and a few VMs that would be used occasionally for development, testing, etc.

 

My Hardware:

Mainboard

SUPERMICRO MBD-X8SIL-F-O

CPU

(Current) i3 540 3.07GHz

(Upgrading to) Intel Xeon X3440 2.53GHz

Memory

Kingston KVR1333D3E9SK2/8G

Kingston KVR1333D3E9SK2/4G

Case

NORCO RPC-4220

Power Supply

Ultra X4 750-Watt Modular Power Supply Bronze 80+

SATA Controller

2x Supermicro AOC-SASLP-MV8

Drives

3x Hitachi Deskstar 2TB 7200 RPM

2x WD WD20EARS 2TB

1x WD20EARX 2TB

1x WD7500AAKS 750GB Cache Drive

Link to comment

You could use this card. It's a PCIe x4 card and will give you four SATA III ports for those last four drives. I can't say for 100% sure it will work, but it's based on the Marvell 88SE9235 chipset, and others on here have had success with cards using that chipset. It's only $49 shipped on Newegg or $41 shipped on Amazon.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816124062

 

http://www.amazon.com/IO-Crest-Controller-Profile-SI-PEX40062/dp/B00AZ9T41M

Link to comment
My Question:

 

After I hit the 16-drive limit of the two MV8 cards, what direction should I take in order to add the last 4 drives my case is capable of holding?

It kind of depends on what you want. One option is cheaper by quite a bit; the other allows more expansion options.

 

From my reading, I could Raw Device Map (RDM) the drives; however, there are drawbacks in maintaining the RDM mappings.
This is a cheap way to get what you want, but like you said, there are drawbacks. It is harder to keep track of which drive is which, since you don't get a serial number in unRAID, and the drives will not spin down or report temperatures, as far as I know. It might also be a little slower than a SAS expander, but I could be wrong.
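For reference, a physical-mode RDM is normally created on the ESXi host with vmkfstools, and the resulting pointer .vmdk is then attached to the unRAID VM. A minimal sketch, assuming the ESXi shell's bundled Python and using hypothetical device/datastore paths:

```python
# Sketch: create a physical-mode RDM pointer for one drive (hypothetical paths).
# Run on the ESXi host; vmkfstools -z = physical (pass-through) RDM, -r = virtual RDM.
import subprocess

device = "/vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_____EXAMPLE"  # hypothetical device ID
rdm_pointer = "/vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk"                  # hypothetical datastore path

subprocess.run(["vmkfstools", "-z", device, rdm_pointer], check=True)
# The pointer .vmdk is then added to the unRAID VM as an existing disk. If drives are
# swapped or the datastore moves, these pointers have to be recreated by hand, which
# is the maintenance drawback mentioned above.
```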

 

I could replace the MV8 cards with a single 20+ port SAS expander, but that might not be the most cost-efficient method of adding the 4 drives.
You will need more than just the SAS expander. I don't believe the SASLP-MV8 supports an expander, so you will need a card that does to plug the expander into, like an IBM M1015. This is the route I took, but for different reasons: I wanted more slots available on my motherboard for other VMs, so I needed a way to get 24 drives out of a single card and motherboard slot.

 

If you are not concerned with conserving motherboard slots, I would use mrow's third-controller method. With that you avoid the pitfalls of RDM without the expense of a SAS expander and a replacement controller. This is what I would have done if I hadn't been concerned with conserving motherboard slots.

Link to comment

At this point, the only additional PCI/PCIe device I would want an open slot for is an additional NIC, to keep one dedicated to Unraid.

 

Seeing that I would primarily be running secondary service VMs, like a secondary domain controller, I imagine I can get by giving Unraid complete access to one NIC and using the second NIC for ESXi and the couple of other VMs.

 

If performance starts to become an issue with my existing ESXi server, I could end up moving a couple of VMs over to the server running the Unraid VM, and at that point I would likely want to add a third NIC to the system.

 

If I opted for the 4-port SATA controller, I would still have a single PCI slot available that I could use for the NIC. From my reading, though, it looks like the maximum throughput of the legacy PCI bus is roughly equal to the maximum throughput of a Gigabit NIC, so creating a virtual switch on that device could introduce a bottleneck.
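The rough numbers behind that concern, using the textbook figures for classic 32-bit/33 MHz PCI and Gigabit Ethernet:

```python
# Back-of-the-envelope comparison: legacy 32-bit/33 MHz PCI vs. a Gigabit NIC.
pci_mb_s = 32 * 33_000_000 / 8 / 1_000_000   # ~132 MB/s theoretical, shared by every device on the bus
gbe_mb_s = 1_000_000_000 / 8 / 1_000_000     # ~125 MB/s line rate, before protocol overhead

print(f"legacy PCI bus: ~{pci_mb_s:.0f} MB/s (shared)")
print(f"Gigabit NIC:    ~{gbe_mb_s:.0f} MB/s")
# A single Gigabit port can already saturate the bus, so stacking vSwitch traffic
# for several VMs onto a PCI NIC leaves essentially no headroom.
```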

 

If I were to go with the IBM M1015, what else would I need to consolidate down to using only the two PCIe x8 slots?

 

I have seen references to M1015 cards selling for $60-80, and I would imagine an expander card runs in the $200-300 range?

Link to comment

At this point, the only additional PCI/PCIe device I would want an open slot for is an additional NIC, to keep one dedicated to Unraid.

Seeing that I would primarily be running secondary service VMs, like a secondary domain controller, I imagine I can get by giving Unraid complete access to one NIC and using the second NIC for ESXi and the couple of other VMs.

If performance starts to become an issue with my existing ESXi server, I could end up moving a couple of VMs over to the server running the Unraid VM, and at that point I would likely want to add a third NIC to the system.

If I opted for the 4-port SATA controller, I would still have a single PCI slot available that I could use for the NIC. From my reading, though, it looks like the maximum throughput of the legacy PCI bus is roughly equal to the maximum throughput of a Gigabit NIC, so creating a virtual switch on that device could introduce a bottleneck.

If I were to go with the IBM M1015, what else would I need to consolidate down to using only the two PCIe x8 slots?

I have seen references to M1015 cards selling for $60-80, and I would imagine an expander card runs in the $200-300 range?

 

 

Finding M1015 cards for that price anymore is getting very hard. They are going for $150+ on eBay now. I found an online store selling them for $115 shipped, so I hopped on it. A SAS expander in that price range is going to give you 16 total drives. This is a popular one: http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

 

It has 6 SAS ports, but keep in mind that will only give you 16 usable drives: two of the SAS ports are needed for I/O from the M1015, and the other four can be used to connect drives. Technically you could use only one of the SAS ports to connect to the M1015 and connect 20 drives to the expander, but you might have performance issues. The upside with that expander is that it doesn't need to be plugged into a slot; you can power it from the PSU. Since your board is mATX, you should have some free bracket slots where you could secure the card and power it from the PSU. You can then use one SAS port on your AOC-SASLP-MV8 to connect the remaining 4 drives. This solution will only use two slots, but it's going to cost you $400+.
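The port math behind those numbers, assuming each SAS wide port on the expander carries 4 lanes (one SFF-8087 cable, good for 4 drives or one uplink):

```python
# Port math for a 24-lane (6 x4 wide-port) SAS expander, as described above.
total_ports = 6      # SFF-8087 wide ports on the expander
lanes_per_port = 4   # drives (or uplink lanes) per wide port

for uplinks in (1, 2):
    drive_lanes = (total_ports - uplinks) * lanes_per_port
    print(f"{uplinks} uplink cable(s) to the M1015 -> {drive_lanes} drive connections")
# 1 uplink  -> 20 drives, but all traffic squeezes through 4 lanes
# 2 uplinks -> 16 drives, with twice the bandwidth back to the HBA
```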

Link to comment

Thanks mrow,

 

That information helps a lot.  For the immediate future I know I can get by with the hardware I have; however, the fact that M1015 cards are selling for so much makes me want to watch for a deal, so I don't get pinched when I reach the point where I need more than 16 drives.

 

The next hard drive I add to the system is going to be a larger parity drive, so I can start adding larger data disks.  Once that is out of the way, I can start watching for a deal on a card like the M1015.

 

I am not sure how much the MV8s fetch used, but I could always sell the second card once I upgrade to a card like the M1015 and add an expander.

Link to comment

Is the card mrow mentioned known to be ESXi compatible? ESXi tends to be pretty picky!

 

I too am using M1015 cards, three of them. With drive sizes going up, I don't see needing more than 16 drives, and two M1015s give me that now. The third is for another NAS product: FreeNAS at the moment, but I am playing with OpenIndiana and Napp-it. Anyway, between the M1015 cards and the motherboard you ought to be set. I'd move the other cards out and skip the expander; the M1015s seem to be better supported and more heavily used.

 

Oh, I also wouldn't sweat a dedicated NIC for unRAID. Frankly, I share mine with all of my VMs and see no hits I can attribute to it. unRAID doesn't max out the port, that's for sure :-(

Link to comment

At this point, the only additional PCI/PCIe device I would want an open slot for is an additional NIC, to keep one dedicated to Unraid.

Seeing that I would primarily be running secondary service VMs, like a secondary domain controller, I imagine I can get by giving Unraid complete access to one NIC and using the second NIC for ESXi and the couple of other VMs.

If performance starts to become an issue with my existing ESXi server, I could end up moving a couple of VMs over to the server running the Unraid VM, and at that point I would likely want to add a third NIC to the system.

If I opted for the 4-port SATA controller, I would still have a single PCI slot available that I could use for the NIC. From my reading, though, it looks like the maximum throughput of the legacy PCI bus is roughly equal to the maximum throughput of a Gigabit NIC, so creating a virtual switch on that device could introduce a bottleneck.

If I were to go with the IBM M1015, what else would I need to consolidate down to using only the two PCIe x8 slots?

I have seen references to M1015 cards selling for $60-80, and I would imagine an expander card runs in the $200-300 range?

Finding M1015 cards for that price anymore is getting very hard. They are going for $150+ on eBay now. I found an online store selling them for $115 shipped, so I hopped on it. A SAS expander in that price range is going to give you 16 total drives. This is a popular one: http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

It has 6 SAS ports, but keep in mind that will only give you 16 usable drives: two of the SAS ports are needed for I/O from the M1015, and the other four can be used to connect drives. Technically you could use only one of the SAS ports to connect to the M1015 and connect 20 drives to the expander, but you might have performance issues. The upside with that expander is that it doesn't need to be plugged into a slot; you can power it from the PSU. Since your board is mATX, you should have some free bracket slots where you could secure the card and power it from the PSU. You can then use one SAS port on your AOC-SASLP-MV8 to connect the remaining 4 drives. This solution will only use two slots, but it's going to cost you $400+.

I have that card with a single cable from the M1015, giving me a total of 24 drive slots.  I am not currently running that many, but I have as a test in the past.  Parity checks start at 100 MB/s and drop to 65+ MB/s at the end with 17 2TB drives all connected to the Intel SAS expander (9 Hitachi Green and 8 WD Red) on my unRAID VM.  I didn't feel that I needed the extra speed, since it was a 20 MB/s maximum difference.  My writes to the array drives are 35 MB/s or more.  Also, I am using the ESXi virtual switch and don't notice any particular network-related slowdowns.  Plus, I can use the 10 Gb/s connections between VMs this way.
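Those parity-check figures line up with a quick estimate, assuming a SAS2 expander (6 Gb/s per lane, roughly 600 MB/s usable after encoding) and a single x4 uplink cable:

```python
# Rough check: can a single x4 SAS2 uplink feed 17 drives during a parity check?
lanes = 4
usable_mb_s_per_lane = 600               # ~6 Gb/s SAS2 lane after 8b/10b encoding
uplink_mb_s = lanes * usable_mb_s_per_lane   # ~2400 MB/s total

drives = 17
per_drive_mb_s = 100                     # reported speed at the start of the parity check
aggregate_mb_s = drives * per_drive_mb_s     # ~1700 MB/s

print(f"uplink capacity : ~{uplink_mb_s} MB/s")
print(f"17-drive demand : ~{aggregate_mb_s} MB/s at the start of the check")
# The single cable still has headroom at these per-drive speeds, which is consistent
# with the modest (~20 MB/s) maximum difference reported above.
```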
Link to comment

You talk about using two drives for your datastore.  Were you planning to set them up in a RAID 1 configuration?  If so, then you'll need to use a RAID card that's supported by ESXi, such as the IBM M1015, and that will impact your decision about how to connect the remaining drives.

Link to comment

You talk about using two drives for your datastore.  Were you planning to set them up in a RAID 1 configuration?

 

If I set up two datastore drives, they will be independent of each other.

 

One drive would be a standard 7200 RPM drive to use with VMs where disk I/O isn't a concern (secondary domain controller / DNS).  I would also use this disk to store any ISOs needed to install VMs, along with some local VM backups.

 

The other drive would be an SSD (with built-in garbage collection).  I doubt I would initially need this drive, as I am already running an ESXi server, and virtualizing my Unraid machine would be about leveraging the fact that my hardware is capable of much more than operating solely as an Unraid server.

Link to comment
