
Another Atlas build with option to run a host as firewall/VPN appliance question


nimashet

Recommended Posts

Posted

Hi Guys,

Long-time lurker, finally ready to build my unRAID box.

 

I was originally planning on running straight unRAID 4.7 on Raj's 20-drive rackmount beast build and life would be good. Then I had the misfortune to read Johnm's Atlas build and that had the little hobgoblins in my head jumping up and down.

 

I recently switched jobs which requires me to become very familiar with linux distros, especially those used in firewall appliances like Endian, Pfsense, ClearOS, etc. So I was going to just use an old Dell Optiplex 745 (with an additional NIC that I have to buy) as my test box. After reading Johnm's post, it makes much more sense to try running unRAID in ESXi.

 

I would need the following hosts:

- unRAID (critical)

- linux distro like Endian or ClearOS as a firewall / VPN appliance (critical)

- extra linux hosts for playing around (not critical)

- Windows XP (sand box - not critical)

- Windows PDC (not critical)

 

Here is what I have:

Case - Norco 4220 (PURCHASED)

Mobo - Supermicro X8SIL-F-O (PURCHASED)

CPU - Intel i3-540 (PURCHASED)

PSU - Corsair 650W (PURCHASED)

SATA Cards - SUPERMICRO AOC-SASLP-MV8 x1 (PURCHASED)

Hard drives - mixture of Seagate / WD 5900/5400 rpm drives (already had)

Memory - Kingston 2x4 GB ECC (PURCHASED)

 

Now, I understand that my CPU will not work. I talked NewEgg into swapping the i3-540 for an Intel Xeon.

 

Here are my questions:

 

1. Can I use the 2 NICs on the SuperMicro board to run a linux host as a firewall / VPN appliance (needs two NICs, obviously)? If the answer to this is no, then the rest is moot. I would in all probability just stick with unRAID 4.7 only.

 

2. Should I change the CPU / mobo from i3-540 / X8SIL-F to Xeon x3440 + X8SIL-F or go with E3-1230 + X9SCM-F combo?

 

3. Johnm used a 120 GB SSD for the hosts. I would like to use a 32-64 GB SSD and a 1+ TB notebook 5400 rpm drive as my datastore. Will that work?

 

4. I want to use 19+1+1 (cache if needed) drives for unRAID. Johnm suggested putting all unRAID drives on the SATA cards. If I buy 2x IBM M1015 cards, flash them to LSI IT mode, and put all the drives used by unRAID on them and the MV8 (last 4 drives), will that work? I will put my SSD and the datastore drive on the motherboard SATA ports.
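As a quick sanity check on the port math, here is a small sketch (the controller names and 8-port counts are assumptions based on the cards discussed in this thread):

```python
# Rough port budget for a 21-drive (19 data + parity + cache) unRAID layout.
# Assumes two 8-port IBM M1015 cards plus the 8-port AOC-SASLP-MV8;
# the SSD and datastore drive would go on the motherboard SATA ports.
controllers = {
    "M1015 #1": 8,
    "M1015 #2": 8,
    "AOC-SASLP-MV8": 8,  # only 4 of these ports would actually be used
}
unraid_drives = 19 + 1 + 1  # data + parity + cache

total_ports = sum(controllers.values())
print(f"HBA ports available: {total_ports}, unRAID drives: {unraid_drives}")
print("Fits on the HBAs:", total_ports >= unraid_drives)
```

So the three cards together leave a few ports spare even at the full 21 drives.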

 

5. What would be the most stable beta to go with for the above hardware?

 

Posted

Here are my questions:

 

1. Can I use the 2 NICs on the SuperMicro board to run a linux host as a firewall / VPN appliance (needs two NICs, obviously)? If the answer to this is no, then the rest is moot. I would in all probability just stick with unRAID 4.7 only.

 

see http://lime-technology.com/forum/index.php?topic=15458.0

 

Yes, two of the three can be used by ESX. The VMs can have as many NICs as you wish; they are all virtual. VMware likes to complain about a single NIC, but there is no true requirement for multiple. A single NIC can bring in multiple VLANs to a single vSwitch. Adding a NIC card (single, dual, quad) is easy.
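To illustrate the "single NIC, multiple VLANs" idea, here is a toy Python model of 802.1Q-style tagging: one trunk link carries frames for several VLANs, and the switch sorts them by tag. This is purely illustrative, not how ESXi itself is configured:

```python
# Toy model of 802.1Q VLAN trunking: one physical uplink carries
# tagged frames for several VLANs, and the vSwitch delivers each
# frame to the port group matching its VLAN ID.
from collections import defaultdict

def send_on_trunk(frames):
    """Sort (vlan_id, payload) frames arriving on a single uplink
    into per-VLAN port groups, as a vSwitch would."""
    port_groups = defaultdict(list)
    for vlan_id, payload in frames:
        port_groups[vlan_id].append(payload)
    return dict(port_groups)

trunk = [(10, "LAN packet"), (20, "DMZ packet"), (10, "LAN packet 2")]
print(send_on_trunk(trunk))
# {10: ['LAN packet', 'LAN packet 2'], 20: ['DMZ packet']}
```

The same principle is why a firewall VM can get a "WAN" and a "LAN" interface without a second physical NIC, as long as the upstream switch does the tagging.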

 

2. Should I change the CPU / mobo from i3-540 / X8SIL-F to Xeon x3440 + X8SIL-F or go with E3-1230 + X9SCM-F combo?

 

You just need VT-d working.
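On the Linux side you can at least check the CPU flags before committing to a board. A small sketch (note the `vmx` flag only indicates VT-x; VT-d is a CPU/chipset/BIOS feature, so it must also be enabled in the BIOS):

```python
# Parse /proc/cpuinfo-style text for the Intel VT-x flag ("vmx").
# Caveat: vmx covers VT-x only; VT-d (directed I/O, needed for
# passthrough) depends on the chipset and must be enabled in the BIOS.
def has_vt_x(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "vmx" in line.split()
    return False

sample = "processor : 0\nflags : fpu vme vmx sse2 aes\n"
print(has_vt_x(sample))  # True
```

On a real host you would feed it `open("/proc/cpuinfo").read()` instead of the sample string.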

 

3. Johnm used a 120 GB SSD for the hosts. I would like to use a 32-64 GB SSD and a 1+ TB notebook 5400 rpm drive as my datastore. Will that work?

 

Yes, it will work. I do not use SSDs, as the performance requirement is not that great. The 1TB 5400 rpm will be a bit slow; I use 7200 rpm.

 

4. I want to use 19+1+1 (cache if needed) drives for unRAID. Johnm suggested putting all unRAID drives on the SATA cards. If I buy 2x IBM M1015 cards, flash them to LSI IT mode, and put all the drives used by unRAID on them and the MV8 (last 4 drives), will that work? I will put my SSD and the datastore drive on the motherboard SATA ports.

 

You can use a SAS expander with a single card - again, it comes down to performance; the M1015 + expander gives me all the performance I need.

 

Posted

1.

 

Yes, you can pass NICs straight through to a guest as long as you have VT-d enabled. You can also use virtual NICs just fine for testing or production.

 

2.

 

The X8SIL is a fine board. If you have it and the RAM for it, keep it; just swap the CPU.

 

3.

 

You do not need SSDs at all. I have some very high-IO applications running that would cripple a spinner. For your *nix test stations, a spinner is quite fine, as they won't be doing much read/write.

 

4.

 

If you have the SASLP-MV8, stick with those. They work perfectly fine with a minor hack. Also, the M1015s do not work in 4.7, so that is an issue for your "critical".

 

4.5

 

Yes, the expander would work fine with an M1015. It is reported as working with a SASLP-MV8, but your parity checks would be very slow due to card speed.

I am going to install mine later today. It came a week ago and has just been sitting there in a box. I'll post my findings on the Atlas thread once I have more data on it.

Posted

4. If you have the SASLP-MV8, stick with those. They work perfectly fine with a minor hack. Also, the M1015s do not work in 4.7, so that is an issue for your "critical".

 

Hi Johnm,

What minor hack are you referring to? Are you talking about gfjardim's post for the AOC-SASLP-MV8 that you referenced in your Atlas build?

Posted

4. If you have the SASLP-MV8, stick with those. They work perfectly fine with a minor hack. Also, the M1015s do not work in 4.7, so that is an issue for your "critical".

 

Hi Johnm,

What minor hack are you referring to? Are you talking about gfjardim's post for the AOC-SASLP-MV8 that you referenced in your Atlas build?

 

That's the one. Or you can continue reading the Atlas post; he repeats it there.

Posted

So you guys recommended the SASLP-MV8 to Nimashet because he needs to run 4.7. However, once 5.0 goes gold, would it not make more sense to put in 2x M1015s (16 drives) and use the SASLP-MV8 for the remaining 4 drives, assuming he is going to put in 20 drives? Or should the last 4 drives be attached straight to the motherboard?

Posted

So you guys recommended the SASLP-MV8 to Nimashet because he needs to run 4.7. However, once 5.0 goes gold, would it not make more sense to put in 2x M1015s (16 drives) and use the SASLP-MV8 for the remaining 4 drives, assuming he is going to put in 20 drives? Or should the last 4 drives be attached straight to the motherboard?

 

It is all hit or miss, and depends on what you like and what is on sale at the time.

 

By the time 5 comes out, who knows what the "preferred HBA" will be.

 

I liked the M1015s because they were about half the price of the MV8 with over twice the performance.

Since then, the price of the M1015 has gone up, making them not as great a value, in addition to the difficulties of flashing them and the poor support in betas 13 and 14. I'll assume that beta 15 will fix that. It is still a solid card in my opinion; it is just not as cost-effective these days and can be difficult to flash.

 

The SASLP-MV8 is also a solid card. It is just a bit older and a bit slower with 8 drives on it during parity checks.

But it works right out of the box and works in all supported unRAID versions. This is a much better choice for less technical people, those who do not want to run a beta, or those who just want it to work out of the box.

 

My guess is we will see PCIe 3.0 HBA cards later this year.

 

In addition, there is the SAS2LP-MV8 out there; it is the replacement for the SASLP-MV8. I have not tested this card yet, but it looks like once 5 comes out it might be a good choice. For now I'm indifferent to it due to its limited version support in unRAID. It is a faster card than the original, but driver support is only in 5.0 betas 11 and 13-14; it's possible 12-12a works but could be buggy with this card. Unless you are cutting edge, I'd personally wait and see on this card. I know some people are having success with it, but when it comes to a server, you want solid stability.

 

Mixing and matching does work fine. I was running mixed M1015s and MV8s until betas 12-14 broke one card or the other.

Once that is fixed, it will be fine to do that again. For now, I am running 12 drives through a single M1015.

 

 

Get what you can afford and what works for you.

 

 

 

Archived

This topic is now archived and is closed to further replies.
