2 unRAID Servers


jbrodriguez

17 posts in this topic

i'm reaching the physical limit of my Norco 4220 unRAID box; i have about 3 slots left to add drives.

 

this has prompted me to consider what to do ... i can upgrade the drives (mostly 2TB to 3TB, provided 5.0 is RTM sometime soon :)

 

or i can build a second server ... what i'd like to do is run unRAID under ESXi in order to consolidate some server duties into one virtualized box

 

so i'd end up with a physical unraid and a virtualized one ...

 

is there anyone with 2 or more unRAIDs? how are you managing this?


I am working on this right now myself.

 

I have 2 unraid servers.

 

One is my primary media server and some storage.

The second one is just a back up of the first one and back ups of some of my other raid arrays.

 

I have now taken the second server offline and am migrating it to a VM on ESXi.

 

After the migration, I am probably going to make the virtual one the primary, since that server will be on 24x7 due to other hosts also running on it.

 

The now-primary unRAID server will become my secondary unRAID server; I will also probably have it on standby or powered off when not in use (most of the time, i hope).

 


You could also virtualize the whole thing - use a port expander to connect your system to drives in an external chassis and run two virtualized instances of unRAID.

 

This build might give you some inspiration: http://lime-technology.com/forum/index.php?topic=13272.0

 

thanks .. i guess i need to learn about port expanders; i didn't know that unRAID supported having the physical disks in another chassis

 

if that's the case ... i "should" be able to have two norcos for storage (hard disks only) and a third machine as the esxi server, running two virtualized unraids ... right ?


Quote (johnm): "I am working on this right now myself. I have 2 unRAID servers ... After the migration, I am probably going to make the virtual one the primary."

johnm, hopefully you can document what parts you used to make it happen.


Quote (jbrodriguez): "if that's the case ... i 'should' be able to have two norcos for storage (hard disks only) and a third machine as the esxi server, running two virtualized unraids ... right?"

 

Yes you can. In theory, you only need 2 Norcos, no need for a "head" server; you can house the motherboard in one of the 4224's... unfortunately, the price of expanders, expander-aware HBAs and cables will cost about the same, if not more than, building 2 servers.

 

When i priced out that option, the HBAs were close to $300 each (need 2), the expander was $350ish (need 2) and the interconnect cable was $80! You also need some other cables, and a special mobo for the second chassis that was actually dirt cheap.

The Intel expander is cheaper but lacks the external connector of the Chenbro one.

 

As you start your shopping list and break it down... it adds up fast.

I went with 2 separate systems and saved some cash (but not electricity).

 

 

Yeah, I was thinking about a post on my experiences with virtual unRAID (with a shopping list).

I just do not have time this week.

 


i've started reading about this ...

 

this ServeTheHome article proved to be an eye-opener (i didn't quite get the follow-up article)

 

so i'm thinking of a setup such as this:

current Norco 4220
+ HP SAS expander ($320)
+ cheap mobo ($40)
+ 500W PSU ($70)

new Norco 4224 ($400)
+ HP SAS expander ($320)
+ cheap mobo ($40)
+ 500W PSU ($70)

"head" server
+ 2 x Supermicro AOC-SASLP-MV8 (as per this HardForum thread, can be used as HBAs) ($220)

 

so it's about $1,480 in gear, not including additional "head" gear (which would run about $600): a grand total of approx $2,000 8)
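As a quick sanity check on those numbers, summing only the line items listed above (the "head" gear stays a rough $600 estimate):

```python
# Sanity-check the cost estimate by summing the listed parts (USD).
norco_4220_upgrade = [320, 40, 70]   # HP SAS expander, cheap mobo, 500W PSU
norco_4224_new = [400, 320, 40, 70]  # chassis, expander, mobo, PSU
head_hbas = [220]                    # 2x Supermicro AOC-SASLP-MV8

gear_total = sum(norco_4220_upgrade) + sum(norco_4224_new) + sum(head_hbas)
print(gear_total)        # 1480
print(gear_total + 600)  # 2080 with the estimated "head" gear
```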

 

what do you think? any pros/cons?

 


The biggest problem I see is that at the moment unRAID only supports up to 22 drives (1 parity, 20 data, 1 cache).  A second DAS server won't help you overcome that limitation.

 

thanks Rajahal, yes, i would need to run 2 virtualized unRAIDs ... i'm still investigating, will post any new findings


Quote (Rajahal): "The biggest problem I see is that at the moment unRAID only supports up to 22 drives (1 parity, 20 data, 1 cache)."

 

...in the sense that the number of drives supported defines the limit on the amount of storage available ... in a virtualized environment this limitation can easily be overcome.

I have successfully tested "drives" of 4TB and 6TB in unRAID 5.7beta (and above)  ;D
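To make the point concrete (the 3TB physical size is an illustrative assumption for the era; the 6TB virtual size is the one tested above):

```python
# unRAID's cap limits the NUMBER of drives (20 data), not their size,
# so larger virtual disks raise total capacity without touching the cap.
data_drives = 20   # unRAID data-drive limit (plus parity and cache)
physical_tb = 3    # typical physical drive size at the time (assumption)
virtual_tb = 6     # virtual drive size tested above

print(data_drives * physical_tb)  # 60 TB usable with physical drives
print(data_drives * virtual_tb)   # 120 TB usable with virtual drives
```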

 

  • 2 weeks later...

In playing around, I have definitely come to the conclusion this would be rather easy with the correct hardware.

 

Run ESXi.

Install 2 unRAID VMs, each with their own flash drive.

Have 1 HBA/expander for each unRAID.

 

The cheapest config would be 2 Norco 4224's:

Norco #1

Power supply

Main ESXi Motherboard/Ram/CPU

Cheap HBA for UR1 (IBM M1015 maybe)

Cheap Expander for UR1 (Intel RES2SV240 maybe)

Cheap HBA with external SAS for UR2 (LSI SAS 9212-4i4e maybe)

22 drives

 

Norco #2

Power Supply

PE-2SD1-R10 for a motherboard. (or one from scrap pile)

HP or Chenbro Expander with external SAS input.

22 drives

 

Extra bits

1 SFF8088 cable to tie the 2 boxes together.

extra NIC cards for bandwidth issues..

 

While this beast would be fun to build...

All the specialty parts would cost more than 2 of RAJ's 22-drive beasts..

But it would be possible...

 

if your pockets are deep, you could run 4x 22-drive unRAID builds off one ESXi box...

 

I wonder how 4x 22 drive parity checks at once would impact that ESXi box.

not to mention the electricity draw.

 


Quote (johnm): "I wonder how 4x 22 drive parity checks at once would impact that ESXi box, not to mention the electricity draw."

 

i'm not too keen on trying to check that out ;)

 

thanks for your Atlas thread ... it's very enlightening

 

i'm still mulling over an AOC-SASLP-MV8 for the HBA, paired with a 1-port SFF-8087 to SFF-8088 adapter (losing an expansion slot in the process) ... but yeah, pretty much everything else you said should be what i'd choose in the end


Ack!

 

Each of your DAS boxes costs: $1,108.29

That's the exact same price as RAJ's 22-drive beast...

And that is only for a 20-drive DAS.

 

for all 24 drives, it shoots up another $200 or so.

You have to drop the RES2SV240 and adapter, then add a Chenbro or HP 468406-B21 and 6 SFF-8087 to SFF-8087 cables (the Intel card comes with 6; the other brands don't?)

 

I am assuming unRAID sees the LSI 9212.

Actually... the 9212-4i4e has 4 internal SATA ports; you could mount the parity/cache drive for the DAS in the main box.

 

you could save a few bucks by dropping the LSI 9212 and getting an $80 M1015 on eBay and a second SFF-8087 to SFF-8088 adapter.

 

I would not use the SASLP-MV8.. i think a parity check on 20 drives off a single 3Gb/s port would hurt.

 

1 unRAID DAS expansion box for ESXi:

Norco 4224 (or 4220, same price)   $399.99
CORSAIR CX500 V2 500W              $60.00
PE-2SD1-R10                        $37.80
Intel RES2SV240 w/6 8087 cables    $275.00
LSI SATA/SAS 9212-4i4e 6Gb/s       $259.00
SFF-8087 to SFF-8088 Adapter       $29.50
SFF-8088 to SFF-8088 cable         $47.00
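Totaling that parts list (same prices as above) reproduces the $1,108.29 figure quoted earlier in the thread:

```python
# Total the DAS parts list above (USD).
parts = {
    "Norco 4224": 399.99,
    "CORSAIR CX500 V2 500W": 60.00,
    "PE-2SD1-R10": 37.80,
    "Intel RES2SV240 w/6 8087 cables": 275.00,
    "LSI SATA/SAS 9212-4i4e 6Gb/s": 259.00,
    "SFF-8087 to SFF-8088 Adapter": 29.50,
    "SFF-8088 to SFF-8088 cable": 47.00,
}
total = round(sum(parts.values()), 2)
print(total)  # 1108.29
```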

 

i'm technogeek enough to build one...

especially since i have two-thirds of the parts

 

  • 2 months later...

Quote (johnm): "Each of your DAS boxes costs $1,108.29 ... for all 24 drives, it shoots up another $200 or so."

Please correct me if i am wrong, but would a cleaner solution be to run 2 unRAID VMs off a Supermicro SuperChassis 847E16-RJBOD1 box instead of adding a new Norco 4224 for each box?

http://www.supermicro.com/products/chassis/4U/847/SC847E16-RJBOD1.cfm

 

If you ran 2 LSI 9212-4i4e on your ESXi box and passed one to each instance of unRAID, could this theoretically work?

 

Tell me if i am completely missing something.

 

Thanks in advance


 

Quote: "would a cleaner solution be to run 2 unRAID VMs off a Supermicro SuperChassis 847E16-RJBOD1 box instead of adding a new Norco 4224 for each box?"

 

You're totally wrong!

Ok, no, you're not wrong. Cleaner in a data center? yes.. absolutely!

In the house? the wife might leave you.. that server would need a forklift to lift it. the front-back configuration is not practical if not racked. it would probably blow your eardrums out.

not to mention it would be over 2K delivered. If you have space in your basement/garage for a rack, go for it.

I guess if you are making a 60-drive array you would have a rack or space.

 

The prices on some of my example parts are much lower now. i have seen that expander for $209 and the $50 cables for about $20 each.

 

 

Quote: "If you ran 2 LSI 9212-4i4e on your ESXi box and passed one to each instance of unRAID, could this theoretically work?"

nope, you missed nothing. Assuming that controller is unRAID compatible, it is a perfect controller.

you could use one for each unRAID guest/VM.

you could do the same with an $80 M1015 eBay special if you convert one internal SFF-8087 port to an external SFF-8088 port with an adapter plate (not as pretty as the 9212-4i4e, but cheaper).

(You could also stack the 4224's and cut the bottom out of the upper one and modify the lid of the lower one.. (i wouldn't do it personally.))

 

 

you could then use a Norco RPC-4220 ($100 less than the RPC-4224) for each unRAID DAS.

you would put your 20 data drives in each DAS box along with a PSU, an expander card and one SFF-8088 to SFF-8087 adapter (or an expander with one built in: HP or Chenbro).

you could then put the parity/cache drive for each guest unRAID in your "head unit" (which could be a cheapo $25 bluelight-special case for all we care).

That would make each DAS about $650ish complete? (my previous price guess did include the LSI card though)

 

$299 Norco RPC-4220
$210 expander
$30 SFF-8087 to SFF-8088 adapter
$80 for cabling, including the external 8088 cable
$50 PSU
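Adding those up (motherboard excluded, since it's free-to-$30 from the scrap pile) lands in the ballpark of the "$650ish" estimate:

```python
# Per-DAS parts from the list above (USD); mobo excluded (scrap pile / ~$30).
das_parts = [299, 210, 30, 80, 50]  # chassis, expander, adapter, cabling, PSU
print(sum(das_parts))  # 669
```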

 

you just need to make a power hack, and find a scrap mobo in the junk pile or buy a PE-2SD1-R10 (free, free, or $30)

 

If Tom really does let you link multiple unRAIDs as one giant unRAID share... you could then have a 120TB server share with 2 unRAID VMs.. 240TB with 4 (I don't know how many servers would be the limit)

  • 2 years later...

Using the Supermicro X9SRL-F you can easily have 6x M1015, one in each of the six PCIe x8 slots, connected to 12 external Norco 4224 boxes, which gives you 12x24 drives. The 7th PCIe slot on the X9SRL-F is a x4 PCIe, and there one can put a 7th M1015, hooking only one of its channels to an expander to power the internal drives in the "head unit". That makes a total of 13*24 = 312 drives.  :P
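The drive-count arithmetic checks out, assuming each M1015 (two SFF-8087 ports) feeds two external boxes:

```python
# Drive-count math for the X9SRL-F sketch above.
external_m1015 = 6
boxes_per_hba = 2    # each M1015 has two SFF-8087 ports (assumption)
drives_per_box = 24  # Norco 4224

external_boxes = external_m1015 * boxes_per_hba        # 12 external chassis
total_drives = (external_boxes + 1) * drives_per_box   # +1 for the head unit
print(external_boxes, total_drives)  # 12 312
```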

 

One can even put a dual Xeon board in the head unit, something like the Intel Server Board S2600IP4

http://ark.intel.com/de/products/56337/Intel-Server-Board-S2600IP4

 

Then, you virtualize all the drives, stripe and mirror them, and present only 20 giant data drives to a single virtualized unRAID instance!!

Using 4TB physical drives, each virtualized disk could be as big as 28TB (striped and mirrored)... and you won't even need a parity drive in unRAID 8)
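A rough sizing check on that idea, assuming a plain 2-way mirror over a stripe (RAID 10), so each virtual disk consumes twice its usable capacity in physical drives:

```python
# Sizing check: 28TB usable virtual disks built from mirrored 4TB stripes.
physical_tb = 4
vdisk_tb = 28
data_vdisks = 20                                # unRAID's data-drive cap

drives_per_vdisk = 2 * vdisk_tb // physical_tb  # mirror doubles the stripe
total_physical = data_vdisks * drives_per_vdisk
usable_tb = data_vdisks * vdisk_tb

print(drives_per_vdisk)  # 14 physical drives per virtual disk
print(total_physical)    # 280 (fits within the 312-drive build above)
print(usable_tb)         # 560 TB usable
```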


Quote: "Then, you virtualize all the drives, stripe and mirror them and present only 20 giant data drives to a single virtualized unRAID instance ... you won't even need a parity drive."

 

yeah, FordPrefect previously mentioned virtualizing the disks to create larger ones ... how does one do that? LVM?

 

you're saying no parity because mirroring supports one failure per mirror (talking in zfs terms)?

the striped-and-mirrored you're referring to is RAID 10? i'm currently running a zfs striped mirror pool (which is similar to RAID 10); the only downside is you lose 50% of the space.

