ATLAS: My Virtualized unRAID Server



If possible, I would like to get your input on my setup.

 

What Beta said..

I would look into possibly getting a Supermicro X9SCM-IIF (an updated X9SCM-F with 2 ESXi-compatible NICs and a v2 BIOS for Ivy Bridge CPUs). Otherwise you might have problems if you do not have a compatible Sandy Bridge CPU on hand.

 

I would also consider buying 2x 8GB sticks instead of 4x 4GB sticks. Unless you can reuse that RAM, it seems silly to throw it away.

 

EDIT: and he sniped in before me..

I updated the front page yesterday to show that I am recommending the X9SCM-IIF now.

I saw a solution here (http://lime-technology.com/forum/index.php?topic=17936.0) for using the second NIC (I haven't tested it myself yet though).

 

Any other reason to recommend the X9SCM-IIF? It's almost twice the price of the X9SCM-F.

 

 

 

 

Sent from my GT-P7500 using Tapatalk 2


 

I just bought one from http://SuperBiiz.com/ for $189, and also 4x the SuperTalent sticks (for 32GB of RAM total) that John posted about, for $101 each... I live near them so I can go kick their asses if they aren't for real... :)

 

Sent from my iPad using Tapatalk HD


Hi

 

I already ordered the MB and Memory.

 

I can't risk ordering from SuperBiiz since I live in Israel and get products through a third-party courier in the States, who supplies me a unique address and then ships the items to me...

 

This is why I buy from Amazon and pay the extra cost; SuperBiiz scares me :)

 

Anyway, from what I have read, the only differences are actually the NICs and the FW...

 

The FW is solvable; I'll find a chip or something.

The NIC is solvable as well; I saw people who solved that issue.

 

I will have to deal with it of course, but I like it :)

 

As for the memory, I didn't know about the SuperTalent deal and I already have the memory here...

 

I know it's a bummer, but I will try to sell it or something.

 

Anyways, thanks!

 

 


Can't comment on the Tyans. Plenty of us have the Supermicro boards though, and no issues to speak of.

 

The reverse breakout cable is for connecting 4 (usually onboard) SATA ports to a SAS connection (in this case, a backplane of a Norco case, which then runs 4 SATA drives off it).

The 1-to-7 power splitter connects to one Molex connector off the PSU and, from that one connection, drives the 6 backplanes in the Norco case.


I think I'll stick to the SuperMicro like the majority.

 

For the breakout cables, for a system of 20 HDDs, I'll basically need 5 of those, right?

 


Shlomi, that looks great - just note that your x9scm probably won't come with v2.0 BIOS

If his doesn't, I would be willing to trade. I bought a replacement X9SCM 2 weeks ago from Newegg and got a 2.0 BIOS. I was hoping to get 1.0C (what I had on my original bricked board). 1.0C is compatible with an AVerMedia Duet tuner in passthrough; 2.0 is not: the card completely disappears from the ESXi passthrough screen. Put any other PCIe card in and it shows up. I currently have two HVR-2250s, but the Duet is better with multipath signals, so I want one non-HVR-2250 in a VM.

 

Bob, did you try 2.0a? It supposedly addressed some PCI issues that un-broke some of the ESXi passthrough quirks.



I believe that is what is on my replacement board, but I will check tonight because I don't remember for sure; it is 2.0-something anyway. The replacement had the same problems as my bricked board. I upgraded the bricked board from 1.0 to 2.0 and lost the Duet, then tried to downgrade the BIOS back to 1.0C. I ignored a signature warning in the BIOS burn process, and now it will not boot at all: no screens, no POST beeps, it just does nothing. So I bought a replacement, hoping I would get a 1.0 or 1.1 BIOS. Not sure the 1.1 would work either, but I would try to downgrade again, pay attention to any warnings, and abort this time.

In regards to the motherboard TYAN S5510GM3NR,

 

Reading the reviews on Newegg, it seems many people are having issues with this board.

 

What do you think?

 

I would stick with a Supermicro personally. BUT... that is my personal preference from working with them all day at work and at home. A few people on this forum who use the board do like it. It is solid for both unRAID and ESXi.

If you can't get a SM board, the Tyan should work fine. I have not personally used one.

 

The complaints on Newegg about the Tyan are all about things we already know: it needs ECC unbuffered RAM and an SSI power supply.

All the C-chipset server boards (Tyan, Intel, SM, Asus, etc.) are like that.

Those are people who didn't have a clue. They bash the board because they did not do their homework and bought the wrong parts (and they all put their tech level at max.. LoL).

 

Here is a short, unbiased review of the board: http://www.servethehome.com/tyan-s5510-s5510gm3nr-matx-sandy-bridge-xeon-lga1155-c204-motherboard-review/


I have seen some posts where people had issues with M1015s in 2.0 but working again in 2.0a.

 

I think I still have 1.0-something on my factory sample board. I'll have to look.

We upgraded all of our boards at work already.

For the breakout cables, for a system of 20 HDDs, I'll basically need 5 of those, right?

 

No, it's for motherboard SATA to backplane SAS. You'd only want several of these if you had 20 SATA ports on your motherboard :) What's your intent? To build an Atlas-style build like mine or John's? If so, you want controllers and SAS-to-SAS cables, like these:

 

http://www.ebay.com.au/itm/330624847262?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1497.l2649

 

An M1015 or a SASLP will run 8 devices on its own (4 per SAS port). For 20 drives, you'd want 2 controllers plus the 4 motherboard ports, or 1 controller plus a SAS expander (or 3 controllers if you don't want to mess with the motherboard ports). I'd suggest putting aside an hour or two to read through this thread in detail (skip people's Q&A unless it's relevant and you'll get through it pretty quickly).
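The port math above can be sanity-checked with a few lines of Python. This is just my own back-of-envelope helper, not a vendor tool; the 8-drives-per-controller and 4-onboard-ports figures are taken from this post, everything else is illustration:

```python
import math

DRIVES_PER_CONTROLLER = 8   # M1015 or SASLP: 2 SAS ports x 4 drives each
ONBOARD_SATA_PORTS = 4      # the 4 motherboard ports fed via reverse breakout

def controllers_needed(total_drives, use_onboard=True):
    """How many M1015/SASLP cards are needed to reach total_drives."""
    remaining = total_drives - (ONBOARD_SATA_PORTS if use_onboard else 0)
    return max(0, math.ceil(remaining / DRIVES_PER_CONTROLLER))

print(controllers_needed(20))                     # 2 controllers + 4 onboard ports
print(controllers_needed(20, use_onboard=False))  # 3 controllers, onboard untouched
```

Plug in your own drive count before ordering cables; the numbers line up with the "2 controllers plus the 4 motherboard ports, or 3 controllers" options above.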

 

 

 

 

 


I think I'll stick to the SuperMicro like the majority.

 

For the breakout cables, for a system of 20 HDDs, I'll basically need 5 of those, right?

 


 

The 7-to-1 power splitter really is horrid.. there are some alternatives listed in the first post.

 

As for the breakout cable (it's a reverse breakout cable you need), I am actually using none right now. They are used to connect the chassis backplane to the motherboard's SATA connectors.

Also note the backplanes are in groups of 4 and the onboard SATA has 6 ports... You will most likely only need one if building an ESXi box, or 1 or 2 for plain unRAID.

 

Before you buy any, you need to plan your entire build, then get what you need. They tend to be expensive.
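A rough way to see the "1 or 2" figure above (my own sketch; the 6 onboard ports and 4-drive backplanes are from this post, the rest is assumption):

```python
import math

ONBOARD_SATA_PORTS = 6   # onboard SATA on the boards discussed here
LANES_PER_CABLE = 4      # one reverse breakout: 4 SATA ports -> 1 backplane

def reverse_breakout_cables(backplanes):
    """Cables needed to feed N four-drive backplanes from onboard SATA."""
    usable_ports = min(ONBOARD_SATA_PORTS, backplanes * LANES_PER_CABLE)
    return math.ceil(usable_ports / LANES_PER_CABLE)

print(reverse_breakout_cables(1))  # 1 -- typical ESXi build
print(reverse_breakout_cables(2))  # 2 -- second cable only carries 2 of its 4 lanes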



I have a third ESXi server that uses a Tyan S5512GM2NR, so it's similar to the S5510GM3NR.

 

Things I like about my S5512:

- Full ATX board
- Has 5 PCIe slots, including 2 x1 for my TV tuners
- BIOS upgrade didn't break ESXi passthrough status on a PCIe TV tuner card
- ESXi recognizes my AVerMedia Duet TV tuner with the latest Tyan BIOS; the SuperMicro X9SCM-F with 2.0A BIOS does not

 

Things I DON'T like about it:

- I'm used to the SuperMicro IPMI app, not the browser-based Tyan version
- Shutdown from the OS doesn't always work
- The restore-on-AC-power-loss setting doesn't appear to work
- Last State or Always On causes the PC to just reboot when ESXi shutdown is selected
- The only surefire way to shut down the box is with the power switch or IPMI
- Did I mention that it is hard to shut down!!!!

 

You can tell what my biggest complaint is. If that went away, I would probably prefer the Tyan over the SuperMicro. I've thought about getting some PCIe expansion devices to expand the PCIe slots in my X9SCM, but I don't expect that would help with recognizing the tuner card. So I'm stuck with two good but not perfect solutions. Since I don't shut down the PC much, the Tyan has the edge currently.


I got the Tyan S5510GM3NR after John recommended it for an ESXi unRAID server. I've been running it 24/7 for the past eight months. Everything is smooth and sweet. All the NICs are easy for a virtualization newbie like me to set up. No startup or shutdown issues either. Right now I'm running an unRAID VM, a Win7 VM, a WHS VM, and an OpenVPN appliance using the M1015 + expander. It's a beautiful board. :)

 

 


This will look crazy, but I can't get my VM to auto-start :'(

 

I am interested in this as well. I read that auto-start was broken in 5.0 Update 1:

http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html

 

THANK YOU, I may not be crazy! I'm effectively running 5.0 Update 1 (VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso).

 

Update 1 was released in March, more than 3 months ago... Why isn't this fixed yet? It's kind of ridiculous!

 

I can't revert back as explained in the blog post since I have a clean 5.0 Update 1 install. There's a sketchy workaround in the comments, but I would prefer not to mess with that. So I guess I will have to start over from scratch with a 5.0 install (VMware-VMvisor-Installer-5.0.0-469512.x86_64.iso).

 

Should I be careful about anything in doing this? (Except for unplugging all drives when re-installing ESXi.)

 

Looking at the Update 1 release notes, I should be fine downgrading; my OS X VM wasn't working anyway..

 

And then VMware is gonna release Update 2 / a fix in a week haha

 

You were right, the fix has been released: http://blogs.vmware.com/vsphere/2012/07/vsphere-hypervisor-auto-start-bug-fixed.html


Hi all,

 

First I will thank you all for a great thread! :)

 

I'm going to move from a big house to a smaller apartment, and to make things more practical I want to consolidate my servers into one server.

Today unRAID is running on an Intel DG43NB with an Intel E1400 in a Norco 4220. I was running WHS v1 until 4 months ago, when I converted to unRAID. I also have a PowerEdge 2850 running ESXi 4.1 with a few VMs. Both servers are rather noisy (understatement), so they are located in the garage. ;)

 

The new server will run unRAID, pfSense, Ubuntu Server (PMS, SABnzbd, CrashPlan, MySQL, Transmission/uTorrent), and a few other VMs.

When building the new server I'm basically going to use the parts listed in the first post. Are those parts up to date?

My Norco 4220 is old. It does not have the SAS backplane, but the SATA backplane instead. Because of this I can't use the AOC-SASLP-MV8, right? Or are there adapters that can be used? If not, any recommendations for another card?

 

I read somewhere that there are memory limits on the free edition of ESXi 5.0. Is that correct?

 

Jon


Hi

 

The server is UP and running; it turns out the board shipped with BIOS version 2.0.

 

Unraid is running great!

 

I gave it 2GB of RAM.

I would like to create a swap file on an SSD.

Since my unRAID does not have an SSD cache drive (I use WD Blacks), I would like to ask you guys: do you think it will be OK to RDM my SSD (datastore) and put the swap file on it?

The SSD will contain a Win7 development environment.

Might it kill the SSD?


I'm currently creating a VM server based on the setup in your original post, Johnm.

 

I'm a bit confused though...

 

You say that with the Intel RES2SV240 and IBM M1015 you only use one PCIe x8 port, but as far as I can see they would also need a PCIe x4 port?

Also, you say that with the above setup you would receive full mechanical speed on up to 24 drives. Would using more drives (if it were possible through unRAID; the combo does allow you to attach up to 32 drives...) create a bottleneck?

With the combo, you attach 8 drives to the IBM card (on the PCIe x8 bus) and then attach the other 16 drives to the Intel card (on the PCIe x4 bus), right?

I guess I'd have to pass through the 2 cards, and then use the slots on the motherboard for ESXi datastores, right?
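On the bottleneck question, my understanding (hedged; the throughput figures below are rough assumptions, not benchmarks) is that the RES2SV240 expander's slot is essentially power-only, with all drive data flowing through the M1015's single PCIe 2.0 x8 link. A quick sketch of that math:

```python
PCIE2_LANE_MBPS = 500             # rough usable throughput per PCIe 2.0 lane
LINK_MBPS = 8 * PCIE2_LANE_MBPS   # M1015 in a PCIe 2.0 x8 slot: ~4000 MB/s
DRIVE_MBPS = 150                  # generous sequential figure for a mechanical disk

def is_bottlenecked(drives):
    """True if aggregate drive throughput would exceed the x8 link."""
    return drives * DRIVE_MBPS > LINK_MBPS

print(is_bottlenecked(24))  # False: ~3600 MB/s fits under the link
print(is_bottlenecked(32))  # True: ~4800 MB/s exceeds it
```

With these assumed numbers, 24 spinners fit under the x8 link while 32 would not, which lines up with the "full mechanical speed on up to 24 drives" claim.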


Hi

Has anyone tried Kingston KVR13E9/8HM 8GB modules on the X9SCM yet? If they work, the price seems to come down to reasonable levels.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16820239135

 

cheers,

Ketil

Those will work fine. The price has really been up and down recently..

You can also try these: http://www.superbiiz.com/detail.php?name=W16GE1333K

See which is the better deal.

 


hi Johnm,

 

Can you explain the first fan you bought? For what, and which model?

Optional Bits:

Fans:

3x 120mm fan bracket ($20ish shipped)

3x "pressure optimized" Noctua NF-P12-1300 120mm fans I picked up for $15 each, plus $5 shipping for all 3.

2x Arctic Cooling ACF8 Pro PWM 80mm case fans on the rear.

 

I assume the Noctua NF-P12-1300 120mm fans are for the fan plate inside the chassis, and the Arctics for the rear of the chassis.


Hi,

 

Great tutorial, BTW! A friend and I followed your explanations and all went pretty well.

 

We are looking to get a controller card and were very interested in the Supermicro AOC-SAS2LP-MV8 as it supports SATA3. Unfortunately we cannot find a good answer as to whether it works in ESXi 5 and unRAID 5b13 or later.

 

We are also thinking about the IBM M1015, but we can get the Supermicro cheaper than the IBM, and we are not so confident about flashing it to IT mode.

 

Is the Supermicro AOC-SAS2LP-MV8 compatible with ESXi 5 and unRAID 5b13+?

 

Thanks!

 

 

EDIT: We ended up buying an IBM ServeRAID M1015 off eBay... Can't wait to have it installed! Thanks!

